http://blog.csdn.net/xiaojiajia007/article/details/72865764

https://stackoverflow.com/questions/42112260/how-do-i-use-the-tensorboard-callback-of-keras

https://www.tensorflow.org/get_started/summaries_and_tensorboard

Straight to the code:

import keras
from keras.callbacks import EarlyStopping

# Write TensorBoard logs (histograms every epoch, plus the graph) to ./logs
tb_cb = keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1, write_graph=True,
                                    write_images=False, embeddings_freq=0,
                                    embeddings_layer_names=None, embeddings_metadata=None)
# Stop training once val_loss has not improved by at least min_delta for 5 epochs
es_cb = EarlyStopping(monitor='val_loss', min_delta=0.09, patience=5, verbose=0, mode='auto')
cbks = [tb_cb, es_cb]

model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
          callbacks=cbks, validation_data=(x_test, y_test))
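
The snippet above assumes that model, x_train/y_train, x_test/y_test, batch_size and epochs are already defined. For completeness, here is a minimal placeholder setup (MNIST and a tiny dense network, my own choice rather than anything from the original post) under which the snippet runs end to end:

import keras
from keras.datasets import mnist
from keras.utils import to_categorical

# Placeholder data and model, only to make the callback snippet above runnable
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

model = keras.models.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

batch_size, epochs = 128, 20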

 

To view the results, run in a command-line window:  tensorboard --logdir=C:\Users\Alexander\logs  (this is the location pointed to by log_dir)

Then open  http://localhost:6006  in a browser to view the dashboards.

 

Basically, histogram_freq is the most important parameter to tune when using this callback: it sets the interval, in epochs, at which histogram summaries are written, so raising it (e.g. histogram_freq=2) generates fewer files on disk.

Second, I removed write_images=True, since at each histogram_freq epoch it consumed more than 5 GB of disk space to save those images, and saving them took an extremely long time on this convolutional neural network, which has (only) 22 convolutional layers.
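
Putting those two observations together, a lighter-weight configuration might look like the sketch below (the exact histogram_freq value is a matter of taste; 2 is simply the value quoted above):

from keras.callbacks import TensorBoard

# Write histograms only every 2 epochs and skip weight images entirely,
# which keeps the size of the event files on disk manageable
tb_cb = TensorBoard(log_dir='./logs', histogram_freq=2,
                    write_graph=True, write_images=False)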