For installing Keras on Windows 10, see this post: https://blog.csdn.net/u010916338/article/details/83822562

  1. The data is the handwritten-digit dataset (MNIST) bundled with the framework.
    Each pixel stores a grayscale value. Note that a grayscale value is not the same thing as an RGB value: it is a single value between black and white in the range 0-255, and can be read as how dark the pixel is. A grayscale image therefore needs only one channel.
    The training set contains 60,000 images (plus 10,000 test images), each 28x28 pixels.
    The goal is a CNN in Keras that recognizes these handwritten digits.
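    A quick way to confirm the shapes and grayscale range described above (a minimal sketch using the same keras.datasets API as the full script below):

from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)                 # (60000, 28, 28): 60000 training images of 28x28 pixels
print(x_test.shape)                  # (10000, 28, 28): test images
print(x_train.min(), x_train.max())  # 0 255: grayscale values, no separate channel axis yet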

  2. The full code:

# -*- coding: utf-8 -*-
"""
Created on Fri Nov  9 15:50:33 2018
"""
#from __future__ import print_function
import keras
from keras.datasets import mnist    # built-in handwritten-digit dataset
from keras.models import Sequential  # sequential model
from keras.layers import Dense    # fully connected layer
from keras.layers import Dropout  # dropout layer (randomly drops units)
from keras.layers import Flatten  # flatten layer; data must be flattened between the conv layers and the dense layers
from keras.layers import Conv2D   # 2-D convolution layer, mostly used for images
from keras.layers import MaxPooling2D  # max pooling
from keras import backend as k
import time   # used below to time the training run


batch_size = 128  # 128 images per training batch
num_classes = 10  # 10 classes, one per digit 0-9
epochs = 12   # train for 12 epochs

img_rows, img_cols = 28, 28  # image height and width in pixels

(x_train, y_train),(x_test, y_test) = mnist.load_data()

# The backend (e.g. TensorFlow) determines how image data is organized:
# channels first, (1, rows, cols), or channels last, (rows, cols, 1).
#=======================================================================

if k.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)  
    # if reshape() is unfamiliar, see: https://blog.csdn.net/u010916338/article/details/84066369
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
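# With a channels-last backend such as TensorFlow, x_train now has shape
# (60000, 28, 28, 1) and x_test has shape (10000, 28, 28, 1): one grayscale channel.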

#=================================================================================

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
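# Scale the 0-255 grayscale values down to the [0, 1] range.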
x_train /= 255
x_test /= 255

y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
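# Each label is now a one-hot vector of length 10, e.g. the digit 3
# becomes [0, 0, 0, 1, 0, 0, 0, 0, 0, 0].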


model = Sequential()  # sequential model: a linear stack of layers

model.add(Conv2D(32, kernel_size=(3,3), activation='relu', input_shape=input_shape))  # convolution layer, 32 filters, 3x3 kernel
model.add(Conv2D(64, (3,3), activation='relu'))  # convolution layer, 64 filters, 3x3 kernel
model.add(MaxPooling2D(pool_size=(2, 2)))  # max-pooling layer, 2x2 window
model.add(Dropout(0.25))  # randomly drop 25% of the units during training
model.add(Flatten())  # flatten the feature maps into a vector for the dense layers
model.add(Dense(128, activation='relu'))  # fully connected layer, 128 neurons
model.add(Dropout(0.5))  # randomly drop 50% of the units during training
model.add(Dense(num_classes, activation='softmax'))  # output layer: one probability per digit class
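
# Optional: print each layer's output shape and parameter count to check the architecture.
model.summary()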

# Compile: loss function, optimizer, and accuracy as the evaluation metric.
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

# Train the model; verbose=1 prints a progress bar for each epoch.
start = time.time()
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))

# evaluate() returns [loss, accuracy], matching the metric passed to compile() above.
score = model.evaluate(x_test, y_test, verbose=0)

print('Test loss:', score[0])
print('Test accuracy:', score[1])

stop = time.time()
print(str(stop - start) + " seconds")  # total wall-clock time for the run
  3. Training output:
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
60000/60000 [==============================] - 87s 1ms/step - loss: 0.2595 - acc: 0.9187 - val_loss: 0.0619 - val_acc: 0.9802
Epoch 2/12
60000/60000 [==============================] - 82s 1ms/step - loss: 0.0928 - acc: 0.9721 - val_loss: 0.0401 - val_acc: 0.9859
Epoch 3/12
60000/60000 [==============================] - 80s 1ms/step - loss: 0.0692 - acc: 0.9795 - val_loss: 0.0336 - val_acc: 0.9883
Epoch 4/12
60000/60000 [==============================] - 81s 1ms/step - loss: 0.0550 - acc: 0.9840 - val_loss: 0.0325 - val_acc: 0.9886
Epoch 5/12
60000/60000 [==============================] - 82s 1ms/step - loss: 0.0475 - acc: 0.9854 - val_loss: 0.0336 - val_acc: 0.9877
Epoch 6/12
60000/60000 [==============================] - 82s 1ms/step - loss: 0.0434 - acc: 0.9870 - val_loss: 0.0292 - val_acc: 0.9902
Epoch 7/12
60000/60000 [==============================] - 86s 1ms/step - loss: 0.0382 - acc: 0.9889 - val_loss: 0.0272 - val_acc: 0.9906
Epoch 8/12
60000/60000 [==============================] - 86s 1ms/step - loss: 0.0346 - acc: 0.9896 - val_loss: 0.0257 - val_acc: 0.9916
Epoch 9/12
60000/60000 [==============================] - 85s 1ms/step - loss: 0.0310 - acc: 0.9905 - val_loss: 0.0283 - val_acc: 0.9917
Epoch 10/12
60000/60000 [==============================] - 88s 1ms/step - loss: 0.0307 - acc: 0.9905 - val_loss: 0.0277 - val_acc: 0.9906
Epoch 11/12
60000/60000 [==============================] - 86s 1ms/step - loss: 0.0276 - acc: 0.9917 - val_loss: 0.0253 - val_acc: 0.9919
Epoch 12/12
60000/60000 [==============================] - 81s 1ms/step - loss: 0.0272 - acc: 0.9921 - val_loss: 0.0266 - val_acc: 0.9919
Test loss: 0.026634473627798978
Test accuracy: 0.9919
1011.8610150814056 seconds
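
Finally, a minimal sketch (not part of the original script) of how the trained model can be used to recognize a single digit; np.argmax picks the class with the highest softmax probability:

import numpy as np

sample = x_test[:1]                    # one preprocessed test image, shape (1, 28, 28, 1) on a channels-last backend
probs = model.predict(sample)          # shape (1, 10): one softmax probability per digit class
print(np.argmax(probs, axis=1))        # predicted digit
print(np.argmax(y_test[:1], axis=1))   # true label, for comparison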