TensorFlow2_200729 Series --- 5. Handwritten Digit Recognition Example

I. Summary

One-sentence summary:

Handwritten digit recognition example: the layers of the neural network progressively reduce the dimensionality of the data.
model = keras.Sequential([
    # 28*28 = 784; this layer reduces 784 dimensions to 512
    # corresponds to the formula h1 = relu(w1*x + b1)
    layers.Dense(512, activation='relu'),
    # this layer reduces 512 dimensions to 256
    # corresponds to the formula h2 = relu(w2*h1 + b2)
    layers.Dense(256, activation='relu'),
    # this layer reduces 256 dimensions to 10
    # corresponds to h3 = w3*h2 + b3 (no activation on the output layer)
    layers.Dense(10)])

1. How are the gradients computed in this example?

Instead of computing the gradients by hand (which shows the underlying principle), this example uses TensorFlow's automatic gradient computation (which is simpler); see the manual-style sketch after the code below for the contrast.
# w' = w - lr * (∂(loss)/∂(w))
# gradient descent here computes the gradients automatically
# you only need to tell it which parameters to differentiate: w1, w2, w3, b1, b2, b3
optimizer = optimizers.SGD(learning_rate=0.001)

# one full pass over the dataset is called an epoch
# one training pass over a single batch is called a step
def train_epoch(epoch):
    # Step4.loop
    for step, (x, y) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            # flatten
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28*28))
            # Step1. compute output
            # [b, 784] => [b, 10]
            out = model(x)
            # Step2. compute loss
            loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0]

        # Step3. optimize and update w1, w2, w3, b1, b2, b3
        # w' = w - lr * (∂(loss)/∂(w))
        # the gradients for the descent step are computed automatically here
        # you only need to tell it which parameters to differentiate: w1, w2, w3, b1, b2, b3
        # once the model is built, these parameters already exist, so you don't have to define them yourself
        grads = tape.gradient(loss, model.trainable_variables)
        # w' = w - lr * grad
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        if step % 100 == 0:
            print(epoch, step, 'loss:', loss.numpy())
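For contrast with the automatic version above, here is a minimal sketch of the manual style (illustrative only, not part of the original script): the parameters are created explicitly as tf.Variable objects and listed by hand for the tape. It assumes the flattened batch x, the one-hot y and the optimizer from the snippet above.

# A minimal sketch of the manual alternative (illustrative; assumes x, y, optimizer from above):
# build the parameters yourself and tell the tape exactly which ones to differentiate.
w1 = tf.Variable(tf.random.truncated_normal([784, 512], stddev=0.1))
b1 = tf.Variable(tf.zeros([512]))
w2 = tf.Variable(tf.random.truncated_normal([512, 256], stddev=0.1))
b2 = tf.Variable(tf.zeros([256]))
w3 = tf.Variable(tf.random.truncated_normal([256, 10], stddev=0.1))
b3 = tf.Variable(tf.zeros([10]))

with tf.GradientTape() as tape:
    h1 = tf.nn.relu(x @ w1 + b1)    # [b, 784] => [b, 512]
    h2 = tf.nn.relu(h1 @ w2 + b2)   # [b, 512] => [b, 256]
    out = h2 @ w3 + b3              # [b, 256] => [b, 10]
    loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0]

# the parameter list is written out by hand instead of using model.trainable_variables
grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
optimizer.apply_gradients(zip(grads, [w1, b1, w2, b2, w3, b3]))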

II. Handwritten Digit Recognition Example

Video location of the course corresponding to this post:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, datasets


# load the dataset
# (x, y) is the training data, (x_val, y_val) is the test data
(x, y), (x_val, y_val) = datasets.mnist.load_data() 

# normalize the image data: scale pixel values from 0-255 into [0, 1]
x = tf.convert_to_tensor(x, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(y, dtype=tf.int32)

# convert the labels from plain integer encoding to one-hot encoding
y = tf.one_hot(y, depth=10)
print(x.shape, y.shape)
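# For example (illustrative, not part of the original script):
# tf.one_hot(3, depth=10) => [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]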

# process a batch of images at a time
train_dataset = tf.data.Dataset.from_tensor_slices((x, y))
# 200 images per batch
train_dataset = train_dataset.batch(200)
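# 60000 samples / 200 per batch = 300 steps per epoch,
# which is why each epoch below logs steps 0, 100 and 200.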


model = keras.Sequential([ 
    # 28*28 = 784; this layer reduces 784 dimensions to 512
    # corresponds to the formula h1 = relu(w1*x + b1)
    layers.Dense(512, activation='relu'),
    # this layer reduces 512 dimensions to 256
    # corresponds to the formula h2 = relu(w2*h1 + b2)
    layers.Dense(256, activation='relu'),
    # this layer reduces 256 dimensions to 10
    # corresponds to h3 = w3*h2 + b3 (no activation on the output layer)
    layers.Dense(10)])

# 'Dense' here means dense connection, i.e. a fully connected layer

# layers.Dense(*args, **kwargs)
# Just your regular densely-connected NN layer.
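# For reference (a sketch of what Dense computes, not from the original post):
# out = activation(x @ w + b); a Dense(512) layer applied to a [b, 784] input
# lazily creates w with shape [784, 512] and b with shape [512] on the first
# call, producing a [b, 512] output.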

# w' = w - lr * (∂(loss)/∂(w))
# gradient descent here computes the gradients automatically
# you only need to tell it which parameters to differentiate: w1, w2, w3, b1, b2, b3
optimizer = optimizers.SGD(learning_rate=0.001)

# one full pass over the dataset is called an epoch
# one training pass over a single batch is called a step
def train_epoch(epoch):
    # Step4.loop
    for step, (x, y) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            # flatten
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28*28))
            # Step1. compute output
            # [b, 784] => [b, 10]
            out = model(x)
            # Step2. compute loss
            loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0]
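            # i.e. the squared error summed over the 10 outputs and averaged over the batch
            # (equivalent to tf.reduce_mean(tf.reduce_sum(tf.square(out - y), axis=1)))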

        # Step3. optimize and update w1, w2, w3, b1, b2, b3
        # w' = w - lr * (∂(loss)/∂(w))
        # the gradients for the descent step are computed automatically here
        # you only need to tell it which parameters to differentiate: w1, w2, w3, b1, b2, b3
        # once the model is built, these parameters already exist, so you don't have to define them yourself
        grads = tape.gradient(loss, model.trainable_variables)
        # w' = w - lr * grad
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        if step % 100 == 0:
            print(epoch, step, 'loss:', loss.numpy())



def train():
    # iterate over the dataset 30 times (30 epochs)
    for epoch in range(30):
        train_epoch(epoch)


if __name__ == '__main__':
    train()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 2s 0us/step
(60000, 28, 28) (60000, 10)
0 0 loss: 3.3389544
0 100 loss: 0.99554664
0 200 loss: 0.76981944
1 0 loss: 0.6603369
1 100 loss: 0.7108637
1 200 loss: 0.5877112
2 0 loss: 0.5447052
2 100 loss: 0.6146103
2 200 loss: 0.512538
3 0 loss: 0.49019763
3 100 loss: 0.5615486
3 200 loss: 0.46750215
4 0 loss: 0.45584533
4 100 loss: 0.52499497
4 200 loss: 0.43569344
5 0 loss: 0.4305521
5 100 loss: 0.4971992
5 200 loss: 0.4114254
6 0 loss: 0.41039884
6 100 loss: 0.47500992
6 200 loss: 0.39178276
7 0 loss: 0.39374748
7 100 loss: 0.4563693
7 200 loss: 0.3753728
8 0 loss: 0.37956467
8 100 loss: 0.4402595
8 200 loss: 0.36122167
9 0 loss: 0.36719117
9 100 loss: 0.42604983
9 200 loss: 0.34887043
10 0 loss: 0.35621747
10 100 loss: 0.4134293
10 200 loss: 0.33777386
11 0 loss: 0.34637833
11 100 loss: 0.40202332
11 200 loss: 0.3278202
12 0 loss: 0.33742538
12 100 loss: 0.3917761
12 200 loss: 0.3188835
13 0 loss: 0.3291529
13 100 loss: 0.38248432
13 200 loss: 0.31084573
14 0 loss: 0.32147563
14 100 loss: 0.37405777
14 200 loss: 0.30358365
15 0 loss: 0.31438777
15 100 loss: 0.36635682
15 200 loss: 0.29688656
16 0 loss: 0.30776486
16 100 loss: 0.3592253
16 200 loss: 0.29066932
17 0 loss: 0.30156362
17 100 loss: 0.35257262
17 200 loss: 0.28497207
18 0 loss: 0.295793
18 100 loss: 0.34628797
18 200 loss: 0.27970806
19 0 loss: 0.29037684
19 100 loss: 0.34034425
19 200 loss: 0.27476168
20 0 loss: 0.28527
20 100 loss: 0.33481598
20 200 loss: 0.27008528
21 0 loss: 0.28051612
21 100 loss: 0.3296093
21 200 loss: 0.2656795
22 0 loss: 0.2760051
22 100 loss: 0.32469085
22 200 loss: 0.2615463
23 0 loss: 0.27176425
23 100 loss: 0.32004726
23 200 loss: 0.25765526
24 0 loss: 0.26772726
24 100 loss: 0.315659
24 200 loss: 0.2539712
25 0 loss: 0.26389423
25 100 loss: 0.31144077
25 200 loss: 0.25048226
26 0 loss: 0.26024354
26 100 loss: 0.307407
26 200 loss: 0.24717937
27 0 loss: 0.25676757
27 100 loss: 0.30356416
27 200 loss: 0.24401997
28 0 loss: 0.2534467
28 100 loss: 0.29989263
28 200 loss: 0.24102727
29 0 loss: 0.25027734
29 100 loss: 0.29641452
29 200 loss: 0.23817524
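
The test split (x_val, y_val) is loaded above but never used. As a follow-up, here is a minimal sketch (illustrative, not part of the original script) of how the trained model could be checked for accuracy on it:

# Illustrative only: measure accuracy of the trained model on the test split.
x_test = tf.convert_to_tensor(x_val, dtype=tf.float32) / 255.   # same normalization as training
x_test = tf.reshape(x_test, (-1, 28*28))                        # [b, 28, 28] => [b, 784]
logits = model(x_test)                                          # [b, 10]
pred = tf.argmax(logits, axis=1)                                # predicted digit per image
correct = tf.equal(pred, tf.cast(y_val, tf.int64))              # y_val holds the raw integer labels
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
print('test accuracy:', accuracy.numpy())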