<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                ThinkChat2.0新版上線,更智能更精彩,支持會話、畫圖、視頻、閱讀、搜索等,送10W Token,即刻開啟你的AI之旅 廣告
# LeNet CNN for MNIST with Keras

Let us revisit the same LeNet architecture with the same dataset to build and train the CNN model in Keras:

1.  Import the required Keras modules:

```py
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Reshape
from keras.optimizers import SGD
```

1.  Define the number of filters for each layer:

```py
n_filters=[32,64]
```

1.  Define the other hyperparameters:

```py
learning_rate = 0.01
n_epochs = 10
batch_size = 100
```

1.  Define the sequential model and add a layer to reshape the input data to the shape `(n_width,n_height,n_depth)` (these dimension and dataset variables carry over from the earlier TensorFlow recipe; see the setup sketch after these steps):

```py
model = Sequential()

model.add(Reshape(target_shape=(n_width,n_height,n_depth),
                  input_shape=(n_inputs,))
         )
```

1.  Add the first convolutional layer with a 4 x 4 kernel filter, `SAME` padding, and `relu` activation:

```py
model.add(Conv2D(filters=n_filters[0], kernel_size=4,
                 padding='SAME', activation='relu')
         )
```

1.  Add a pooling layer with a pool size of 2 x 2 and strides of 2 x 2:

```py
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
```

1.  Add the second convolutional and pooling layers in the same way as the first:

```py
model.add(Conv2D(filters=n_filters[1], kernel_size=4,
                 padding='SAME', activation='relu')
         )
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
```

1.  Add a layer to flatten the output of the second pooling layer, and a fully connected layer of 1,024 neurons to process the flattened output:

```py
model.add(Flatten())
model.add(Dense(units=1024, activation='relu'))
```

1.  Add the final output layer with `softmax` activation:

```py
model.add(Dense(units=n_outputs, activation='softmax'))
```

1.  View the model summary with the following code (a hand check of the `Param #` column follows these steps):

```py
model.summary()
```

The model is described as follows:

```py
Layer (type)                 Output Shape              Param #
=================================================================
reshape_1 (Reshape)          (None, 28, 28, 1)         0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 28, 28, 32)        544
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 14, 14, 64)        32832
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 64)          0
_________________________________________________________________
flatten_1 (Flatten)          (None, 3136)              0
_________________________________________________________________
dense_1 (Dense)              (None, 1024)              3212288
_________________________________________________________________
dense_2 (Dense)              (None, 10)                10250
=================================================================
Total params: 3,255,914
Trainable params: 3,255,914
Non-trainable params: 0
_________________________________________________________________
```
1.  Compile, train, and evaluate the model:

```py
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=learning_rate),
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size,
          epochs=n_epochs)
score = model.evaluate(X_test, Y_test)
print('\nTest loss:', score[0])
print('Test accuracy:', score[1])
```

We get the following output:

```py
Epoch 1/10
55000/55000 [===================] - 267s - loss: 0.8854 - acc: 0.7631
Epoch 2/10
55000/55000 [===================] - 272s - loss: 0.2406 - acc: 0.9272
Epoch 3/10
55000/55000 [===================] - 267s - loss: 0.1712 - acc: 0.9488
Epoch 4/10
55000/55000 [===================] - 295s - loss: 0.1339 - acc: 0.9604
Epoch 5/10
55000/55000 [===================] - 278s - loss: 0.1112 - acc: 0.9667
Epoch 6/10
55000/55000 [===================] - 279s - loss: 0.0957 - acc: 0.9714
Epoch 7/10
55000/55000 [===================] - 316s - loss: 0.0842 - acc: 0.9744
Epoch 8/10
55000/55000 [===================] - 317s - loss: 0.0758 - acc: 0.9773
Epoch 9/10
55000/55000 [===================] - 285s - loss: 0.0693 - acc: 0.9790
Epoch 10/10
55000/55000 [===================] - 217s - loss: 0.0630 - acc: 0.9804

Test loss: 0.0628845927377
Test accuracy: 0.9785
```

The difference in accuracy compared with the TensorFlow model can be attributed to the fact that we used the SGD optimizer here, which does not implement some of the advanced features provided by the `AdamOptimizer` we used for the TensorFlow model.
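If you want to narrow that gap, one natural variation (not part of the original recipe) is to recompile the same model with Keras's `Adam` optimizer instead of `SGD`; a minimal sketch follows:

```py
from keras.optimizers import Adam

# Hypothetical variation: recompile the model built in the steps above
# with Adam instead of SGD; the fit/evaluate calls stay unchanged.
# Adam's Keras default learning rate is 0.001.
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),
              metrics=['accuracy'])
```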
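As a quick sanity check on the model summary shown earlier, every entry in the `Param #` column can be reproduced by hand from the layer shapes:

```py
# Conv2D parameters: (kernel_h * kernel_w * in_channels + 1 bias) * n_filters
print((4 * 4 * 1 + 1) * 32)      # conv2d_1: 544
print((4 * 4 * 32 + 1) * 64)     # conv2d_2: 32832

# Dense parameters: (n_inputs + 1 bias) * n_units
print((7 * 7 * 64 + 1) * 1024)   # dense_1: 3212288
print((1024 + 1) * 10)           # dense_2: 10250

print(544 + 32832 + 3212288 + 10250)  # Total params: 3255914
```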
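Note that the steps above assume the MNIST variables (`n_width`, `n_height`, `n_depth`, `n_inputs`, `n_outputs`, `X_train`, `Y_train`, `X_test`, `Y_test`) defined in the earlier TensorFlow recipe. A minimal sketch of one way to prepare them, assuming the TensorFlow 1.x MNIST loader with one-hot labels (the exact loading code in the earlier recipe may differ), is:

```py
from tensorflow.examples.tutorials.mnist import input_data

# MNIST images are 28 x 28 grayscale digits in 10 classes
n_width, n_height, n_depth = 28, 28, 1
n_inputs = n_width * n_height * n_depth   # 784 flattened pixels
n_outputs = 10

# one_hot=True produces label vectors that match categorical_crossentropy;
# mnist.train holds the 55,000 examples seen in the training log above
mnist = input_data.read_data_sets('mnist', one_hot=True)
X_train, Y_train = mnist.train.images, mnist.train.labels
X_test, Y_test = mnist.test.images, mnist.test.labels
```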
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看