# TFLearn-based MLP for MNIST classification

Now let's see how to implement the same MLP using TFLearn, another high-level library for TensorFlow:

1. Import the TFLearn library:

```py
import tflearn
```

1. Define the hyperparameters (we assume the dataset has already been loaded into the `X_train`, `Y_train`, `X_test`, and `Y_test` variables; a sketch of one way to load it follows the steps below):

```py
num_layers = 2
num_neurons = []
for i in range(num_layers):
    num_neurons.append(256)

learning_rate = 0.01
n_epochs = 50
batch_size = 100
```

1. Build the input layer, two hidden layers, and the output layer (the same architecture as in the TensorFlow and Keras sections):

```py
# Build deep neural network
input_layer = tflearn.input_data(shape=[None, num_inputs])
dense1 = tflearn.fully_connected(input_layer, num_neurons[0],
                                 activation='relu')
dense2 = tflearn.fully_connected(dense1, num_neurons[1],
                                 activation='relu')
softmax = tflearn.fully_connected(dense2, num_outputs,
                                  activation='softmax')
```

1. Using the network built in the last step (held in the `softmax` variable), define the optimizer, the regression layer, and the MLP model (called DNN in TFLearn):

```py
optimizer = tflearn.SGD(learning_rate=learning_rate)
net = tflearn.regression(softmax,
                         optimizer=optimizer,
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')
model = tflearn.DNN(net)
```

1. Train the model:

```py
model.fit(X_train, Y_train, n_epoch=n_epochs,
          batch_size=batch_size,
          show_metric=True, run_id="dense_model")
```

Once training finishes, we get the following output:

```py
Training Step: 27499  | total loss: 0.11236 | time: 5.853s
| SGD | epoch: 050 | loss: 0.11236 - acc: 0.9687 -- iter: 54900/55000
Training Step: 27500  | total loss: 0.11836 | time: 5.863s
| SGD | epoch: 050 | loss: 0.11836 - acc: 0.9658 -- iter: 55000/55000
--
```

1. Evaluate the model and print the accuracy score:

```py
score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])
```

We get the following output:

```py
Test accuracy: 0.9637
```

With TFLearn we get accuracy comparable to what we obtained with the earlier TensorFlow and Keras implementations.

The complete code for the TFLearn-based MLP for MNIST classification is provided in the notebook `ch-05_MLP`.
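The steps above assume that `X_train`, `Y_train`, `X_test`, and `Y_test` are already in memory and that `num_inputs` and `num_outputs` are defined. Below is a minimal sketch of one way to prepare them, assuming the TensorFlow 1.x bundled MNIST helper; the notebook `ch-05_MLP` may load the data differently:

```py
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST with one-hot labels, as required by categorical_crossentropy
# ('./mnist' is an illustrative download path)
mnist = input_data.read_data_sets('./mnist', one_hot=True)
X_train, Y_train = mnist.train.images, mnist.train.labels
X_test, Y_test = mnist.test.images, mnist.test.labels

num_inputs = 784   # 28 x 28 pixels, flattened into a 784-dimensional vector
num_outputs = 10   # ten digit classes, 0 through 9
```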
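After training, the TFLearn `DNN` wrapper can also be used for inference and for saving or restoring the learned weights. A brief usage sketch (the filename is illustrative):

```py
# Predict class probabilities for the first five test images;
# each row holds ten values, one probability per digit class
probabilities = model.predict(X_test[:5])

# Persist the trained weights and reload them later
model.save('dense_model.tflearn')   # illustrative filename
model.load('dense_model.tflearn')
```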