# Stacked autoencoder in Keras

Now let us build the same autoencoder in Keras.

We clear the graph in the notebook with the following commands, so that we can build a fresh graph that does not hold on to any memory from the previous session or graph:

`tf.reset_default_graph()`
`keras.backend.clear_session()`

1. First, we import the keras libraries and define the hyperparameters and layers (the `mnist` object and the `X_train` array used below come from the data setup of the earlier example; a sketch of that setup is given at the end of this section):

```py
import keras
from keras.layers import Dense
from keras.models import Sequential

learning_rate = 0.001
n_epochs = 20
batch_size = 100
n_batches = int(mnist.train.num_examples / batch_size)
# number of pixels in the MNIST image as number of inputs
n_inputs = 784
n_outputs = n_inputs
# number of hidden layers
n_layers = 2
# neurons in each hidden layer
n_neurons = [512, 256]
# add decoder layers:
n_neurons.extend(list(reversed(n_neurons)))
n_layers = n_layers * 2
```

2. Next, we build a sequential model and add dense layers to it. For a change, we use `relu` activation for the hidden layers and `linear` activation for the final layer:

```py
model = Sequential()

# add input to the first layer
model.add(Dense(units=n_neurons[0], activation='relu',
                input_shape=(n_inputs,)))

for i in range(1, n_layers):
    model.add(Dense(units=n_neurons[i], activation='relu'))

# add the last layer as the output layer
model.add(Dense(units=n_outputs, activation='linear'))
```

3. Now let us display the model summary to see what the model looks like:

```py
model.summary()
```

The model has a total of 1,132,816 parameters in five dense layers:

```py
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 512)               401920
_________________________________________________________________
dense_2 (Dense)              (None, 256)               131328
_________________________________________________________________
dense_3 (Dense)              (None, 256)               65792
_________________________________________________________________
dense_4 (Dense)              (None, 512)               131584
_________________________________________________________________
dense_5 (Dense)              (None, 784)               402192
=================================================================
Total params: 1,132,816
Trainable params: 1,132,816
Non-trainable params: 0
_________________________________________________________________
```

4. Let us compile the model with the mean squared error loss, as in the previous example, and fit it on its own input:

```py
model.compile(loss='mse',
              optimizer=keras.optimizers.Adam(lr=learning_rate),
              metrics=['accuracy'])
model.fit(X_train, X_train, batch_size=batch_size, epochs=n_epochs)
```

In 20 epochs we are able to reach a loss of 0.0046, as compared to the 0.078550 we obtained before:

```py
Epoch 1/20
55000/55000 [==========================] - 18s - loss: 0.0193 - acc: 0.0117
Epoch 2/20
55000/55000 [==========================] - 18s - loss: 0.0087 - acc: 0.0139
...
...
...
Epoch 20/20
55000/55000 [==========================] - 16s - loss: 0.0046 - acc: 0.0171
```

Now let us predict and display the training and test images generated by the model; the first row shows the actual images and the second row shows the generated images (a sketch of this prediction and plotting step is also given at the end of this section). Here are the training set images:

![](https://img.kancloud.cn/f5/3b/f53b793b9ca67d140df99fcd4e2da04f_773x319.png)

Here are the test set images:

![](https://img.kancloud.cn/c7/31/c7311c6eccd925ca6248778467557e7e_770x316.png)

This is very good accuracy, considering that we were able to regenerate the images from just 256 features.
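As noted in step 1, the `mnist` object and the `X_train`/`X_test` arrays are assumed to have been loaded in the earlier example of this chapter. The following is a minimal sketch of such a setup using the TF 1.x tutorial loader; the original example may load and name the data slightly differently:

```py
# minimal sketch of the assumed data setup (TF 1.x tutorial loader)
from tensorflow.examples.tutorials.mnist import input_data

# flattened 28x28 images with pixel values already scaled to [0, 1]
mnist = input_data.read_data_sets('./mnist', one_hot=True)
X_train = mnist.train.images  # shape (55000, 784)
X_test = mnist.test.images    # shape (10000, 784)
```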
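The plotting code for the comparison figures above is not listed in this recipe. The following is a minimal sketch of how the reconstructions could be produced and displayed with `matplotlib`, assuming the flattened 784-pixel inputs used throughout this example; the helper name `show_reconstructions` is illustrative and not part of the original code:

```py
import matplotlib.pyplot as plt

def show_reconstructions(model, images, n=8):
    # predict reconstructed pixels for the first n images, shape (n, 784)
    recon = model.predict(images[:n])
    plt.figure(figsize=(2 * n, 4))
    for i in range(n):
        # top row: actual images
        plt.subplot(2, n, i + 1)
        plt.imshow(images[i].reshape(28, 28), cmap='gray')
        plt.axis('off')
        # bottom row: images generated by the autoencoder
        plt.subplot(2, n, n + i + 1)
        plt.imshow(recon[i].reshape(28, 28), cmap='gray')
        plt.axis('off')
    plt.show()

show_reconstructions(model, X_train)  # training set images
show_reconstructions(model, X_test)   # test set images
```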