<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                ThinkChat2.0新版上線,更智能更精彩,支持會話、畫圖、視頻、閱讀、搜索等,送10W Token,即刻開啟你的AI之旅 廣告
# Stacked Autoencoder in TensorFlow

The steps to build a stacked autoencoder model in TensorFlow are as follows:

1. First, define the hyperparameters:

```py
learning_rate = 0.001
n_epochs = 20
batch_size = 100
n_batches = int(mnist.train.num_examples / batch_size)
```

1. Define the number of inputs (that is, features) and outputs (that is, targets). The number of outputs is the same as the number of inputs:

```py
# number of pixels in the MNIST image as number of inputs
n_inputs = 784
n_outputs = n_inputs
```

1. Define the placeholders for the input and output images:

```py
x = tf.placeholder(dtype=tf.float32, name="x", shape=[None, n_inputs])
y = tf.placeholder(dtype=tf.float32, name="y", shape=[None, n_outputs])
```

1. Add the encoder and decoder layers with the number of neurons set to `[512,256,256,512]`:

```py
# number of hidden layers
n_layers = 2
# neurons in each hidden layer
n_neurons = [512, 256]
# add the decoder layers as a mirror image of the encoder layers
n_neurons.extend(list(reversed(n_neurons)))
n_layers = n_layers * 2
```

1. Define the `w` and `b` parameters:

```py
w = []
b = []

for i in range(n_layers):
    # weights connecting the input (for i == 0) or the previous layer to layer i
    w.append(tf.Variable(
        tf.random_normal([n_inputs if i == 0 else n_neurons[i - 1],
                          n_neurons[i]]),
        name="w_{0:04d}".format(i)))
    b.append(tf.Variable(tf.zeros([n_neurons[i]]),
                         name="b_{0:04d}".format(i)))

# output layer parameters
w.append(tf.Variable(
    tf.random_normal([n_neurons[n_layers - 1] if n_layers > 0 else n_inputs,
                      n_outputs]),
    name="w_out"))
b.append(tf.Variable(tf.zeros([n_outputs]), name="b_out"))
```

1. Build the network, using the sigmoid activation function for each layer:

```py
# x is the input layer
layer = x
# add the hidden layers
for i in range(n_layers):
    layer = tf.nn.sigmoid(tf.matmul(layer, w[i]) + b[i])
# add the output layer
layer = tf.nn.sigmoid(tf.matmul(layer, w[n_layers]) + b[n_layers])
model = layer
```

1. Define the `loss` function using `mean_squared_error` and the `optimizer` function using `AdamOptimizer`:

```py
mse = tf.losses.mean_squared_error
loss = mse(predictions=model, labels=y)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
optimizer = optimizer.minimize(loss)
```

1. Train the model and predict the images for the `train` and `test` sets:

```py
with tf.Session() as tfs:
    tf.global_variables_initializer().run()
    for epoch in range(n_epochs):
        epoch_loss = 0.0
        for batch in range(n_batches):
            X_batch, _ = mnist.train.next_batch(batch_size)
            # the autoencoder learns to reconstruct its own input
            feed_dict = {x: X_batch, y: X_batch}
            _, batch_loss = tfs.run([optimizer, loss], feed_dict)
            epoch_loss += batch_loss
        if (epoch % 10 == 9) or (epoch == 0):
            average_loss = epoch_loss / n_batches
            print('epoch: {0:04d} loss = {1:0.6f}'
                  .format(epoch, average_loss))
    # predict images using the trained autoencoder model
    Y_train_pred = tfs.run(model, feed_dict={x: train_images})
    Y_test_pred = tfs.run(model, feed_dict={x: test_images})
```

1. We see the following output, as the loss decreases significantly after 20 epochs:

```py
epoch: 0000 loss = 0.156696
epoch: 0009 loss = 0.091367
epoch: 0019 loss = 0.078550
```

1. Now that the model is trained, let's display the images predicted by it. We wrote a helper function `display_images` to help us display the images:

```py
import matplotlib.pyplot as plt

# Function to display the images and labels
# images should be in NHW or NHWC format
def display_images(images, labels, count=0, one_hot=False):
    # if the number of images to display is not provided,
    # then display all the images
    if count == 0:
        count = images.shape[0]
    for i in range(count):
        plt.subplot(4, 4, i + 1)
        plt.title(labels[i])
        plt.imshow(images[i])
        plt.axis('off')
    plt.tight_layout()
    plt.show()
```

Using this function, we first display four images from the training set along with the images predicted by the autoencoder (a usage sketch follows at the end of this section).

The first row shows the actual images, and the second row shows the generated images:

![](https://img.kancloud.cn/f7/32/f7329a39d7fa92afd9a980b9da42f097_783x327.png)

The generated images have a little noise, which can be removed with more training and hyperparameter tuning. Predicting training-set images is not magical, since we trained the autoencoder on those very images and it therefore knows them. Let's look at the results of predicting the test-set images.

The first row shows the actual images, and the second row shows the generated images:

![](https://img.kancloud.cn/77/60/77604a0678372a9ec33c97a95bc6be16_782x324.png)

Wow! The trained autoencoder is able to generate the same digits from only the 256 features it learned from the 784 input pixels. The noise in the generated images can be improved with hyperparameter tuning and more training.
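The recipe above relies on `mnist`, `train_images`, and `test_images` prepared earlier in the chapter, and it does not show the call that produces the two-row comparison figures. Below is a minimal sketch of that glue code. It assumes the TF 1.x `input_data` helper used elsewhere in the book, and the names `n_images`, `train_labels`, and `test_labels` are illustrative assumptions rather than part of the original recipe:

```py
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

# load MNIST with integer labels, as in the earlier chapters (assumed path)
mnist = input_data.read_data_sets('./mnist', one_hot=False)

# assumed names: take four flattened 784-pixel images from each set
n_images = 4
train_images = mnist.train.images[0:n_images]
train_labels = mnist.train.labels[0:n_images]
test_images = mnist.test.images[0:n_images]
test_labels = mnist.test.labels[0:n_images]

# ... build and train the autoencoder as in the steps above, which
#     produces Y_train_pred and Y_test_pred inside the session ...

# stack the originals on top of their reconstructions and reshape the flat
# 784-pixel rows to 28x28, so that display_images renders the actual images
# in the first row and the generated images in the second row of its grid
images = np.concatenate([train_images, Y_train_pred]).reshape(-1, 28, 28)
labels = np.concatenate([train_labels, train_labels])
display_images(images, labels, count=2 * n_images)

# the test-set comparison works the same way
images = np.concatenate([test_images, Y_test_pred]).reshape(-1, 28, 28)
labels = np.concatenate([test_labels, test_labels])
display_images(images, labels, count=2 * n_images)
```

Note that `Y_train_pred` and `Y_test_pred` come back from the session as flat 784-element rows, which is why the reshape to `28x28` is needed before the images can be displayed.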
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看