# Variational Autoencoder in TensorFlow

The variational autoencoder is the modern generative version of the autoencoder. Let us build a variational autoencoder for the same problem as before. We will test the autoencoder by feeding it images from both the original and the noisy test sets. We will build this autoencoder using a different coding style, in order to demonstrate the different coding styles possible with TensorFlow:

1. First, define the hyperparameters:

```py
learning_rate = 0.001
n_epochs = 20
batch_size = 100
n_batches = int(mnist.train.num_examples/batch_size)
# number of pixels in the MNIST image as number of inputs
n_inputs = 784
n_outputs = n_inputs
```

1. Next, define a parameter dictionary to hold the weight and bias parameters:

```py
params={}
```

1. Define the number of hidden layers in each of the encoder and decoder networks:

```py
n_layers = 2
# neurons in each hidden layer
n_neurons = [512,256]
```

1. What is new in the variational encoder is that we define the dimensionality of the latent variable `z`:

```py
n_neurons_z = 128 # the dimensions of latent variables
```

1. We use the `tanh` activation:

```py
activation = tf.nn.tanh
```

1. Define the input and output placeholders:

```py
x = tf.placeholder(dtype=tf.float32, name="x",
                   shape=[None, n_inputs])
y = tf.placeholder(dtype=tf.float32, name="y",
                   shape=[None, n_outputs])
```

1. Define the input layer:

```py
# x is input layer
layer = x
```

1. Define the biases and weights for the encoder network and add its layers. The encoder network of a variational autoencoder is also known as the recognition network, the inference network, or the probabilistic encoder network:

```py
for i in range(0,n_layers):
    name="w_e_{0:04d}".format(i)
    params[name] = tf.get_variable(name=name,
        shape=[n_inputs if i==0 else n_neurons[i-1],
               n_neurons[i]],
        initializer=tf.glorot_uniform_initializer()
        )
    name="b_e_{0:04d}".format(i)
    params[name] = tf.Variable(tf.zeros([n_neurons[i]]),
        name=name
        )
    layer = activation(tf.matmul(layer,
        params["w_e_{0:04d}".format(i)]) +
        params["b_e_{0:04d}".format(i)])
```
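To see what the encoder layers compute independently of the TensorFlow graph machinery, here is a minimal NumPy sketch of the same forward pass (a hypothetical standalone version; the names `encode`, `weights`, and `biases` are illustrative and not part of the book's code):

```py
import numpy as np

n_inputs = 784
n_neurons = [512, 256]

rng = np.random.default_rng(0)
# Glorot-style uniform initialization, matching the TF initializer above
weights, biases = [], []
fan_in = n_inputs
for fan_out in n_neurons:
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    weights.append(rng.uniform(-limit, limit, size=(fan_in, fan_out)))
    biases.append(np.zeros(fan_out))
    fan_in = fan_out

def encode(x):
    # stacked dense layers with tanh activation, as in the loop above
    layer = x
    for w, b in zip(weights, biases):
        layer = np.tanh(layer @ w + b)
    return layer

batch = rng.random((100, n_inputs))  # a fake batch of 100 MNIST images
h = encode(batch)
print(h.shape)  # (100, 256)
```

Each 784-pixel image is squeezed through the 512- and 256-unit layers into a 256-dimensional code, from which the mean and variance layers of the next step are computed.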
1. Next, add the layers for the mean and variance of the latent variables:

```py
name="w_e_z_mean"
params[name] = tf.get_variable(name=name,
    shape=[n_neurons[n_layers-1], n_neurons_z],
    initializer=tf.glorot_uniform_initializer()
    )
name="b_e_z_mean"
params[name] = tf.Variable(tf.zeros([n_neurons_z]),
    name=name
    )
z_mean = tf.matmul(layer, params["w_e_z_mean"]) + params["b_e_z_mean"]

name="w_e_z_log_var"
params[name] = tf.get_variable(name=name,
    shape=[n_neurons[n_layers-1], n_neurons_z],
    initializer=tf.glorot_uniform_initializer()
    )
name="b_e_z_log_var"
params[name] = tf.Variable(tf.zeros([n_neurons_z]),
    name="b_e_z_log_var"
    )
z_log_var = tf.matmul(layer, params["w_e_z_log_var"]) + params["b_e_z_log_var"]
```

1. Next, define the `epsilon` variable representing a noise distribution with the same shape as the `z` log-variance variable:

```py
epsilon = tf.random_normal(tf.shape(z_log_var),
    mean=0,
    stddev=1.0,
    dtype=tf.float32,
    name='epsilon'
    )
```

1. Define the posterior distribution from the mean, the log variance, and the noise:

```py
z = z_mean + tf.exp(z_log_var * 0.5) * epsilon
```

1. Next, define the weights and biases for the decoder network and add the decoder layers. The decoder network in a variational autoencoder is also known as the probabilistic decoder or generator network:

```py
# add generator / probabilistic decoder network parameters and layers
layer = z
for i in range(n_layers-1,-1,-1):
    name="w_d_{0:04d}".format(i)
    params[name] = tf.get_variable(name=name,
        shape=[n_neurons_z if i==n_layers-1 else n_neurons[i+1],
               n_neurons[i]],
        initializer=tf.glorot_uniform_initializer()
        )
    name="b_d_{0:04d}".format(i)
    params[name] = tf.Variable(tf.zeros([n_neurons[i]]),
        name=name
        )
    layer = activation(tf.matmul(layer, params["w_d_{0:04d}".format(i)]) +
                       params["b_d_{0:04d}".format(i)])
```

1. Finally, define the output layer:

```py
name="w_d_z_mean"
params[name] = tf.get_variable(name=name,
    shape=[n_neurons[0],n_outputs],
    initializer=tf.glorot_uniform_initializer()
    )
name="b_d_z_mean"
params[name] = tf.Variable(tf.zeros([n_outputs]),
    name=name
    )
name="w_d_z_log_var"
params[name] = tf.Variable(tf.random_normal([n_neurons[0], n_outputs]),
    name=name
    )
name="b_d_z_log_var"
params[name] = tf.Variable(tf.zeros([n_outputs]),
    name=name
    )
layer = tf.nn.sigmoid(tf.matmul(layer, params["w_d_z_mean"]) +
                      params["b_d_z_mean"])
model = layer
```
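The sampling line `z = z_mean + tf.exp(z_log_var * 0.5) * epsilon` above is the reparameterization trick: the randomness lives entirely in `epsilon`, so `z` remains differentiable with respect to `z_mean` and `z_log_var` while still being a sample from the posterior. A small NumPy sketch (with hypothetical scalar values, not taken from the network) checks that the sample has the intended distribution:

```py
import numpy as np

rng = np.random.default_rng(42)

z_mean = 2.0             # target mean of the latent distribution
z_log_var = np.log(4.0)  # target variance 4.0, i.e. standard deviation 2.0

# epsilon ~ N(0, 1), same role as tf.random_normal above
epsilon = rng.standard_normal(100000)
z = z_mean + np.exp(0.5 * z_log_var) * epsilon

# the samples follow N(z_mean, exp(z_log_var))
print(round(z.mean(), 1), round(z.std(), 1))  # ≈ 2.0 2.0
```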
1. In a variational autoencoder we have both a reconstruction loss and a regularization loss. Define the loss function as the sum of the reconstruction loss and the regularization loss:

```py
rec_loss = -tf.reduce_sum(y * tf.log(1e-10 + model) +
                          (1-y) * tf.log(1e-10 + 1 - model), 1)
reg_loss = -0.5*tf.reduce_sum(1 + z_log_var - tf.square(z_mean) -
                              tf.exp(z_log_var), 1)
loss = tf.reduce_mean(rec_loss+reg_loss)
```

1. Define the optimizer function based on `AdamOptimizer`:

```py
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) \
                    .minimize(loss)
```

1. Now let us train the model and generate images from both the non-noisy and the noisy test images:

```py
with tf.Session() as tfs:
    tf.global_variables_initializer().run()
    for epoch in range(n_epochs):
        epoch_loss = 0.0
        for batch in range(n_batches):
            X_batch, _ = mnist.train.next_batch(batch_size)
            feed_dict={x: X_batch, y: X_batch}
            _,batch_loss = tfs.run([optimizer,loss],
                                   feed_dict=feed_dict)
            epoch_loss += batch_loss
        if (epoch%10==9) or (epoch==0):
            average_loss = epoch_loss / n_batches
            print("epoch: {0:04d}   loss = {1:0.6f}"
                  .format(epoch,average_loss))
    # predict images using the trained autoencoder model
    Y_test_pred1 = tfs.run(model, feed_dict={x: test_images})
    Y_test_pred2 = tfs.run(model, feed_dict={x: test_images_noisy})
```

We get the following output:

```py
epoch: 0000   loss = 180.444682
epoch: 0009   loss = 106.817749
epoch: 0019   loss = 102.580904
```

Now let us display the images:

```py
display_images(test_images.reshape(-1,pixel_size,pixel_size),test_labels)
display_images(Y_test_pred1.reshape(-1,pixel_size,pixel_size),test_labels)
```

The result is as follows:

![](https://img.kancloud.cn/62/32/6232a0a5f64868fc830e074128c63681_772x315.png)

```py
display_images(test_images_noisy.reshape(-1,pixel_size,pixel_size),
    test_labels)
display_images(Y_test_pred2.reshape(-1,pixel_size,pixel_size),test_labels)
```

The result is as follows:

![](https://img.kancloud.cn/14/1a/141a6afde33edc6b9d3b2780b4a64435_776x316.png)

As before, the results can be improved through hyperparameter tuning and by increasing the amount of learning.
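The regularization term `reg_loss` above is the closed-form KL divergence between the approximate posterior `N(z_mean, exp(z_log_var))` and the standard normal prior `N(0, 1)`. A small NumPy sketch (the function name `kl_to_standard_normal` and its test values are illustrative) shows that the penalty is zero exactly when the posterior matches the prior and grows as it drifts away:

```py
import numpy as np

def kl_to_standard_normal(z_mean, z_log_var):
    # same per-example formula as reg_loss above:
    # -0.5 * sum(1 + log_var - mean^2 - exp(log_var)) over latent dims
    return -0.5 * np.sum(1 + z_log_var - np.square(z_mean) -
                         np.exp(z_log_var), axis=1)

# posterior equal to the prior N(0, 1): zero penalty
print(kl_to_standard_normal(np.zeros((1, 3)), np.zeros((1, 3))))

# posterior shifted to mean 2.0 in every latent dimension: penalty grows
print(kl_to_standard_normal(np.full((1, 3), 2.0), np.zeros((1, 3))))  # [6.]
```

This is what pushes the encoder to keep the latent codes close to a standard normal, so that sampling `z ~ N(0, 1)` at generation time produces plausible images.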