# Variational Autoencoders in Keras

In Keras, building a variational autoencoder is much easier and requires fewer lines of code. Keras variational autoencoders are best built using the functional style. So far we have used the sequential style of building models in Keras; in this example we will see the functional style of building a VAE model in Keras. The steps to build a VAE in Keras are as follows:

1. Define the hyperparameters and the number of neurons in the hidden layers and the latent variables layer:

```py
import keras
from keras.layers import Lambda, Dense, Input, Layer
from keras.models import Model
from keras import backend as K

learning_rate = 0.001
batch_size = 100
n_batches = int(mnist.train.num_examples / batch_size)

# number of pixels in the MNIST image as number of inputs
n_inputs = 784
n_outputs = n_inputs
# number of hidden layers
n_layers = 2
# neurons in each hidden layer
n_neurons = [512, 256]
# the dimensions of latent variables
n_neurons_z = 128
```

1. Build the input layer:

```py
x = Input(shape=(n_inputs,), name='input')
```

1. Build the encoder layers, along with the mean and variance layers of the latent variables:

```py
# build encoder
layer = x
for i in range(n_layers):
    layer = Dense(units=n_neurons[i], activation='relu',
                  name='enc_{0}'.format(i))(layer)

z_mean = Dense(units=n_neurons_z, name='z_mean')(layer)
z_log_var = Dense(units=n_neurons_z, name='z_log_v')(layer)
```

1. Create the noise and posterior distributions:

```py
# noise distribution
epsilon = K.random_normal(shape=K.shape(z_log_var),
                          mean=0, stddev=1.0)

# posterior distribution: z = z_mean + exp(z_log_var / 2) * epsilon
# (the reparameterization trick, so gradients can flow through sampling)
z = Lambda(lambda zargs: zargs[0] + K.exp(zargs[1] * 0.5) * epsilon,
           name='z')([z_mean, z_log_var])
```

1. Add the decoder layers:

```py
# add generator / probabilistic decoder network layers
layer = z
for i in range(n_layers - 1, -1, -1):
    layer = Dense(units=n_neurons[i], activation='relu',
                  name='dec_{0}'.format(i))(layer)
```

1. Define the final output layer:

```py
y_hat = Dense(units=n_outputs, activation='sigmoid',
              name='output')(layer)
```

1. Finally, define the model from the input and output layers, and display the model summary:

```py
model = Model(x, y_hat)
model.summary()
```

We see the following summary:

```py
_________________________________________________________________________
Layer (type)           Output Shape     Param #    Connected to
=========================================================================
input (InputLayer)     (None, 784)      0
_________________________________________________________________________
enc_0 (Dense)          (None, 512)      401920     input[0][0]
_________________________________________________________________________
enc_1 (Dense)          (None, 256)      131328     enc_0[0][0]
_________________________________________________________________________
z_mean (Dense)         (None, 128)      32896      enc_1[0][0]
_________________________________________________________________________
z_log_v (Dense)        (None, 128)      32896      enc_1[0][0]
_________________________________________________________________________
z (Lambda)             (None, 128)      0          z_mean[0][0]
                                                   z_log_v[0][0]
_________________________________________________________________________
dec_1 (Dense)          (None, 256)      33024      z[0][0]
_________________________________________________________________________
dec_0 (Dense)          (None, 512)      131584     dec_1[0][0]
_________________________________________________________________________
output (Dense)         (None, 784)      402192     dec_0[0][0]
=========================================================================
Total params: 1,165,840
Trainable params: 1,165,840
Non-trainable params: 0
_________________________________________________________________________
```

1. Define a function that calculates the sum of the reconstruction and regularization losses (the closed-form math behind these two terms is shown after this walkthrough):

```py
def vae_loss(y, y_hat):
    # reconstruction loss: pixel-wise binary cross-entropy
    rec_loss = -K.sum(y * K.log(1e-10 + y_hat)
                      + (1 - y) * K.log(1e-10 + 1 - y_hat), axis=-1)
    # regularization loss: KL divergence from the standard normal prior
    reg_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean)
                            - K.exp(z_log_var), axis=-1)
    loss = K.mean(rec_loss + reg_loss)
    return loss
```

1. Compile the model with this loss function:

```py
model.compile(loss=vae_loss,
              optimizer=keras.optimizers.Adam(lr=learning_rate))
```
1. Train the model for 50 epochs and then predict the images, as we did in the previous sections:

```py
n_epochs = 50
model.fit(x=X_train_noisy, y=X_train, batch_size=batch_size,
          epochs=n_epochs, verbose=0)

Y_test_pred1 = model.predict(test_images)
Y_test_pred2 = model.predict(test_images_noisy)
```

Let's display the resulting images:

```py
display_images(test_images.reshape(-1, pixel_size, pixel_size), test_labels)
display_images(Y_test_pred1.reshape(-1, pixel_size, pixel_size), test_labels)
```

We get the following result:

![](https://img.kancloud.cn/6f/09/6f094aad7ea17d5c0584fc1aab4e005b_785x315.png)

```py
display_images(test_images_noisy.reshape(-1, pixel_size, pixel_size), test_labels)
display_images(Y_test_pred2.reshape(-1, pixel_size, pixel_size), test_labels)
```

We get the following result:

![](https://img.kancloud.cn/14/1a/141a6afde33edc6b9d3b2780b4a64435_776x316.png)

This is great! The generated images are cleaner and sharper.
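For reference, the two terms in `vae_loss` correspond to the standard VAE objective; the derivation below is a supplement, not part of the original text. The `Lambda` layer implements the reparameterization trick, and `reg_loss` is the closed-form KL divergence between the approximate posterior and the standard normal prior:

```latex
% Reparameterization trick computed by the Lambda layer,
% with \mu = z_mean and \log\sigma^2 = z_log_var:
z = \mu + e^{\frac{1}{2}\log\sigma^2} \odot \epsilon,
    \qquad \epsilon \sim \mathcal{N}(0, I)

% Closed-form KL divergence computed by reg_loss, summed over the
% d = n_neurons_z = 128 latent dimensions:
D_{\mathrm{KL}}\big(\mathcal{N}(\mu, \sigma^2 I) \,\|\, \mathcal{N}(0, I)\big)
    = -\frac{1}{2} \sum_{j=1}^{d}
      \left(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\right)
```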
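Because every decoder layer was given an explicit name, the trained decoder can also be rewired to generate brand-new digits by sampling latent vectors directly from the prior. The following is a minimal sketch, not from the original text; the names `z_in`, `generator`, and `z_samples` are introduced here for illustration, and it assumes the trained `model` plus the `n_layers`, `n_neurons_z`, and layer names defined above:

```py
import numpy as np

# rewire the trained decoder layers ('dec_1', 'dec_0', 'output')
# onto a fresh latent-space input
z_in = Input(shape=(n_neurons_z,), name='z_sample')
layer = z_in
for i in range(n_layers - 1, -1, -1):
    layer = model.get_layer('dec_{0}'.format(i))(layer)
generated = model.get_layer('output')(layer)
generator = Model(z_in, generated)

# sample 10 latent codes from the prior N(0, I) and decode them;
# new_images has shape (10, 784) and can be reshaped to 28x28 digits
z_samples = np.random.normal(size=(10, n_neurons_z))
new_images = generator.predict(z_samples)
```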