# Saving and restoring selected variables with the saver class

By default, the `Saver()` class saves all of the variables in the graph, but you can select which variables to save by passing a list of variables to the constructor of the `Saver()` class:

```py
# create saver object
saver = tf.train.Saver({'weights': w})
```

The variables can be passed either as a list or as a dictionary. If the variables are passed as a list, each variable in the list is saved under its own name. They can also be passed as a dictionary of key-value pairs, where the key is the name under which the variable is saved and the value is the variable to be saved (a short sketch contrasting the two forms is given at the end of this section).

Here is the code for the example we just saw, but this time we save only the `w` variable, naming it `weights` when it is saved:

```py
# Saving selected variables in a graph in TensorFlow
import tensorflow as tf

# Assume Linear Model y = w * x + b
# Define model parameters
w = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
# Define model input and output
x = tf.placeholder(tf.float32)
y = w * x + b
output = 0

# create saver object that saves only w, under the name 'weights'
saver = tf.train.Saver({'weights': w})

with tf.Session() as tfs:
    # initialize the variables and compute y
    tfs.run(tf.global_variables_initializer())
    output = tfs.run(y, {x: [1, 2, 3, 4]})
    saved_model_file = saver.save(tfs,
        'saved-models/weights-save-example.ckpt')
    print('Model saved in {}'.format(saved_model_file))
    print('Values of variables w,b: {}{}'
          .format(w.eval(), b.eval()))
    print('output={}'.format(output))
```

We get the following output:

```py
Model saved in saved-models/weights-save-example.ckpt
Values of variables w,b: [ 0.30000001][-0.30000001]
output=[ 0.          0.30000001  0.60000002  0.90000004]
```

The checkpoint file saves only the weights, not the bias. Now let's initialize both the weights and the bias to zero and restore only the weights. The code for this example is given here:

```py
# Restoring selected variables in a graph in TensorFlow
tf.reset_default_graph()
# Assume Linear Model y = w * x + b
# Define model parameters
w = tf.Variable([0], dtype=tf.float32)
b = tf.Variable([0], dtype=tf.float32)
# Define model input and output
x = tf.placeholder(dtype=tf.float32)
y = w * x + b
output = 0

# create saver object that restores only w, saved under the name 'weights'
saver = tf.train.Saver({'weights': w})

with tf.Session() as tfs:
    # only the bias needs explicit initialization
    b.initializer.run()
    saver.restore(tfs, 'saved-models/weights-save-example.ckpt')
    print('Values of variables w,b: {}{}'
          .format(w.eval(), b.eval()))
    output = tfs.run(y, {x: [1, 2, 3, 4]})
    print('output={}'.format(output))
```

As you can see, this time we have to initialize the bias with `b.initializer.run()`. We do not use `tfs.run(tf.global_variables_initializer())` because that would initialize all the variables, and there is no need to initialize the weights since they will be restored from the checkpoint file.

We get the following output, because the computation uses only the restored weights while the bias is set to zero:

```py
INFO:tensorflow:Restoring parameters from saved-models/weights-save-example.ckpt
Values of variables w,b: [ 0.30000001][ 0.]
output=[ 0.30000001  0.60000002  0.90000004  1.20000005]
```
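As noted above, the variables can be given to `Saver()` either as a plain list or as a dictionary. The following is a minimal sketch contrasting the two forms, reusing the `w` and `b` variables defined in the examples above; the dictionary keys `'weights'` and `'bias'` are illustrative names chosen here, not names required by TensorFlow:

```py
# Sketch: two ways of selecting variables for a Saver.
# Passing a list: each variable is saved under its own variable name.
saver_from_list = tf.train.Saver([w, b])

# Passing a dict: the key is the name stored in the checkpoint,
# the value is the variable to save under that name.
saver_from_dict = tf.train.Saver({'weights': w, 'bias': b})
```

The dictionary form is what makes the earlier restore example work: because `w` was saved under the key `'weights'`, a fresh variable in a new graph can be mapped back to that checkpoint entry regardless of its own variable name.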