# Define and train the graph for synchronous updates

As mentioned earlier, and depicted in the figure here, in synchronous updates the tasks send their updates to the parameter servers, and the ps tasks wait to receive all the updates, aggregate them, and then update the parameters. The worker tasks wait for the updated parameters before proceeding to the next iteration of computing parameter updates:

![](https://img.kancloud.cn/c8/01/c801f8b311761f09cf983e9e4e9029f1_548x258.png)

The complete code for this example is in `ch-15_mnist_dist_sync.py`. You are encouraged to modify and explore the code with your own datasets.

For synchronous updates, the following modifications to the code are needed (a consolidated sketch combining all five changes follows the list):

1. The optimizer needs to be wrapped in `SyncReplicasOptimizer`. Hence, after defining the optimizer, add the following code:

    ```py
    # SYNC: next line added for making it sync update
    optimizer = tf.train.SyncReplicasOptimizer(
        optimizer,
        replicas_to_aggregate=len(workers),
        total_num_replicas=len(workers))
    ```

2. This should be followed by adding the training operation as before:

    ```py
    train_op = optimizer.minimize(loss_op, global_step=global_step)
    ```

3. Next, add the definitions of the initialization functions specific to the synchronous update method:

    ```py
    # chief_init_op and local_step_init_op are attributes in TF 1.x,
    # not methods, so they are read without call parentheses
    if is_chief:
        local_init_op = optimizer.chief_init_op
    else:
        local_init_op = optimizer.local_step_init_op
    chief_queue_runner = optimizer.get_chief_queue_runner()
    init_token_op = optimizer.get_init_tokens_op()
    ```

4. The supervisor object is also created differently, with the two extra initialization functions:

    ```py
    # SYNC: sv is initialized differently for sync update
    sv = tf.train.Supervisor(is_chief=is_chief,
                             init_op=tf.global_variables_initializer(),
                             local_init_op=local_init_op,
                             ready_for_local_init_op=optimizer.ready_for_local_init_op,
                             global_step=global_step)
    ```

5. Finally, within the session block for training, we initialize the sync variables and start the queue runners if it is the chief worker task:

    ```py
    # SYNC: if block added to make it sync update
    if is_chief:
        mts.run(init_token_op)
        sv.start_queue_runners(mts, [chief_queue_runner])
    ```

The rest of the code remains the same as for asynchronous updates.

The TensorFlow libraries and functions for supporting distributed training are continuously evolving, so be on the lookout for new functions being added or changes in function signatures. At the time of writing this book, we used TensorFlow 1.4.
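To make it easier to see where the five changes sit relative to one another, here is a minimal end-to-end sketch of a single worker script, assuming the same cluster/`Supervisor` structure as the chapter's asynchronous example and the TensorFlow 1.4 API. The `ps`/`workers` addresses and the one-variable loss are placeholders for illustration only; the real MNIST model and flag handling are in `ch-15_mnist_dist_sync.py`.

```py
import tensorflow as tf

# Hypothetical cluster layout for illustration; the chapter's code builds
# these lists from command-line flags instead.
ps = ['localhost:9001']
workers = ['localhost:9002', 'localhost:9003']

def train_sync(job_name, task_index):
    cluster = tf.train.ClusterSpec({'ps': ps, 'worker': workers})
    server = tf.train.Server(cluster, job_name=job_name,
                             task_index=task_index)
    if job_name == 'ps':
        server.join()  # parameter servers only serve variables
        return

    is_chief = (task_index == 0)
    with tf.device(tf.train.replica_device_setter(
            worker_device='/job:worker/task:{}'.format(task_index),
            cluster=cluster)):
        global_step = tf.train.get_or_create_global_step()
        # Toy one-variable loss standing in for the MNIST model
        w = tf.Variable(0.0, name='w')
        loss_op = tf.square(w - 1.0)
        optimizer = tf.train.GradientDescentOptimizer(0.1)
        # SYNC step 1: wrap the optimizer
        optimizer = tf.train.SyncReplicasOptimizer(
            optimizer,
            replicas_to_aggregate=len(workers),
            total_num_replicas=len(workers))
        # SYNC step 2: training op as before
        train_op = optimizer.minimize(loss_op, global_step=global_step)
        # SYNC step 3: sync-specific initialization ops (attributes are
        # only available after minimize/apply_gradients has been called)
        local_init_op = (optimizer.chief_init_op if is_chief
                         else optimizer.local_step_init_op)
        chief_queue_runner = optimizer.get_chief_queue_runner()
        init_token_op = optimizer.get_init_tokens_op()

    # SYNC step 4: supervisor with the two extra init ops
    sv = tf.train.Supervisor(
        is_chief=is_chief,
        init_op=tf.global_variables_initializer(),
        local_init_op=local_init_op,
        ready_for_local_init_op=optimizer.ready_for_local_init_op,
        global_step=global_step)

    with sv.prepare_or_wait_for_session(server.target) as mts:
        # SYNC step 5: the chief seeds the token queue and starts the
        # chief queue runner before training begins
        if is_chief:
            mts.run(init_token_op)
            sv.start_queue_runners(mts, [chief_queue_runner])
        for _ in range(50):  # training loop, as in the async version
            mts.run(train_op)
```

Note that the sync-specific ops in step 3 must be created after `minimize()` (or `apply_gradients()`), because `SyncReplicasOptimizer` constructs them as part of building the gradient-aggregation graph.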