# Placing graph nodes on specific compute devices

Let's enable logging of variable placement by defining a config object, setting the `log_device_placement` property to `true`, and then passing this `config` object to the session, as follows:

```py
tf.reset_default_graph()

# Define model parameters
w = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
# Define model input and output
x = tf.placeholder(tf.float32)
y = w * x + b

config = tf.ConfigProto()
config.log_device_placement=True

with tf.Session(config=config) as tfs:
    # initialize and print the variable y
    tfs.run(tf.global_variables_initializer())
    print('output',tfs.run(y,{x:[1,2,3,4]}))
```

We get the following output in the Jupyter Notebook console:

```py
b: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:0
b/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:0
b/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:0
w: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:0
w/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:0
mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
w/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:0
init: (NoOp): /job:localhost/replica:0/task:0/device:GPU:0
x: (Placeholder): /job:localhost/replica:0/task:0/device:GPU:0
b/initial_value: (Const): /job:localhost/replica:0/task:0/device:GPU:0
Const_1: (Const): /job:localhost/replica:0/task:0/device:GPU:0
w/initial_value: (Const): /job:localhost/replica:0/task:0/device:GPU:0
Const: (Const): /job:localhost/replica:0/task:0/device:GPU:0
```

Hence, by default, TensorFlow creates the variable and operation nodes on the device where it expects to get the highest performance. The variables and operations can be placed on specific devices with the `tf.device()` function. Let's place the graph on the CPU:

```py
tf.reset_default_graph()

with tf.device('/device:CPU:0'):
    # Define model parameters
    w = tf.get_variable(name='w',initializer=[.3], dtype=tf.float32)
    b = tf.get_variable(name='b',initializer=[-.3], dtype=tf.float32)
    # Define model input and output
    x = tf.placeholder(name='x',dtype=tf.float32)
    y = w * x + b

config = tf.ConfigProto()
config.log_device_placement=True

with tf.Session(config=config) as tfs:
    # initialize and print the variable y
    tfs.run(tf.global_variables_initializer())
    print('output',tfs.run(y,{x:[1,2,3,4]}))
```

In the Jupyter console, we see that the variables have now been placed on the CPU and that execution also takes place on the CPU:

```py
b: (VariableV2): /job:localhost/replica:0/task:0/device:CPU:0
b/read: (Identity): /job:localhost/replica:0/task:0/device:CPU:0
b/Assign: (Assign): /job:localhost/replica:0/task:0/device:CPU:0
w: (VariableV2): /job:localhost/replica:0/task:0/device:CPU:0
w/read: (Identity): /job:localhost/replica:0/task:0/device:CPU:0
mul: (Mul): /job:localhost/replica:0/task:0/device:CPU:0
add: (Add): /job:localhost/replica:0/task:0/device:CPU:0
w/Assign: (Assign): /job:localhost/replica:0/task:0/device:CPU:0
init: (NoOp): /job:localhost/replica:0/task:0/device:CPU:0
x: (Placeholder): /job:localhost/replica:0/task:0/device:CPU:0
b/initial_value: (Const): /job:localhost/replica:0/task:0/device:CPU:0
Const_1: (Const): /job:localhost/replica:0/task:0/device:CPU:0
w/initial_value: (Const): /job:localhost/replica:0/task:0/device:CPU:0
Const: (Const): /job:localhost/replica:0/task:0/device:CPU:0
```
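Note that device placement changes where the nodes are created and executed, not what they compute: whether pinned to the CPU or the GPU, the session prints the same values of `y = w * x + b`. As a minimal sanity check of that arithmetic (plain Python, no TensorFlow required), we can evaluate the same linear model directly with the book's parameter values:

```python
# The graph above computes the linear model y = w*x + b,
# with w = 0.3 and b = -0.3 as in the book's example.
w, b = 0.3, -0.3
xs = [1, 2, 3, 4]
ys = [w * x + b for x in xs]

# Matches the session output (up to float rounding): roughly [0.0, 0.3, 0.6, 0.9]
print('output', ys)
```

The feed `{x:[1,2,3,4]}` in the session plays the role of `xs` here; device placement only decides which processor carries out this multiply-add.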