# Q-Learning with a Q-Network, or Deep Q-Network (DQN)

In DQN, we replace the Q-Table with a neural network (a Q-Network) that learns to respond with the optimal action as we train it continuously on the explored states and their Q-values. To train the network, we first need a place to store the game memory:

1. Implement the game memory as a deque of size 1000:

```py
memory = deque(maxlen=1000)
```

2. Next, build a simple neural network model with one hidden layer, `q_nn`:

```py
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(2, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.summary()
q_nn = model
```

The Q-Network looks like this:

```py
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 8)                 40        
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 18        
=================================================================
Total params: 58
Trainable params: 58
Non-trainable params: 0
_________________________________________________________________
```

The `episode()` function that plays one episode of the game incorporates the following changes for the Q-Network-based algorithm:

1. After generating the next state, add the previous state, the action, and the reward to the game memory:

```py
action = policy(state_prev, env)
obs, reward, done, info = env.step(action)
state_next = discretize_state(obs, s_bounds, n_s)

# add the state_prev, action, reward, state_next, done to memory
memory.append([state_prev, action, reward, state_next, done])
```

2. Generate and update the `q_values` with the maximum future reward, using the Bellman equation:

```py
states = np.array([x[0] for x in memory])
states_next = np.array([np.zeros(4) if x[4] else x[3] for x in memory])
q_values = q_nn.predict(states)
q_values_next = q_nn.predict(states_next)

for i in range(len(memory)):
    state_prev, action, reward, state_next, done = memory[i]
    if done:
        q_values[i, action] = reward
    else:
        best_q = np.amax(q_values_next[i])
        bellman_q = reward + discount_rate * best_q
        q_values[i, action] = bellman_q
```

3. Train `q_nn` on the states and the `q_values` we computed from memory:

```py
q_nn.fit(states, q_values, epochs=1, batch_size=50, verbose=0)
```

The process of saving the gameplay in memory and using it to train the model is also known as **memory replay** in the deep reinforcement learning literature. Let us run the DQN-based game as follows:

```py
learning_rate = 0.8
discount_rate = 0.9
explore_rate = 0.2
n_episodes = 100

experiment(env, policy_q_nn, n_episodes)
```

We get a maximum reward of 150, which you can improve upon with hyperparameter tuning, network tuning, and by applying rate decay to the discount rate and explore rate:

```py
Policy:policy_q_nn, Min reward:8.0, Max reward:150.0, Average reward:41.27
```

Here we compute the targets and train the model at every step; you may want to explore training only after each episode instead. You can also change the code to discard the memory replay and retrain the model only for episodes that return smaller rewards. However, implement this option with caution, as it may slow down learning: early gameplay produces small rewards more often.
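The steps above call `policy(state_prev, env)` and later pass `policy_q_nn` to `experiment()`, but the policy function itself is not reproduced in this excerpt. A minimal epsilon-greedy sketch that fits this interface is shown below; it assumes the global `explore_rate` and the `q_nn` model defined above are in scope, and that a state is the same 4-element vector stored in the memory:

```py
import numpy as np

def policy_q_nn(state, env):
    """Epsilon-greedy policy over the Q-Network's predicted action values (sketch)."""
    # Exploration: with probability explore_rate, pick a random action
    if np.random.random() < explore_rate:
        return env.action_space.sample()
    # Exploitation: otherwise pick the action with the highest predicted Q-value
    q_values = q_nn.predict(np.array([state]))[0]
    return np.argmax(q_values)
```

Because this sketch reads the global `explore_rate` and `q_nn`, it slots directly into the `experiment(env, policy_q_nn, n_episodes)` call shown above; decaying the explore rate over episodes, as suggested in the text, only requires updating that variable between episodes.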