# Q-Learning with a Q-Table

You can follow along with the code for this section in `ch-13b.ipynb`.

Since our discretized space has dimensions [10,10,10,10], our Q-Table has dimensions [10,10,10,10,2]:

```py
# create a Q-Table of shape (10,10,10,10, 2) representing S X A -> R
q_table = np.zeros(shape = np.append(n_s, n_a))
```

We define a Q-Table policy that either explores or exploits, depending on `explore_rate`:

```py
def policy_q_table(state, env):
    # Exploration strategy - select a random action
    if np.random.random() < explore_rate:
        action = env.action_space.sample()
    # Exploitation strategy - select the action with the highest q
    else:
        action = np.argmax(q_table[tuple(state)])
    return action
```

Define the `episode()` function that runs a single episode, as follows (the individual steps are assembled into a complete function at the end of this section):

1.  First, initialize the variables and the first state:

    ```py
    obs = env.reset()
    state_prev = discretize_state(obs, s_bounds, n_s)
    episode_reward = 0
    done = False
    t = 0
    ```

2.  Select an action and observe the next state:

    ```py
    action = policy(state_prev, env)
    obs, reward, done, info = env.step(action)
    state_new = discretize_state(obs, s_bounds, n_s)
    ```

3.  Update the Q-Table:

    ```py
    best_q = np.amax(q_table[tuple(state_new)])
    bellman_q = reward + discount_rate * best_q
    indices = tuple(np.append(state_prev, action))
    q_table[indices] += learning_rate * (bellman_q - q_table[indices])
    ```

4.  Set the next state as the previous state and add the reward to the episode's reward:

    ```py
    state_prev = state_new
    episode_reward += reward
    ```

The `experiment()` function calls the `episode()` function and accumulates the rewards for reporting. You may want to modify it to check for consecutive wins, or for other logic specific to your game or games:

```py
# collect observations and rewards for each episode
def experiment(env, policy, n_episodes, r_max=0, t_max=0):
    rewards = np.empty(shape=[n_episodes])
    for i in range(n_episodes):
        val = episode(env, policy, r_max, t_max)
        rewards[i] = val
    print('Policy:{}, Min reward:{}, Max reward:{}, Average reward:{}'
          .format(policy.__name__,
                  np.min(rewards),
                  np.max(rewards),
                  np.mean(rewards)))
```

Now, all we have to do is define the parameters, such as `learning_rate`, `discount_rate`, and `explore_rate`, and run the `experiment()` function, as follows:

```py
learning_rate = 0.8
discount_rate = 0.9
explore_rate = 0.2
n_episodes = 1000

experiment(env, policy_q_table, n_episodes)
```

For 1,000 episodes, our simple implementation of the Q-Table-based policy reaches a maximum reward of 180:

```py
Policy:policy_q_table, Min reward:8.0, Max reward:180.0, Average reward:17.592
```

Our implementation of the algorithm is easy to explain. However, you can modify the code to set the exploration rate high initially and then decay it as the time steps pass; one possible decay schedule is sketched below. Similarly, you can also implement decay logic for the learning and discount rates. Let's see whether we can get a higher reward with fewer episodes as our Q function learns faster.
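For reference, the four numbered fragments above can be put together into the `episode()` function that `experiment()` calls. The sketch below is one possible assembly, assuming the `discretize_state()` helper, `s_bounds`, `n_s`, `q_table`, and the rate parameters defined earlier in the chapter; the exact version in `ch-13b.ipynb` may differ.

```py
import numpy as np

def episode(env, policy, r_max=0, t_max=0):
    # step 1: initialize the variables and the first state
    obs = env.reset()
    state_prev = discretize_state(obs, s_bounds, n_s)
    episode_reward = 0
    done = False
    t = 0
    while not done:
        # step 2: select an action and observe the next state
        action = policy(state_prev, env)
        obs, reward, done, info = env.step(action)
        state_new = discretize_state(obs, s_bounds, n_s)

        # step 3: update the Q-Table with the Bellman target
        best_q = np.amax(q_table[tuple(state_new)])
        bellman_q = reward + discount_rate * best_q
        indices = tuple(np.append(state_prev, action))
        q_table[indices] += learning_rate * (bellman_q - q_table[indices])

        # step 4: carry the state forward and accumulate the reward
        state_prev = state_new
        episode_reward += reward
        t += 1  # r_max and t_max mirror experiment()'s call; this sketch does not use them
    return episode_reward
```

Returning `episode_reward` lets `experiment()` store it in its `rewards` array via the call `val = episode(env, policy, r_max, t_max)`.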
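As for the decay suggested above, one illustrative option is to start with a high exploration rate and shrink it exponentially after each episode, never dropping below a small floor. The helper `decayed_explore_rate()`, the constants, and the floor `min_explore_rate` in this sketch are hypothetical choices, not code from `ch-13b.ipynb`.

```py
import math

initial_explore_rate = 1.0   # explore almost every step at first
min_explore_rate = 0.01      # hypothetical floor so some exploration always remains
decay_constant = 50.0        # hypothetical: larger values decay more slowly

def decayed_explore_rate(episode_i):
    # exponential decay toward the floor as training progresses
    return max(min_explore_rate,
               initial_explore_rate * math.exp(-episode_i / decay_constant))

# inside the training loop, refresh the global rate before each episode, e.g.:
# explore_rate = decayed_explore_rate(i)
```

The same pattern can be reused for `learning_rate` if you also want it to decay over episodes.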