# Long Short-Term Memory (LSTM)

## Definition

~~~
{
    # Layer name
    "name": "lstm_1",
    # Layer type. In an LSTM network the image height is used as the number of sequence steps
    # and the image width as the sequence length; only the N->1 mode is supported for now
    "type": "lstm",
    # Number of neurons
    "neurons_number": 60,
    # Weight initialization method: msra/gaussian/xavier
    "weight_init": "xavier"
},
~~~

## Parameters

* name: layer name; unrestricted, any value will do
* type: layer type; must be `lstm`, case-sensitive
* neurons_number: number of neurons
* weight_init: weight initialization method; must be one of `msra/xavier/gaussian` (a rough sketch of these schemes appears after the complete example below)
  * msra: He (Kaiming) initialization, mainly for the relu activation function
  * xavier: Xavier initialization, mainly for the tanh activation function
  * gaussian: Gaussian initialization, a Gaussian distribution with mean 0 and variance 0.01

> The LSTM layer only supports the N->1 mode for now (see the shape walkthrough after the complete example below).
> Learn more: https://www.cnblogs.com/pinard/p/6519110.html

## Complete example

~~~
# LSTM recognizing the mnist dataset
# pip install AADeepLearning
from AADeepLearning import AADeepLearning
from AADeepLearning.datasets import mnist
from AADeepLearning.datasets import np_utils

# The mnist dataset is already split into 60,000 training and 10,000 test samples;
# it is downloaded automatically if not present.
# For x_train and x_test the first dimension is the number of samples,
# the second is the height and the third is the width
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Convert x_train and x_test to float32
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# Normalize: map values into the 0-1 range
x_train /= 255
x_test /= 255

# Since this is a 10-class problem, map the class vectors (integers from 0 to 9)
# to binary class matrices, i.e. one-hot encode them
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)

# Network configuration
config = {
    # Initial learning rate
    "learning_rate": 0.001,
    # Optimization strategy: sgd/momentum/rmsprop/adam
    "optimizer": "adam",
    # Number of training iterations
    "number_iteration": 2000,
    # Number of samples per training batch
    "batch_size": 64,
    # Print information every this many iterations
    "display": 100,
}

# Network structure; data flows from top to bottom
net = [
    {
        # Layer name
        "name": "lstm_1",
        # Layer type. In an LSTM network the image height is the number of sequence steps
        # and the image width is the sequence length; only the N->1 mode is supported for now
        "type": "lstm",
        # Number of neurons
        "neurons_number": 60,
        # Weight initialization method: msra/gaussian/xavier
        "weight_init": "xavier"
    },
    {
        # Layer name
        "name": "relu_1",
        # Layer type
        "type": "relu"
    },
    {
        # Layer name
        "name": "fully_connected_2",
        # Layer type: fully connected layer
        "type": "fully_connected",
        # Number of neurons; 10 because this is a 10-class problem
        "neurons_number": 10,
        # Weight initialization method: msra/xavier/gaussian
        "weight_init": "msra"
    },
    {
        # Layer name
        "name": "softmax_1",
        # Layer type: classification layer, outputs the final 10-class probability distribution
        "type": "softmax"
    }
]

# Define the model, passing in the network structure and configuration
AA = AADeepLearning(net=net, config=config)
# Train the model
AA.train(x_train=x_train, y_train=y_train)

# Predict on the test set; returns the probability distribution and the accuracy.
# score: each sample's probability over the classes, accuracy: classification accuracy
score, accuracy = AA.predict(x_test=x_test, y_test=y_test)
print("test set accuracy:", accuracy)
~~~
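The three `weight_init` options listed in the parameter section are standard initialization schemes. Below is a minimal NumPy sketch of what they typically compute for a 2-D weight matrix; the exact formulas inside AADeepLearning may differ, and `init_weights` is a hypothetical helper, not part of the library.

~~~
# Minimal sketch of msra/xavier/gaussian initialization (not AADeepLearning's code).
import numpy as np

def init_weights(shape, method="xavier"):
    fan_in, fan_out = shape
    if method == "xavier":
        # Xavier/Glorot: variance scaled by fan_in + fan_out, suited to tanh
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return np.random.uniform(-limit, limit, size=shape)
    if method == "msra":
        # He (MSRA): variance scaled by fan_in, suited to relu
        return np.random.randn(*shape) * np.sqrt(2.0 / fan_in)
    if method == "gaussian":
        # Gaussian with mean 0 and variance 0.01 (standard deviation 0.1), as described above
        return np.random.randn(*shape) * 0.1
    raise ValueError("weight_init must be one of msra/xavier/gaussian")

w = init_weights((28, 60), method="xavier")
print(w.shape, round(float(w.std()), 4))
~~~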
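To make the N->1 note concrete, here is a hedged shape walkthrough of how a 28x28 MNIST image can be read as 28 steps of 28 values each, with only the last hidden state passed on to the next layer. It uses a plain tanh recurrence in place of real LSTM gates, and none of the variable names are taken from AADeepLearning internals.

~~~
# Shape-only sketch of the N->1 mode (simplified recurrence, not a real LSTM).
import numpy as np

batch_size, height, width = 64, 28, 28   # height -> number of steps, width -> length of each step
hidden = 60                              # "neurons_number" from the lstm layer above

x = np.random.rand(batch_size, height, width)    # a batch of MNIST-shaped inputs
W = np.random.randn(width, hidden) * 0.01        # toy input-to-hidden weights
U = np.random.randn(hidden, hidden) * 0.01       # toy hidden-to-hidden weights
h = np.zeros((batch_size, hidden))               # hidden state carried across the steps

for t in range(height):                          # N = 28 input steps...
    h = np.tanh(x[:, t, :] @ W + h @ U)          # simplified recurrence (LSTM gates omitted)

print(h.shape)  # (64, 60): only this final state feeds the next layer, the "1" in N->1
~~~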