# 3.10 Concise Implementation of Multilayer Perceptrons

Below we use PyTorch to implement the multilayer perceptron from the previous section. First, import the required packages and modules.

``` python
import torch
from torch import nn
from torch.nn import init
import numpy as np
import sys
sys.path.append("..")
import d2lzh_pytorch as d2l
```

## 3.10.1 Defining the Model

The only difference from softmax regression is that we add one more fully connected layer as a hidden layer. It has 256 hidden units and uses the ReLU function as its activation function.

``` python
num_inputs, num_outputs, num_hiddens = 784, 10, 256

net = nn.Sequential(
    d2l.FlattenLayer(),
    nn.Linear(num_inputs, num_hiddens),
    nn.ReLU(),
    nn.Linear(num_hiddens, num_outputs),
)

for params in net.parameters():
    init.normal_(params, mean=0, std=0.01)
```

## 3.10.2 Reading the Data and Training the Model

We read the data and train the model using almost the same steps as for training softmax regression in Section 3.7.

> Note: because we use PyTorch's SGD here rather than the sgd function from d2lzh_pytorch, the learning rate no longer looks unusually large the way it did in Section 3.9.

``` python
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
loss = torch.nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(net.parameters(), lr=0.5)

num_epochs = 5
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)
```

Output:

```
epoch 1, loss 0.0030, train acc 0.712, test acc 0.744
epoch 2, loss 0.0019, train acc 0.823, test acc 0.821
epoch 3, loss 0.0017, train acc 0.844, test acc 0.842
epoch 4, loss 0.0015, train acc 0.856, test acc 0.842
epoch 5, loss 0.0014, train acc 0.864, test acc 0.818
```

## Summary

* PyTorch lets us implement a multilayer perceptron much more concisely.

-----------
> Note: apart from the code, this section is essentially the same as the original book. [Link to the original section](https://zh.d2l.ai/chapter_deep-learning-basics/mlp-gluon.html)
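
The model above relies on d2l.FlattenLayer to reshape each 1×28×28 Fashion-MNIST image into a length-784 vector before the first nn.Linear layer. For reference, a minimal sketch of such a layer might look like the following; the real definition lives in the d2lzh_pytorch utility package, so treat this as an assumption about its behavior rather than its exact code.

``` python
from torch import nn

class FlattenLayer(nn.Module):
    """Sketch of a flatten layer, assuming the d2lzh_pytorch version
    simply collapses all non-batch dimensions into one."""
    def forward(self, x):
        # (batch, 1, 28, 28) -> (batch, 784): keep the batch axis, flatten the rest
        return x.view(x.shape[0], -1)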
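```

The learning-rate note is easier to see side by side: the manual sgd used in earlier sections divides each gradient by the batch size, while torch.optim.SGD applies the gradient as-is (and CrossEntropyLoss already averages the loss over the batch), so the manual version needed a learning rate that looked roughly batch_size times larger. A sketch of the manual update, assuming the d2lzh_pytorch definition from the earlier chapters:

``` python
def sgd(params, lr, batch_size):
    """Manual mini-batch SGD (sketch). The extra division by batch_size
    is why its learning rate had to look so large in Section 3.9."""
    for param in params:
        param.data -= lr * param.grad / batch_size
```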
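
After training, a quick way to sanity-check the model is to push one batch from the test iterator through the net. This snippet is not part of the original section; it assumes test_iter behaves like a standard PyTorch DataLoader, as returned by d2l.load_data_fashion_mnist.

``` python
# Hypothetical sanity check on one test batch.
X, y = next(iter(test_iter))              # one batch of images and labels
with torch.no_grad():                     # no gradients needed for evaluation
    y_hat = net(X).argmax(dim=1)          # predicted class index per example
print('batch accuracy:', (y_hat == y).float().mean().item())
```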