# PyTorch Chinese Documentation

If you run into any problems while using pytorch or pytorch-cn, feel free to raise them in an issue for discussion; your problem may well be someone else's problem too.

## Translation Progress

Review is now underway (come claim a section!). The first name listed is the translator; the second is the reviewer.

### Notes

- [x] Autograd mechanics (*ycszen*)(DL-ljw)
- [x] CUDA semantics (*ycszen*)
- [x] Extending PyTorch (*KeithYin*)
- [x] Multiprocessing best practices (*ycszen*)
- [x] Serialization semantics (*ycszen*)

### Package Reference

- [x] torch (*koshinryuu*)(飛彥)
- [x] torch.Tensor (*weigp*)(飛彥)
- [x] torch.Storage (*kophy*)
- [ ] **torch.nn**
  - [x] Parameters (*KeithYin*)
  - [x] Containers (*KeithYin*)
  - [x] Convolution Layers (*yichuan9527*)
  - [x] Pooling Layers (*yichuan9527*)
  - [x] Non-linear Activations (*swordspoet*)
  - [x] Normalization layers (*XavierLin*)
  - [x] Recurrent layers (*KeithYin*)(Mosout)
  - [x] Linear layers ( )(Mosout)
  - [x] Dropout layers ( )(Mosout)
  - [x] Sparse layers (Mosout)
  - [x] Distance functions
  - [x] Loss functions (*KeithYin*)(DL-ljw)
  - [x] Vision layers (*KeithYin*)
  - [x] Multi-GPU layers (*KeithYin*)
  - [x] Utilities (*KeithYin*)
- [x] torch.nn.functional
  - [x] Convolution functions (*ycszen*)(鐵血丹心)
  - [x] Pooling functions (*ycszen*)(鐵血丹心)
  - [x] Non-linear activation functions (*ycszen*)
  - [x] Normalization functions (*ycszen*)
  - [x] Linear functions (*dyl745001196*)
  - [x] Dropout functions (*dyl745001196*)
  - [x] Distance functions (*dyl745001196*)
  - [x] Loss functions (*tfygg*)(DL-ljw)
  - [x] Vision functions (*KeithYin*)
- [x] torch.nn.init (*kophy*)(luc)
- [x] torch.optim (*ZijunDeng*)(祁杰)
- [x] torch.autograd (*KeithYin*)(祁杰)
- [x] torch.multiprocessing (*songbo.han*)
- [x] torch.legacy (*ycszen*)
- [x] torch.cuda (*ycszen*)
- [x] torch.utils.ffi (*ycszen*)
- [x] torch.utils.data (*ycszen*)
- [x] torch.utils.model_zoo (*ycszen*)

### torchvision Reference

- [x] torchvision (*KeithYin*)
- [x] torchvision.datasets (*KeithYin*)(loop)
- [x] torchvision.models (*KeithYin*)
- [x] torchvision.transforms (*KeithYin*)(loop)
- [x] torchvision.utils (*KeithYin*)