## Background

In the deep-learning world, the Vision Transformer (ViT) model has attracted wide attention for its excellent performance on image-classification tasks. ViT models, however, are usually written in Python, most often on top of the PyTorch framework. For a PHP developer, implementing a ViT model in PHP may seem impractical, but with the `phpy` extension we can call into the Python ecosystem directly from PHP and make it work.

## Characteristics of the ViT model

ViT is aimed mainly at image classification, so its structure differs from a conventional Transformer in a few ways:

* The input image is divided into patches; Patch Embedding flattens each two-dimensional patch (ignoring channels) into a one-dimensional vector, and a class token plus position embeddings are added to form the model input.
* The main Block is based on the Transformer Encoder, with the position of Normalization adjusted; the core component is still Multi-head Attention.
* After the stacked Blocks, a fully connected layer takes the class-token output and performs the classification. This final fully connected layer is usually called the Head, and the Transformer Encoder part the backbone.

ViT exploits the Transformer's strength at modeling contextual semantics by converting an image into a kind of "variant word vector" sequence. The conversion is meaningful because the patches have spatial relationships to one another, a kind of "spatial semantics", which is why the approach works well.

## What is phpy?

`phpy` is a PHP extension that allows PHP to call Python modules. This means we can use Python's powerful features and libraries from PHP without switching languages, which matters a great deal to developers who need deep-learning models inside PHP projects.

## Implementation

Install the dependency:

```
pip install torch
```

Installation log:

```
Collecting torch
  Downloading torch-2.4.0-cp39-cp39-manylinux1_x86_64.whl (797.2 MB)
     |████████████████████████████████| 797.2 MB 55 kB/s
Collecting nvidia-cufft-cu12==11.0.2.54
  Downloading nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
     |████████████████████████████████| 121.6 MB 4.2 MB/s
Collecting filelock
  Downloading filelock-3.15.4-py3-none-any.whl (16 kB)
Collecting jinja2
  Downloading jinja2-3.1.4-py3-none-any.whl (133 kB)
     |████████████████████████████████| 133 kB 6.7 MB/s
Collecting nvidia-cuda-cupti-cu12==12.1.105
  Downloading nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
     |████████████████████████████████| 14.1 MB 922 kB/s
Collecting nvidia-nvtx-cu12==12.1.105
  Downloading nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
     |████████████████████████████████| 99 kB 1.1 MB/s
Collecting nvidia-cudnn-cu12==9.1.0.70
  Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl (664.8 MB)
     |████████████████████████████████| 664.8 MB 6.0 kB/s
Collecting typing-extensions>=4.8.0
  Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting fsspec
  Downloading fsspec-2024.6.1-py3-none-any.whl (177 kB)
     |████████████████████████████████| 177 kB 4.6 MB/s
Collecting nvidia-cusolver-cu12==11.4.5.107
  Downloading nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
     |████████████████████████████████| 124.2 MB 1.5 MB/s
Collecting nvidia-cusparse-cu12==12.1.0.106
  Downloading nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
     |████████████████████████████████| 196.0 MB 3.7 MB/s
Collecting nvidia-cublas-cu12==12.1.3.1
  Downloading nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
     |████████████████████████████████| 410.6 MB 3.0 kB/s
Collecting networkx
  Downloading networkx-3.2.1-py3-none-any.whl (1.6 MB)
     |████████████████████████████████| 1.6 MB 841 kB/s
Collecting triton==3.0.0
  Downloading triton-3.0.0-1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (209.4 MB)
     |████████████████████████████████| 209.4 MB 157 kB/s
Collecting sympy
  Downloading sympy-1.13.2-py3-none-any.whl (6.2 MB)
     |████████████████████████████████| 6.2 MB 1.1 MB/s
Collecting nvidia-curand-cu12==10.3.2.106
  Downloading nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
     |████████████████████████████████| 56.5 MB 279 kB/s
Collecting nvidia-nccl-cu12==2.20.5
  Downloading nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB)
     |████████████████████████████████| 176.2 MB 45 kB/s
Collecting nvidia-cuda-nvrtc-cu12==12.1.105
  Downloading nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
     |████████████████████████████████| 23.7 MB 1.8 MB/s
Collecting nvidia-cuda-runtime-cu12==12.1.105
  Downloading nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
     |████████████████████████████████| 823 kB 2.8 MB/s
Collecting nvidia-nvjitlink-cu12
  Downloading nvidia_nvjitlink_cu12-12.6.68-py3-none-manylinux2014_x86_64.whl (19.7 MB)
     |████████████████████████████████| 19.7 MB 711 kB/s
Collecting MarkupSafe>=2.0
  Downloading MarkupSafe-2.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Collecting mpmath<1.4,>=1.1.0
  Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
     |████████████████████████████████| 536 kB 2.4 MB/s
Installing collected packages: nvidia-nvjitlink-cu12, nvidia-cusparse-cu12, nvidia-cublas-cu12, mpmath, MarkupSafe, filelock, typing-extensions, triton, sympy, nvidia-nvtx-cu12, nvidia-nccl-cu12, nvidia-cusolver-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, networkx, jinja2, fsspec, torch
Successfully installed MarkupSafe-2.1.5 filelock-3.15.4 fsspec-2024.6.1 jinja2-3.1.4 mpmath-1.3.0 networkx-3.2.1 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-9.1.0.70 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.6.68 nvidia-nvtx-cu12-12.1.105 sympy-1.13.2 torch-2.4.0 triton-3.0.0 typing-extensions-4.12.2
```

### PHP implementation

Let us first look at how to implement the ViT model in PHP. Below is a complete PHP implementation that uses the `phpy` extension to call PyTorch:

```php
<?php
declare(strict_types=1);

/**
 * The ViT class defines the structure of a Vision Transformer (ViT).
 */
class ViT
{
    // Model attributes: embedding size, patch size, patch count, and the layers and tokens
    private int $emb_size;
    private int $patch_size;
    private int $patch_count;
    private $conv;
    private $patch_emb;
    private $cls_token;
    private $pos_emb;
    private $tranformer_enc;
    private $cls_linear;
    private $torch; // the imported torch module
    private $nn;    // the imported torch.nn module

    /**
     * The constructor initializes the model parameters and layers.
     * @param int $emb_size embedding size, 16 by default
     */
    public function __construct($emb_size = 16)
    {
        $this->torch = PyCore::import('torch');
        $this->nn = PyCore::import('torch.nn');
        $this->emb_size = $emb_size;
        $this->patch_size = 4;
        $this->patch_count = intdiv(28, $this->patch_size); // number of patches per side
        $this->conv = $this->nn->Conv2d(
            in_channels: 1,
            out_channels: pow($this->patch_size, 2),
            kernel_size: $this->patch_size,
            padding: 0,
            stride: $this->patch_size,
        );
        $this->patch_emb = $this->nn->Linear(pow($this->patch_size, 2), $this->emb_size);
        $this->cls_token = $this->torch->randn([1, 1, $this->emb_size]);
        $this->pos_emb = $this->torch->randn([1, pow($this->patch_count, 2) + 1, $this->emb_size]);
        $encoder_layer = $this->nn->TransformerEncoderLayer(
            $this->emb_size,
            2,
            dim_feedforward: 2 * $this->emb_size,
            dropout: 0.1,
            activation: 'relu',
            layer_norm_eps: 1e-5,
            batch_first: true
        );
        $this->tranformer_enc = $this->nn->TransformerEncoder($encoder_layer, 3);
        $this->cls_linear = $this->nn->Linear($this->emb_size, 10);
    }

    /**
     * The forward pass of the model.
     * @param mixed $x input data
     * @return mixed model output
     */
    public function forward($x)
    {
        $operator = \PyCore::import('operator');
        $x = $this->conv->forward($x);
        $batch_size = $x->size(0);
        $out_channels = $x->size(1);
        $height = $x->size(2);
        $width = $x->size(3);
        $x = $x->view($batch_size, $out_channels, $height * $width);
        $x = $x->permute([0, 2, 1]);
        $x = $this->patch_emb->forward($x);
        $cls_token = $this->cls_token->expand([$x->size(0), 1, $x->size(2)]);
        $x = $this->torch->cat([$cls_token, $x], 1);
        $x = $operator->__add__($x, $this->pos_emb);
        $x = $this->tranformer_enc->forward($x);
        return $this->cls_linear->forward($x->select(1, 0));
    }
}

// Import torch for the subsequent deep-learning operations
$torch = PyCore::import('torch');
// Instantiate the ViT (Vision Transformer) model
$vit = new ViT();
// Create a random input tensor of shape (5, 1, 28, 28), simulating a batch of images
$x = $torch->rand(5, 1, 28, 28);
// Run the forward pass on $x to obtain the output $y
$y = $vit->forward($x);
// Print the result of the forward pass
PyCore::print($y);
```

Running the script prints:

```
# php ViT.php
tensor([[ 1.4124e-01, -2.2445e-01, -4.8343e-02,  1.0453e+00,  2.6407e-01,
         -1.0721e+00, -4.5355e-01,  9.3695e-01,  2.0814e-01, -6.9242e-01],
        [ 1.3197e-01, -1.7860e-01, -3.5619e-02,  1.0052e+00,  3.5701e-01,
         -1.0619e+00, -5.5952e-01,  8.9957e-01,  2.2079e-01, -7.3373e-01],
        [ 7.5269e-04, -1.9265e-01, -2.2268e-02,  9.1797e-01,  4.4237e-01,
         -9.6516e-01, -5.3235e-01,  1.0040e+00,  1.9907e-01, -8.6913e-01],
        [ 4.2739e-02, -1.3659e-01, -1.5089e-01,  9.2313e-01,  2.9609e-01,
         -1.0178e+00, -3.7121e-01,  9.5373e-01,  1.0967e-01, -7.0122e-01],
        [-1.1353e-01, -4.2927e-02, -5.9407e-02,  1.1204e+00,  1.0559e-01,
         -1.1278e+00, -2.8934e-01,  1.0370e+00,  2.5948e-01, -8.9551e-01]],
       grad_fn=<AddmmBackward0>)
```

In the code above, `phpy`'s `PyCore::import()` lets us import and use PyTorch modules such as `torch` and `torch.nn` from PHP. PHP can therefore call Python code directly and build a complex deep-learning model. For example, while constructing the ViT model we use Python's `torch.nn.Conv2d` and `torch.nn.Linear` from PHP; these modules play a crucial role in building deep-learning models.

### Python implementation

Compared with the Python code in `vit.py`, the PHP version has almost exactly the same structure and logic. This is because `phpy` lets us use Python's modules and calling conventions directly from PHP, which keeps the two versions highly consistent and portable.

```python
from torch import nn
import torch


class ViT(nn.Module):
    def __init__(self, emb_size=16):
        super().__init__()
        self.patch_size = 4
        self.patch_count = 28 // self.patch_size
        self.conv = nn.Conv2d(in_channels=1,
                              out_channels=self.patch_size ** 2,
                              kernel_size=self.patch_size,
                              padding=0,
                              stride=self.patch_size)
        self.patch_emb = nn.Linear(in_features=self.patch_size ** 2, out_features=emb_size)
        self.cls_token = nn.Parameter(torch.rand(1, 1, emb_size))
        self.pos_emb = nn.Parameter(torch.rand(1, self.patch_count ** 2 + 1, emb_size))
        self.tranformer_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=emb_size, nhead=2, batch_first=True),
            num_layers=3)
        self.cls_linear = nn.Linear(in_features=emb_size, out_features=10)

    def forward(self, x):
        x = self.conv(x)
        x = x.view(x.size(0), x.size(1), self.patch_count ** 2)
        x = x.permute(0, 2, 1)
        x = self.patch_emb(x)
        cls_token = self.cls_token.expand(x.size(0), 1, x.size(2))
        x = torch.cat((cls_token, x), dim=1)
        x = self.pos_emb + x
        y = self.tranformer_enc(x)
        return self.cls_linear(y[:, 0, :])


if __name__ == '__main__':
    vit = ViT()
    x = torch.rand(5, 1, 28, 28)
    y = vit(x)
    print(y.shape)
```

## Use cases and significance of phpy

In practice, PHP is mostly used for web development and has no native deep-learning support, which makes implementing complex machine-learning models in a PHP project a real challenge. With `phpy`, however, we can call Python deep-learning frameworks (such as PyTorch and TensorFlow) directly and integrate complex AI algorithms into PHP applications seamlessly. This not only broadens PHP's range of applications but also gives PHP developers more options: when building a web application that needs real-time prediction or heavy computation, a PHP developer can draw on Python's rich ecosystem directly instead of reimplementing those algorithms.

## Conclusion

With `phpy`, we can easily bring Python deep-learning models such as ViT into PHP. This demonstrates PHP's flexibility and opens the door to deep learning for PHP developers. As AI technology continues to evolve, the combination of PHP and Python will offer developers ever more opportunities for innovation.
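Both implementations rest on the same shape arithmetic: a 28×28 image cut into 4×4 patches yields 7×7 = 49 patches, and prepending the class token gives 50 tokens of dimension `emb_size`. As a sanity check, that bookkeeping can be traced without `torch` at all; the sketch below is illustrative (the function name `vit_shapes` is ours, not part of `phpy` or PyTorch):

```python
# Trace the tensor shapes through the ViT forward pass using plain
# Python arithmetic; no torch required. Defaults mirror the article:
# batch 5, 28x28 single-channel images, 4x4 patches, emb_size 16.

def vit_shapes(batch=5, image=28, patch=4, emb_size=16, num_classes=10):
    patch_count = image // patch  # patches per side: 28 // 4 = 7
    # Conv2d(kernel=stride=patch) maps (B, 1, 28, 28) -> (B, patch^2, 7, 7)
    after_conv = (batch, patch * patch, patch_count, patch_count)
    # view + permute flatten the 7x7 grid into a sequence: (B, 49, patch^2)
    after_permute = (batch, patch_count ** 2, patch * patch)
    # Linear patch embedding: (B, 49, emb_size)
    after_emb = (batch, patch_count ** 2, emb_size)
    # prepend the class token: (B, 50, emb_size)
    after_cls = (batch, patch_count ** 2 + 1, emb_size)
    # select token 0 and classify: (B, num_classes)
    logits = (batch, num_classes)
    return [after_conv, after_permute, after_emb, after_cls, logits]


for shape in vit_shapes():
    print(shape)
```

The final `(5, 10)` matches the shape of the tensor printed by both the PHP and Python scripts above.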