<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

# The ImageNet Dataset

According to [http://image-net.org](http://image-net.org):

> ImageNet is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a synonym set or synset.

ImageNet has about 100K synsets and, on average, about 1,000 human-annotated images per synset. ImageNet stores only references to the images; the images themselves remain at their original locations on the internet. In deep learning papers, ImageNet-1K refers to the dataset released as part of the ImageNet **Large Scale Visual Recognition Challenge** (**ILSVRC**), in which images are classified into 1,000 categories.

The 1,000 challenge categories can be browsed at the following URLs:

- [http://image-net.org/challenges/LSVRC/2017/browse-synsets](http://image-net.org/challenges/LSVRC/2017/browse-synsets)
- [http://image-net.org/challenges/LSVRC/2016/browse-synsets](http://image-net.org/challenges/LSVRC/2016/browse-synsets)
- [http://image-net.org/challenges/LSVRC/2015/browse-synsets](http://image-net.org/challenges/LSVRC/2015/browse-synsets)
- [http://image-net.org/challenges/LSVRC/2014/browse-synsets](http://image-net.org/challenges/LSVRC/2014/browse-synsets)
- [http://image-net.org/challenges/LSVRC/2013/browse-synsets](http://image-net.org/challenges/LSVRC/2013/browse-synsets)
- [http://image-net.org/challenges/LSVRC/2012/browse-synsets](http://image-net.org/challenges/LSVRC/2012/browse-synsets)
- [http://image-net.org/challenges/LSVRC/2011/browse-synsets](http://image-net.org/challenges/LSVRC/2011/browse-synsets)
- [http://image-net.org/challenges/LSVRC/2010/browse-synsets](http://image-net.org/challenges/LSVRC/2010/browse-synsets)

We wrote a custom function to download the ImageNet labels from Google:

```py
import urllib.request

def build_id2label(self):
    # Location of the label files published with the TensorFlow models repo.
    base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/research/inception/inception/data'
    synset_url = '{}/imagenet_lsvrc_2015_synsets.txt'.format(base_url)
    synset_to_human_url = '{}/imagenet_metadata.txt'.format(base_url)

    # The 1,000 synset IDs used in the ILSVRC 2015 challenge.
    filename, _ = urllib.request.urlretrieve(synset_url)
    synset_list = [s.strip() for s in open(filename).readlines()]
    num_synsets_in_ilsvrc = len(synset_list)
    assert num_synsets_in_ilsvrc == 1000

    # Human-readable names for all 21,842 ImageNet synsets.
    filename, _ = urllib.request.urlretrieve(synset_to_human_url)
    synset_to_human_list = open(filename).readlines()
    num_synsets_in_all_imagenet = len(synset_to_human_list)
    assert num_synsets_in_all_imagenet == 21842

    # Map synset ID -> human-readable name, one tab-separated pair per line.
    synset2name = {}
    for s in synset_to_human_list:
        parts = s.strip().split('\t')
        assert len(parts) == 2
        synset = parts[0]
        name = parts[1]
        synset2name[synset] = name

    # Inception models reserve class 0 for a background class,
    # hence 1,001 classes; VGG models use exactly 1,000.
    if self.n_classes == 1001:
        id2label = {0: 'empty'}
        id = 1
    else:
        id2label = {}
        id = 0

    for synset in synset_list:
        label = synset2name[synset]
        id2label[id] = label
        id += 1

    return id2label
```

We load these labels into our Jupyter Notebook as follows:

```py
### Load ImageNet dataset for labels
from datasetslib.imagenet import imageNet

inet = imageNet()
inet.load_data(n_classes=1000)
# n_classes is 1001 for Inception models and 1000 for VGG models
```
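To see how this mapping is used, here is a minimal sketch of decoding a prediction's five highest-scoring classes into names. It assumes `build_id2label()` can be called on the `inet` object loaded above; the `probs` vector is random stand-in data, not a real model's output:

```py
import numpy as np

# Random stand-in for a model's softmax output over the 1,000 classes.
probs = np.random.rand(1000)
probs /= probs.sum()

# Assumes the mapping built by build_id2label() is available on inet.
id2label = inet.build_id2label()

# Class indices of the five largest probabilities, best first.
top5 = np.argsort(probs)[::-1][:5]
for i in top5:
    print('{:.4f}  {}'.format(probs[i], id2label[i]))
```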
Popular pretrained image classification models trained on the ImageNet-1K dataset are listed in the following table:

| **Model name** | **Top-1 accuracy** | **Top-5 accuracy** | **Top-5 error rate** | **Link to original paper** |
| --- | --- | --- | --- | --- |
| AlexNet | | | 15.3% | [https://www.cs.toronto.edu/~fritz/absps/imagenet.pdf](https://www.cs.toronto.edu/~fritz/absps/imagenet.pdf) |
| Inception, also known as Inception V1 | 69.8 | 89.6 | 6.67% | [https://arxiv.org/abs/1409.4842](https://arxiv.org/abs/1409.4842) |
| BN-Inception-V2, also known as Inception V2 | 73.9 | 91.8 | 4.9% | [https://arxiv.org/abs/1502.03167](https://arxiv.org/abs/1502.03167) |
| Inception V3 | 78.0 | 93.9 | 3.46% | [https://arxiv.org/abs/1512.00567](https://arxiv.org/abs/1512.00567) |
| Inception V4 | 80.2 | 95.2 | | [http://arxiv.org/abs/1602.07261](http://arxiv.org/abs/1602.07261) |
| Inception-ResNet-V2 | 80.4 | 95.2 | | [http://arxiv.org/abs/1602.07261](http://arxiv.org/abs/1602.07261) |
| VGG16 | 71.5 | 89.8 | 7.4% | [https://arxiv.org/abs/1409.1556](https://arxiv.org/abs/1409.1556) |
| VGG19 | 71.1 | 89.8 | 7.3% | [https://arxiv.org/abs/1409.1556](https://arxiv.org/abs/1409.1556) |
| ResNet V1 50 | 75.2 | 92.2 | 7.24% | [https://arxiv.org/abs/1512.03385](https://arxiv.org/abs/1512.03385) |
| ResNet V1 101 | 76.4 | 92.9 | | [https://arxiv.org/abs/1512.03385](https://arxiv.org/abs/1512.03385) |
| ResNet V1 152 | 76.8 | 93.2 | | [https://arxiv.org/abs/1512.03385](https://arxiv.org/abs/1512.03385) |
| ResNet V2 50 | 75.6 | 92.8 | | [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027) |
| ResNet V2 101 | 77.0 | 93.7 | | [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027) |
| ResNet V2 152 | 77.8 | 94.1 | | [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027) |
| ResNet V2 200 | 79.9 | 95.2 | | [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027) |
| Xception | 79.0 | 94.5 | | [https://arxiv.org/abs/1610.02357](https://arxiv.org/abs/1610.02357) |
| MobileNet V1 variants | 41.3 to 70.7 | 66.2 to 89.5 | | [https://arxiv.org/pdf/1704.04861.pdf](https://arxiv.org/pdf/1704.04861.pdf) |

In the preceding table, the Top-1 and Top-5 metrics refer to the model's performance on the ImageNet validation dataset.
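Concretely, a Top-k metric counts a prediction as correct when the true class appears among the model's k highest-scoring classes, so Top-5 accuracy is always at least as high as Top-1. A self-contained sketch with made-up scores, not tied to any particular model above:

```py
import numpy as np

def top_k_accuracy(logits, labels, k=5):
    """Fraction of rows whose true label is among the k highest scores.

    logits: (n_samples, n_classes) array of model scores.
    labels: (n_samples,) array of true class indices.
    """
    # Indices of the k largest scores per row (order within the k is irrelevant).
    top_k = np.argsort(logits, axis=1)[:, -k:]
    hits = np.any(top_k == labels[:, None], axis=1)
    return hits.mean()

# Made-up example: 4 samples, 10 classes.
rng = np.random.RandomState(0)
logits = rng.rand(4, 10)
labels = np.array([3, 1, 7, 2])
print('top-1:', top_k_accuracy(logits, labels, k=1))
print('top-5:', top_k_accuracy(logits, labels, k=5))
```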
Google Research recently released a new kind of model known as MobileNets. MobileNets were developed with a mobile-first strategy, trading accuracy for low resource usage. MobileNets are designed to consume little power and provide low latency for a better experience on mobile and embedded devices. Google provides 16 pretrained checkpoint files for the MobileNet models, each with a different number of parameters and **multiply-accumulates** (**MACs**). The higher the MACs and parameter counts, the higher the resource usage and latency; you can therefore trade off higher accuracy against higher resource usage and latency.

| **Model checkpoint** | **Million MACs** | **Million parameters** | **Top-1 accuracy** | **Top-5 accuracy** |
| --- | --- | --- | --- | --- |
| [MobileNet_v1_1.0_224](http://download.tensorflow.org/models/mobilenet_v1_1.0_224_2017_06_14.tar.gz) | 569 | 4.24 | 70.7 | 89.5 |
| [MobileNet_v1_1.0_192](http://download.tensorflow.org/models/mobilenet_v1_1.0_192_2017_06_14.tar.gz) | 418 | 4.24 | 69.3 | 88.9 |
| [MobileNet_v1_1.0_160](http://download.tensorflow.org/models/mobilenet_v1_1.0_160_2017_06_14.tar.gz) | 291 | 4.24 | 67.2 | 87.5 |
| [MobileNet_v1_1.0_128](http://download.tensorflow.org/models/mobilenet_v1_1.0_128_2017_06_14.tar.gz) | 186 | 4.24 | 64.1 | 85.3 |
| [MobileNet_v1_0.75_224](http://download.tensorflow.org/models/mobilenet_v1_0.75_224_2017_06_14.tar.gz) | 317 | 2.59 | 68.4 | 88.2 |
| [MobileNet_v1_0.75_192](http://download.tensorflow.org/models/mobilenet_v1_0.75_192_2017_06_14.tar.gz) | 233 | 2.59 | 67.4 | 87.3 |
| [MobileNet_v1_0.75_160](http://download.tensorflow.org/models/mobilenet_v1_0.75_160_2017_06_14.tar.gz) | 162 | 2.59 | 65.2 | 86.1 |
| [MobileNet_v1_0.75_128](http://download.tensorflow.org/models/mobilenet_v1_0.75_128_2017_06_14.tar.gz) | 104 | 2.59 | 61.8 | 83.6 |
| [MobileNet_v1_0.50_224](http://download.tensorflow.org/models/mobilenet_v1_0.50_224_2017_06_14.tar.gz) | 150 | 1.34 | 64.0 | 85.4 |
| [MobileNet_v1_0.50_192](http://download.tensorflow.org/models/mobilenet_v1_0.50_192_2017_06_14.tar.gz) | 110 | 1.34 | 62.1 | 84.0 |
| [MobileNet_v1_0.50_160](http://download.tensorflow.org/models/mobilenet_v1_0.50_160_2017_06_14.tar.gz) | 77 | 1.34 | 59.9 | 82.5 |
| [MobileNet_v1_0.50_128](http://download.tensorflow.org/models/mobilenet_v1_0.50_128_2017_06_14.tar.gz) | 49 | 1.34 | 56.2 | 79.6 |
| [MobileNet_v1_0.25_224](http://download.tensorflow.org/models/mobilenet_v1_0.25_224_2017_06_14.tar.gz) | 41 | 0.47 | 50.6 | 75.0 |
| [MobileNet_v1_0.25_192](http://download.tensorflow.org/models/mobilenet_v1_0.25_192_2017_06_14.tar.gz) | 34 | 0.47 | 49.0 | 73.6 |
| [MobileNet_v1_0.25_160](http://download.tensorflow.org/models/mobilenet_v1_0.25_160_2017_06_14.tar.gz) | 21 | 0.47 | 46.0 | 70.7 |
| [MobileNet_v1_0.25_128](http://download.tensorflow.org/models/mobilenet_v1_0.25_128_2017_06_14.tar.gz) | 14 | 0.47 | 41.3 | 66.2 |

For more information about MobileNets, visit the following resources:

- [https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html](https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html)
- [https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md)
- [https://arxiv.org/pdf/1704.04861.pdf](https://arxiv.org/pdf/1704.04861.pdf)
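Each checkpoint link in the preceding table points to a plain `.tar.gz` archive, so one can be fetched and unpacked with the Python standard library alone. A minimal sketch, using the MobileNet_v1_0.50_128 URL from the table; the destination directory name is an arbitrary choice:

```py
import os
import tarfile
import urllib.request

# One of the checkpoint tarballs from the preceding table.
url = 'http://download.tensorflow.org/models/mobilenet_v1_0.50_128_2017_06_14.tar.gz'
dest_dir = 'mobilenet_v1_0.50_128'  # arbitrary local directory

os.makedirs(dest_dir, exist_ok=True)

# Download to a temporary file, then unpack the checkpoint files.
filename, _ = urllib.request.urlretrieve(url)
with tarfile.open(filename, 'r:gz') as tar:
    tar.extractall(dest_dir)

print(os.listdir(dest_dir))
```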
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看