# 10 Standard Datasets for Practicing Applied Machine Learning

> Original: [https://machinelearningmastery.com/standard-machine-learning-datasets/](https://machinelearningmastery.com/standard-machine-learning-datasets/)

The key to getting good at applied machine learning is practicing on many different datasets.

This is because each problem is different and requires subtly different data preparation and modeling methods.

In this post, you will discover 10 top standard machine learning datasets that you can use for practice.

Let's dive in.

* **Update Mar/2018**: Added alternate links to download the Pima Indians and Boston Housing datasets, as the originals appear to have been taken down.
* **Update Feb/2002**: Minor update to the expected default RMSE for the insurance dataset.

## Overview

### Structured Approach

Each dataset is summarized in a consistent way. This makes them easy to compare and navigate as you practice a specific data preparation technique or modeling method.

The aspects you need to know about each dataset are:

1. **Name**: How to refer to the dataset.
2. **Problem Type**: Whether the problem is regression or classification.
3. **Inputs and Outputs**: The number and known names of input and output features.
4. **Performance**: Baseline performance for comparison using the Zero Rule algorithm, as well as the best known performance (if known).
5. **Sample**: A snapshot of the first 5 rows of raw data.
6. **Links**: Where you can download the dataset and learn more about it.

### Standard Datasets

Below is the list of the 10 datasets we will cover.

Each dataset is small enough to fit into memory and review in a spreadsheet. All datasets are comprised of tabular data with no (explicitly) missing values.

1. Swedish Auto Insurance Dataset.
2. Wine Quality Dataset.
3. Pima Indians Diabetes Dataset.
4. Sonar Dataset.
5. Banknote Dataset.
6. Iris Flowers Dataset.
7. Abalone Dataset.
8. Ionosphere Dataset.
9. Wheat Seeds Dataset.
10. Boston House Price Dataset.

## 1. Swedish Auto Insurance Dataset

The Swedish Auto Insurance Dataset involves predicting the total payment for all claims in thousands of Swedish Kronor, given the total number of claims.

It is a regression problem. It is comprised of 63 observations with 1 input variable and 1 output variable. The variable names are as follows:

1. Number of claims.
2. Total payment for all claims in thousands of Swedish Kronor.

The baseline performance of predicting the mean value is an RMSE of approximately 81 thousand Kronor.

A sample of the first 5 rows is listed below.

```py
108,392.5
19,46.2
13,15.7
124,422.2
40,119.4
```

A scatter plot of the entire dataset is shown below.

![Swedish Auto Insurance Dataset](https://3qeqpr26caki16dnhd19sv6by6v-wpengine.netdna-ssl.com/wp-content/uploads/2016/10/Swedish-Insurance-Dataset.png)

Swedish Auto Insurance Dataset

* [Download](https://www.math.muni.cz/~kolacek/docs/frvs/M7222/data/AutoInsurSweden.txt)
* [More Information](http://college.cengage.com/mathematics/brase/understandable_statistics/7e/students/datasets/slr/frames/slr06.html)

## 2. Wine Quality Dataset

The Wine Quality Dataset involves predicting the quality of white wines on a scale, given chemical measures of each wine.

It is a multi-class classification problem, but could also be framed as a regression problem. The number of observations for each class is not balanced. There are 4,898 observations with 11 input variables and one output variable. The variable names are as follows:

1. Fixed acidity.
2. Volatile acidity.
3. Citric acid.
4. Residual sugar.
5. Chlorides.
6. Free sulfur dioxide.
7. Total sulfur dioxide.
8. Density.
9. pH.
10. Sulphates.
11. Alcohol.
12. Quality (score between 0 and 10).

The baseline performance of predicting the mean value is an RMSE of approximately 0.148 quality points.

A sample of the first 5 rows is listed below.

```py
7,0.27,0.36,20.7,0.045,45,170,1.001,3,0.45,8.8,6
6.3,0.3,0.34,1.6,0.049,14,132,0.994,3.3,0.49,9.5,6
8.1,0.28,0.4,6.9,0.05,30,97,0.9951,3.26,0.44,10.1,6
7.2,0.23,0.32,8.5,0.058,47,186,0.9956,3.19,0.4,9.9,6
7.2,0.23,0.32,8.5,0.058,47,186,0.9956,3.19,0.4,9.9,6
```

* [Download](http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv)
* [More Information](http://archive.ics.uci.edu/ml/datasets/Wine+Quality)

## 3. Pima Indians Diabetes Dataset

The Pima Indians Diabetes Dataset involves predicting the onset of diabetes within 5 years in Pima Indians, given medical details.

It is a binary (2-class) classification problem. The number of observations for each class is not balanced. There are 768 observations with 8 input variables and 1 output variable. Missing values are believed to be encoded with zero values. The variable names are as follows:

1. Number of times pregnant.
2. Plasma glucose concentration at 2 hours in an oral glucose tolerance test.
3. Diastolic blood pressure (mm Hg).
4. Triceps skinfold thickness (mm).
5. 2-hour serum insulin (mu U/ml).
6. Body mass index (weight in kg/(height in m)^2).
7. Diabetes pedigree function.
8. Age (years).
9. Class variable (0 or 1).

The baseline performance of predicting the most prevalent class is a classification accuracy of approximately 65%. Top results achieve a classification accuracy of approximately 77%.

A sample of the first 5 rows is listed below.

```py
6,148,72,35,0,33.6,0.627,50,1
1,85,66,29,0,26.6,0.351,31,0
8,183,64,0,0,23.3,0.672,32,1
1,89,66,23,94,28.1,0.167,21,0
0,137,40,35,168,43.1,2.288,33,1
```

* [Download](https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data) (Update: [download from here](https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv))
* [More Information](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes)
* [Top Results](http://www.is.umk.pl/projects/datasets.html#Diabetes)
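As a quick illustration of the Zero Rule baseline quoted above, here is a minimal sketch (an addition for this write-up, not part of the original post) that loads the file from the alternate link and reports the accuracy of always predicting the most frequent class. It assumes pandas is installed and that the mirror URL is still reachable.

```py
# Minimal sketch: Zero Rule baseline for the Pima Indians Diabetes dataset.
# Assumption: pandas is available and the mirror URL above is still live.
import pandas as pd

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
data = pd.read_csv(url, header=None)  # the raw file has no header row

labels = data.iloc[:, -1]             # last column is the class variable (0 or 1)
baseline = labels.value_counts(normalize=True).max()
print("Zero Rule accuracy: %.1f%%" % (baseline * 100))  # roughly 65%
```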
## 4. Sonar Dataset

The Sonar Dataset involves predicting whether an object is a mine or a rock, given the strength of sonar returns at different angles.

It is a binary (2-class) classification problem. The number of observations for each class is not balanced. There are 208 observations with 60 input variables and 1 output variable. The variable names are as follows:

1. Sonar returns at different angles
2. ...
3. Class (M for mine, R for rock).

The baseline performance of predicting the most prevalent class is a classification accuracy of approximately 53%. Top results achieve a classification accuracy of approximately 88%.

A sample of the first 5 rows is listed below.

```py
0.0200,0.0371,0.0428,0.0207,0.0954,0.0986,0.1539,0.1601,0.3109,0.2111,0.1609,0.1582,0.2238,0.0645,0.0660,0.2273,0.3100,0.2999,0.5078,0.4797,0.5783,0.5071,0.4328,0.5550,0.6711,0.6415,0.7104,0.8080,0.6791,0.3857,0.1307,0.2604,0.5121,0.7547,0.8537,0.8507,0.6692,0.6097,0.4943,0.2744,0.0510,0.2834,0.2825,0.4256,0.2641,0.1386,0.1051,0.1343,0.0383,0.0324,0.0232,0.0027,0.0065,0.0159,0.0072,0.0167,0.0180,0.0084,0.0090,0.0032,R
0.0453,0.0523,0.0843,0.0689,0.1183,0.2583,0.2156,0.3481,0.3337,0.2872,0.4918,0.6552,0.6919,0.7797,0.7464,0.9444,1.0000,0.8874,0.8024,0.7818,0.5212,0.4052,0.3957,0.3914,0.3250,0.3200,0.3271,0.2767,0.4423,0.2028,0.3788,0.2947,0.1984,0.2341,0.1306,0.4182,0.3835,0.1057,0.1840,0.1970,0.1674,0.0583,0.1401,0.1628,0.0621,0.0203,0.0530,0.0742,0.0409,0.0061,0.0125,0.0084,0.0089,0.0048,0.0094,0.0191,0.0140,0.0049,0.0052,0.0044,R
0.0262,0.0582,0.1099,0.1083,0.0974,0.2280,0.2431,0.3771,0.5598,0.6194,0.6333,0.7060,0.5544,0.5320,0.6479,0.6931,0.6759,0.7551,0.8929,0.8619,0.7974,0.6737,0.4293,0.3648,0.5331,0.2413,0.5070,0.8533,0.6036,0.8514,0.8512,0.5045,0.1862,0.2709,0.4232,0.3043,0.6116,0.6756,0.5375,0.4719,0.4647,0.2587,0.2129,0.2222,0.2111,0.0176,0.1348,0.0744,0.0130,0.0106,0.0033,0.0232,0.0166,0.0095,0.0180,0.0244,0.0316,0.0164,0.0095,0.0078,R
0.0100,0.0171,0.0623,0.0205,0.0205,0.0368,0.1098,0.1276,0.0598,0.1264,0.0881,0.1992,0.0184,0.2261,0.1729,0.2131,0.0693,0.2281,0.4060,0.3973,0.2741,0.3690,0.5556,0.4846,0.3140,0.5334,0.5256,0.2520,0.2090,0.3559,0.6260,0.7340,0.6120,0.3497,0.3953,0.3012,0.5408,0.8814,0.9857,0.9167,0.6121,0.5006,0.3210,0.3202,0.4295,0.3654,0.2655,0.1576,0.0681,0.0294,0.0241,0.0121,0.0036,0.0150,0.0085,0.0073,0.0050,0.0044,0.0040,0.0117,R
0.0762,0.0666,0.0481,0.0394,0.0590,0.0649,0.1209,0.2467,0.3564,0.4459,0.4152,0.3952,0.4256,0.4135,0.4528,0.5326,0.7306,0.6193,0.2032,0.4636,0.4148,0.4292,0.5730,0.5399,0.3161,0.2285,0.6995,1.0000,0.7262,0.4724,0.5103,0.5459,0.2881,0.0981,0.1951,0.4181,0.4604,0.3217,0.2828,0.2430,0.1979,0.2444,0.1847,0.0841,0.0692,0.0528,0.0357,0.0085,0.0230,0.0046,0.0156,0.0031,0.0054,0.0105,0.0110,0.0015,0.0072,0.0048,0.0107,0.0094,R
```

* [Download](https://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/sonar/sonar.all-data)
* [More Information](https://archive.ics.uci.edu/ml/datasets/Connectionist+Bench+(Sonar,+Mines+vs.+Rocks))
* [Top Results](http://www.is.umk.pl/projects/datasets.html#Sonar)

## 5. Banknote Dataset

The Banknote Dataset involves predicting whether a given banknote is authentic, given a number of measures taken from a photograph.

It is a binary (2-class) classification problem. The number of observations for each class is not balanced. There are 1,372 observations with 4 input variables and 1 output variable. The variable names are as follows:

1. Variance of Wavelet Transformed image (continuous).
2. Skewness of Wavelet Transformed image (continuous).
3. Kurtosis of Wavelet Transformed image (continuous).
4. Entropy of image (continuous).
5. Class (0 for authentic, 1 for inauthentic).

The baseline performance of predicting the most prevalent class is a classification accuracy of approximately 50%.

A sample of the first rows is listed below.

```py
3.6216,8.6661,-2.8073,-0.44699,0
4.5459,8.1674,-2.4586,-1.4621,0
3.866,-2.6383,1.9242,0.10645,0
3.4566,9.5228,-4.0112,-3.5944,0
0.32924,-4.4552,4.5718,-0.9888,0
4.3684,9.6718,-3.9606,-3.1625,0
```

* [Download](http://archive.ics.uci.edu/ml/machine-learning-databases/00267/data_banknote_authentication.txt)
* [More Information](http://archive.ics.uci.edu/ml/datasets/banknote+authentication)
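To get a feel for how far a simple model can move past the roughly 50% baseline above, the sketch below (an illustration added here, not from the original post) cross-validates a logistic regression on the Banknote data. It assumes pandas and scikit-learn are installed and that the UCI URL above is reachable.

```py
# Minimal sketch: compare a simple scikit-learn model against the ~50% baseline
# on the Banknote dataset. Assumption: pandas and scikit-learn are installed.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

url = "http://archive.ics.uci.edu/ml/machine-learning-databases/00267/data_banknote_authentication.txt"
data = pd.read_csv(url, header=None)  # 4 inputs, class label in the last column

X, y = data.iloc[:, :-1], data.iloc[:, -1]
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
print("10-fold CV accuracy: %.1f%%" % (scores.mean() * 100))
```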
## 6. Iris Flowers Dataset

The Iris Flowers Dataset involves predicting the flower species, given measurements of iris flowers.

It is a multi-class classification problem. The number of observations for each class is balanced. There are 150 observations with 4 input variables and 1 output variable. The variable names are as follows:

1. Sepal length in cm.
2. Sepal width in cm.
3. Petal length in cm.
4. Petal width in cm.
5. Class (Iris Setosa, Iris Versicolour, Iris Virginica).

The baseline performance of predicting the most prevalent class is a classification accuracy of approximately 26%.

A sample of the first 5 rows is listed below.

```py
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
```

* [Download](http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data)
* [More Information](http://archive.ics.uci.edu/ml/datasets/Iris)

## 7. Abalone Dataset

The Abalone Dataset involves predicting the age of abalone, given objective measures of individuals.

It is a multi-class classification problem, but could also be framed as a regression problem. The number of observations for each class is not balanced. There are 4,177 observations with 8 input variables and 1 output variable. The variable names are as follows:

1. Sex (M, F, I).
2. Length.
3. Diameter.
4. Height.
5. Whole weight.
6. Shucked weight.
7. Viscera weight.
8. Shell weight.
9. Rings.

The baseline performance of predicting the most prevalent class is a classification accuracy of approximately 16%. The baseline performance of predicting the mean value is an RMSE of approximately 3.2 rings.

A sample of the first 5 rows is listed below.

```py
M,0.455,0.365,0.095,0.514,0.2245,0.101,0.15,15
M,0.35,0.265,0.09,0.2255,0.0995,0.0485,0.07,7
F,0.53,0.42,0.135,0.677,0.2565,0.1415,0.21,9
M,0.44,0.365,0.125,0.516,0.2155,0.114,0.155,10
I,0.33,0.255,0.08,0.205,0.0895,0.0395,0.055,7
```

* [Download](http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data)
* [More Information](http://archive.ics.uci.edu/ml/datasets/Abalone)
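If you frame Abalone as a regression problem, the quoted baseline of about 3.2 rings is simply the RMSE of always predicting the mean. A minimal sketch of that calculation is below (added here for illustration; it assumes pandas and NumPy are installed and the UCI URL above is reachable).

```py
# Minimal sketch: RMSE of predicting the mean number of rings on the Abalone
# data, i.e. the regression baseline quoted above.
# Assumption: pandas and numpy are installed.
import numpy as np
import pandas as pd

url = "http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data"
data = pd.read_csv(url, header=None)

rings = data.iloc[:, -1].astype(float)                # last column: rings
rmse = np.sqrt(np.mean((rings - rings.mean()) ** 2))  # RMSE of the mean predictor
print("Baseline RMSE: %.2f rings" % rmse)             # roughly 3.2
```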
## 8. Ionosphere Dataset

The Ionosphere Dataset involves predicting whether or not there is structure in the atmosphere, given radar returns targeting free electrons in the ionosphere.

It is a binary (2-class) classification problem. The number of observations for each class is not balanced. There are 351 observations with 34 input variables and 1 output variable. The variable names are as follows:

1. 17 pairs of radar return data.
2. ...
3. Class (g for good, b for bad).

The baseline performance of predicting the most prevalent class is a classification accuracy of approximately 64%. Top results achieve a classification accuracy of approximately 94%.

A sample of the first 5 rows is listed below.

```py
1,0,0.99539,-0.05889,0.85243,0.02306,0.83398,-0.37708,1,0.03760,0.85243,-0.17755,0.59755,-0.44945,0.60536,-0.38223,0.84356,-0.38542,0.58212,-0.32192,0.56971,-0.29674,0.36946,-0.47357,0.56811,-0.51171,0.41078,-0.46168,0.21266,-0.34090,0.42267,-0.54487,0.18641,-0.45300,g
1,0,1,-0.18829,0.93035,-0.36156,-0.10868,-0.93597,1,-0.04549,0.50874,-0.67743,0.34432,-0.69707,-0.51685,-0.97515,0.05499,-0.62237,0.33109,-1,-0.13151,-0.45300,-0.18056,-0.35734,-0.20332,-0.26569,-0.20468,-0.18401,-0.19040,-0.11593,-0.16626,-0.06288,-0.13738,-0.02447,b
1,0,1,-0.03365,1,0.00485,1,-0.12062,0.88965,0.01198,0.73082,0.05346,0.85443,0.00827,0.54591,0.00299,0.83775,-0.13644,0.75535,-0.08540,0.70887,-0.27502,0.43385,-0.12062,0.57528,-0.40220,0.58984,-0.22145,0.43100,-0.17365,0.60436,-0.24180,0.56045,-0.38238,g
1,0,1,-0.45161,1,1,0.71216,-1,0,0,0,0,0,0,-1,0.14516,0.54094,-0.39330,-1,-0.54467,-0.69975,1,0,0,1,0.90695,0.51613,1,1,-0.20099,0.25682,1,-0.32382,1,b
1,0,1,-0.02401,0.94140,0.06531,0.92106,-0.23255,0.77152,-0.16399,0.52798,-0.20275,0.56409,-0.00712,0.34395,-0.27457,0.52940,-0.21780,0.45107,-0.17813,0.05982,-0.35575,0.02309,-0.52879,0.03286,-0.65158,0.13290,-0.53206,0.02431,-0.62197,-0.05707,-0.59573,-0.04608,-0.65697,g
```

* [Download](https://archive.ics.uci.edu/ml/machine-learning-databases/ionosphere/ionosphere.data)
* [More Information](https://archive.ics.uci.edu/ml/datasets/Ionosphere)
* [Top Results](http://www.is.umk.pl/projects/datasets.html#Ionosphere)

## 9. Wheat Seeds Dataset

The Wheat Seeds Dataset involves predicting species, given measurements of seeds from different varieties of wheat.

It is a multi-class (3-class) classification problem. The number of observations for each class is balanced. There are 210 observations with 7 input variables and 1 output variable. The variable names are as follows:

1. Area.
2. Perimeter.
3. Compactness.
4. Length of kernel.
5. Width of kernel.
6. Asymmetry coefficient.
7. Length of kernel groove.
8. Class (1, 2, 3).

The baseline performance of predicting the most prevalent class is a classification accuracy of approximately 28%.

A sample of the first 5 rows is listed below.

```py
15.26,14.84,0.871,5.763,3.312,2.221,5.22,1
14.88,14.57,0.8811,5.554,3.333,1.018,4.956,1
14.29,14.09,0.905,5.291,3.337,2.699,4.825,1
13.84,13.94,0.8955,5.324,3.379,2.259,4.805,1
16.14,14.99,0.9034,5.658,3.562,1.355,5.175,1
```

* [Download](http://archive.ics.uci.edu/ml/machine-learning-databases/00236/seeds_dataset.txt)
* [More Information](http://archive.ics.uci.edu/ml/datasets/seeds)

## 10. Boston House Price Dataset

The Boston House Price Dataset involves predicting house prices in thousands of dollars, given details of the house and its neighborhood.

It is a regression problem. There are 506 observations with 13 input variables and 1 output variable. The variable names are as follows:

1. CRIM: per capita crime rate by town.
2. ZN: proportion of residential land zoned for lots over 25,000 sq. ft.
3. INDUS: proportion of non-retail business acres per town.
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
5. NOX: nitric oxides concentration (parts per 10 million).
6. RM: average number of rooms per dwelling.
7. AGE: proportion of owner-occupied units built prior to 1940.
8. DIS: weighted distances to five Boston employment centers.
9. RAD: index of accessibility to radial highways.
10. TAX: full-value property-tax rate per $10,000.
11. PTRATIO: pupil-teacher ratio by town.
12. B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
13. LSTAT: % lower status of the population.
14. MEDV: median value of owner-occupied homes in $1000s.

The baseline performance of predicting the mean value is an RMSE of approximately 9.21 thousand dollars.

A sample of the first 5 rows is listed below.

```py
0.00632 18.00 2.310 0 0.5380 6.5750 65.20 4.0900 1 296.0 15.30 396.90 4.98 24.00
0.02731 0.00 7.070 0 0.4690 6.4210 78.90 4.9671 2 242.0 17.80 396.90 9.14 21.60
0.02729 0.00 7.070 0 0.4690 7.1850 61.10 4.9671 2 242.0 17.80 392.83 4.03 34.70
0.03237 0.00 2.180 0 0.4580 6.9980 45.80 6.0622 3 222.0 18.70 394.63 2.94 33.40
0.06905 0.00 2.180 0 0.4580 7.1470 54.20 6.0622 3 222.0 18.70 396.90 5.33 36.20
```

* [Download](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data) (Update: [download from here](https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.data))
* [More Information](https://archive.ics.uci.edu/ml/datasets/Housing)

## Summary

In this post, you discovered 10 top standard datasets that you can use to practice applied machine learning.

Here is your next step:

1. Pick one dataset.
2. Grab your favorite tool (like Weka, scikit-learn or R).
3. See how much you can beat the standard scores (a small scikit-learn sketch follows this list).
4. Report your results in the comments below.
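As one possible version of steps 2 and 3, the sketch below (added here as an illustration, not part of the original post) loads the Iris data bundled with scikit-learn and checks how far a k-nearest neighbors model gets past the roughly 26% baseline quoted earlier. It assumes scikit-learn is installed; the hyperparameters are arbitrary starting points.

```py
# Minimal sketch of steps 2-3: try to beat the ~26% Zero Rule baseline on Iris.
# Assumption: scikit-learn is installed; its bundled copy of the dataset is used.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=10)
print("10-fold CV accuracy: %.1f%%" % (scores.mean() * 100))
```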