# Model selection: choosing estimators and their parameters

Proofreader: [@片刻](https://github.com/apachecn/scikit-learn-doc-zh)

Translator: [@森系](https://github.com/apachecn/scikit-learn-doc-zh)

## Score, and cross-validated scores

As we have seen, every estimator exposes a `score` method that can judge the quality of the fit (or the prediction) on new data. **Bigger is better**.

```
>>> from sklearn import datasets, svm
>>> digits = datasets.load_digits()
>>> X_digits = digits.data
>>> y_digits = digits.target
>>> svc = svm.SVC(C=1, kernel='linear')
>>> svc.fit(X_digits[:-100], y_digits[:-100]).score(X_digits[-100:], y_digits[-100:])
0.97999999999999998
```

To get a better measure of prediction accuracy (which we can use as a proxy for the goodness of fit of the model), we can successively split the data into the *folds* that we use for training and for testing:

```
>>> import numpy as np
>>> X_folds = np.array_split(X_digits, 3)
>>> y_folds = np.array_split(y_digits, 3)
>>> scores = list()
>>> for k in range(3):
...     # We use 'list' to copy, in order to 'pop' later on
...     X_train = list(X_folds)
...     X_test = X_train.pop(k)
...     X_train = np.concatenate(X_train)
...     y_train = list(y_folds)
...     y_test = y_train.pop(k)
...     y_train = np.concatenate(y_train)
...     scores.append(svc.fit(X_train, y_train).score(X_test, y_test))
>>> print(scores)
[0.93489148580968284, 0.95659432387312182, 0.93989983305509184]
```

This is called a [`KFold`](../../modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold "sklearn.model_selection.KFold") cross-validation.

## Cross-validation generators

scikit-learn has classes that can generate lists of train/test indices for popular cross-validation strategies.

These classes expose a `split` method, which accepts the dataset to be split and, for each iteration of the chosen cross-validation strategy, yields the train/test set indices.

This example shows the use of the `split` method:

```
>>> from sklearn.model_selection import KFold, cross_val_score
>>> X = ["a", "a", "b", "c", "c", "c"]
>>> k_fold = KFold(n_splits=3)
>>> for train_indices, test_indices in k_fold.split(X):
...      print('Train: %s | test: %s' % (train_indices, test_indices))
Train: [2 3 4 5] | test: [0 1]
Train: [0 1 4 5] | test: [2 3]
Train: [0 1 2 3] | test: [4 5]
```

The cross-validation can then be performed easily:

```
>>> [svc.fit(X_digits[train], y_digits[train]).score(X_digits[test], y_digits[test])
...  for train, test in k_fold.split(X_digits)]
[0.93489148580968284, 0.95659432387312182, 0.93989983305509184]
```

The cross-validation score can be directly calculated using the [`cross_val_score`](../../modules/generated/sklearn.model_selection.cross_val_score.html#sklearn.model_selection.cross_val_score "sklearn.model_selection.cross_val_score") helper. Given an estimator, a cross-validation object, and the input dataset, [`cross_val_score`](../../modules/generated/sklearn.model_selection.cross_val_score.html#sklearn.model_selection.cross_val_score "sklearn.model_selection.cross_val_score") repeatedly splits the data into a training and a testing set, trains the estimator on the training set, and computes a score on the testing set for each iteration of the cross-validation.

By default the estimator's `score` method is used to compute the individual scores. Refer to the [metrics module](../../modules/metrics.html#metrics) to learn more about the available scoring methods.

```
>>> cross_val_score(svc, X_digits, y_digits, cv=k_fold, n_jobs=-1)
array([ 0.93489149,  0.95659432,  0.93989983])
```

`n_jobs=-1` means that the computation will be dispatched on all the CPUs of the computer.

Alternatively, the `scoring` argument can be provided to specify an alternative scoring method:

```
>>> cross_val_score(svc, X_digits, y_digits, cv=k_fold,
...                 scoring='precision_macro')
array([ 0.93969761,  0.95911415,  0.94041254])
```

**Cross-validation generators**

| [`KFold`](../../modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold "sklearn.model_selection.KFold") **(n_splits, shuffle, random_state)** | [`StratifiedKFold`](../../modules/generated/sklearn.model_selection.StratifiedKFold.html#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") **(n_splits, shuffle, random_state)** | [`GroupKFold`](../../modules/generated/sklearn.model_selection.GroupKFold.html#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold") **(n_splits)** |
| --- | --- | --- |
| Splits the data into K folds, trains on K-1 folds, then tests on the left-out fold. | Same as K-Fold, but preserves the class distribution within each fold. | Ensures that the same group is not in both the testing and training sets. |

| [`ShuffleSplit`](../../modules/generated/sklearn.model_selection.ShuffleSplit.html#sklearn.model_selection.ShuffleSplit "sklearn.model_selection.ShuffleSplit") **(n_splits, test_size, train_size, random_state)** | [`StratifiedShuffleSplit`](../../modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html#sklearn.model_selection.StratifiedShuffleSplit "sklearn.model_selection.StratifiedShuffleSplit") | [`GroupShuffleSplit`](../../modules/generated/sklearn.model_selection.GroupShuffleSplit.html#sklearn.model_selection.GroupShuffleSplit "sklearn.model_selection.GroupShuffleSplit") |
| --- | --- | --- |
| Generates train/test indices based on random permutations. | Same as shuffle split, but preserves the class distribution within each iteration. | Ensures that the same group is not in both the testing and training sets. |

| [`LeaveOneGroupOut`](../../modules/generated/sklearn.model_selection.LeaveOneGroupOut.html#sklearn.model_selection.LeaveOneGroupOut "sklearn.model_selection.LeaveOneGroupOut") **()** | [`LeavePGroupsOut`](../../modules/generated/sklearn.model_selection.LeavePGroupsOut.html#sklearn.model_selection.LeavePGroupsOut "sklearn.model_selection.LeavePGroupsOut") **(n_groups)** | [`LeaveOneOut`](../../modules/generated/sklearn.model_selection.LeaveOneOut.html#sklearn.model_selection.LeaveOneOut "sklearn.model_selection.LeaveOneOut") **()** |
| --- | --- | --- |
| Takes a group array to group observations. | Leave P groups out. | Leave one observation out. |

| [`LeavePOut`](../../modules/generated/sklearn.model_selection.LeavePOut.html#sklearn.model_selection.LeavePOut "sklearn.model_selection.LeavePOut") **(p)** | [`PredefinedSplit`](../../modules/generated/sklearn.model_selection.PredefinedSplit.html#sklearn.model_selection.PredefinedSplit "sklearn.model_selection.PredefinedSplit") |
| --- | --- |
| Leave P observations out. | Generates train/test indices based on predefined splits. |

**Exercise**

[![http://sklearn.apachecn.org/cn/0.19.0/_images/sphx_glr_plot_cv_digits_001.png](https://box.kancloud.cn/d88615d69bffb39c5644dcbf7dc372b4_400x300.jpg)](../../auto_examples/exercises/plot_cv_digits.html)

On the digits dataset, plot the cross-validation score of an [`SVC`](../../modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC "sklearn.svm.SVC") estimator with a linear kernel as a function of the parameter `C` (use a logarithmic grid of points, from 1 to 10).
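All of the generators listed above share the same `split` interface that the `KFold` example demonstrated. As a minimal sketch (not part of the original tutorial), here is `ShuffleSplit` applied to the same toy list, using hypothetical illustrative parameter values:

```python
from sklearn.model_selection import ShuffleSplit

X = ["a", "a", "b", "c", "c", "c"]
# Three random permutations, each holding out 2 of the 6 samples for testing.
ss = ShuffleSplit(n_splits=3, test_size=2, random_state=0)
for train_indices, test_indices in ss.split(X):
    print('Train: %s | test: %s' % (train_indices, test_indices))
```

Unlike `KFold`, the test sets of successive splits may overlap, since each split draws a fresh random permutation.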
```
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn import datasets, svm

digits = datasets.load_digits()
X = digits.data
y = digits.target

svc = svm.SVC(kernel='linear')
C_s = np.logspace(-10, 0, 10)
```

**Solution:** [Cross-validation on Digits Dataset Exercise](../../auto_examples/exercises/plot_cv_digits.html#sphx-glr-auto-examples-exercises-plot-cv-digits-py)

## Grid-search and cross-validated estimators

### Grid-search

scikit-learn provides an object that, given data, computes the score during the fit of an estimator on a parameter grid and chooses the parameters that maximize the cross-validation score. This object takes an estimator during its construction and exposes an estimator API:

```
>>> from sklearn.model_selection import GridSearchCV, cross_val_score
>>> Cs = np.logspace(-6, -1, 10)
>>> clf = GridSearchCV(estimator=svc, param_grid=dict(C=Cs),
...                    n_jobs=-1)
>>> clf.fit(X_digits[:1000], y_digits[:1000])
GridSearchCV(cv=None,...
>>> clf.best_score_
0.925...
>>> clf.best_estimator_.C
0.0077...
```

```
>>> # Prediction performance on test set is not as good as on train set
>>> clf.score(X_digits[1000:], y_digits[1000:])
0.943...
```

By default, [`GridSearchCV`](../../modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") uses a 3-fold cross-validation. However, if it detects that a classifier is passed, rather than a regressor, it uses a stratified 3-fold.

**Nested cross-validation**

```
>>> cross_val_score(clf, X_digits, y_digits)
...
array([ 0.938...,  0.963...,  0.944...])
```

Two cross-validation loops are performed in parallel: one by the [`GridSearchCV`](../../modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV") estimator to set `C`, and the other by `cross_val_score` to measure the prediction performance of the estimator. The resulting scores are unbiased estimates of the prediction score on new data.

Warning: you cannot nest objects with parallel computing (`n_jobs` different than 1).

### Cross-validated estimators

Cross-validation to set a parameter can be done more efficiently on an algorithm-by-algorithm basis. This is why, for certain estimators, scikit-learn exposes cross-validation estimators that set their parameter automatically:

```
>>> from sklearn import linear_model, datasets
>>> lasso = linear_model.LassoCV()
>>> diabetes = datasets.load_diabetes()
>>> X_diabetes = diabetes.data
>>> y_diabetes = diabetes.target
>>> lasso.fit(X_diabetes, y_diabetes)
LassoCV(alphas=None, copy_X=True, cv=None, eps=0.001, fit_intercept=True,
    max_iter=1000, n_alphas=100, n_jobs=1, normalize=False, positive=False,
    precompute='auto', random_state=None, selection='cyclic', tol=0.0001,
    verbose=False)
>>> # The estimator chose its lambda automatically:
>>> lasso.alpha_
0.01229...
```

These estimators are called similarly to their counterparts, with 'CV' appended to their name.

**Exercise**

On the diabetes dataset, find the optimal regularization parameter alpha.

**Bonus:** How much can you trust the selection of alpha?

```
from sklearn import datasets
from sklearn.linear_model import LassoCV
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV

diabetes = datasets.load_diabetes()
```

**Solution:** [Cross-validation on diabetes Dataset Exercise](../../auto_examples/exercises/plot_cv_diabetes.html#sphx-glr-auto-examples-exercises-plot-cv-diabetes-py)
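One hedged way to probe the bonus question, without giving away the full exercise solution: refit `LassoCV` on different training subsets and compare the alphas it selects. This sketch (not part of the tutorial) uses a 3-fold `KFold` purely as a convenient way to produce subsets; if the selected alpha varies noticeably across them, the selection should be trusted less.

```python
from sklearn import datasets
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

diabetes = datasets.load_diabetes()
X, y = diabetes.data, diabetes.target

# Fit LassoCV on each of three training subsets and record the chosen alpha.
k_fold = KFold(n_splits=3)
alphas = [LassoCV().fit(X[train], y[train]).alpha_
          for train, _ in k_fold.split(X)]
print(alphas)  # the selected alpha typically differs from subset to subset
```

The spread of these values gives a rough, informal sense of the stability of the automatic alpha selection.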