# Multi-Regression

Now that you have learned how to create a basic regression model with TensorFlow, let's try running it on example datasets from different domains. The dataset we generated as an example was univariate, i.e. the target depended on only one feature. In practice, however, most datasets are multivariate. To emphasize the point: the target depends on multiple variables or features, so the regression model is called **multi-regression** or **multidimensional regression**.

We start with the most popular Boston dataset. This dataset contains 13 attributes of 506 houses in Boston, such as the average number of rooms per dwelling, nitric oxide concentration, weighted distances to five Boston employment centers, and so on. The target is the median value of owner-occupied homes. Let's dive into a regression model for this dataset.

Load the dataset from the `sklearn` library and view its description:

```py
boston = skds.load_boston()
print(boston.DESCR)
X = boston.data.astype(np.float32)
y = boston.target.astype(np.float32)
if (y.ndim == 1):
    y = y.reshape(len(y), 1)
X = skpp.StandardScaler().fit_transform(X)
```

In the preceding code we also extract `X`, the feature matrix, and `y`, the target vector. We reshape `y` to make it two-dimensional, and scale the features in `X` to have zero mean and unit standard deviation. Now let's use this `X` and `y` to train a regression model, just as we did in the previous example.

You may observe that the code for this example is similar to the code in the previous section on simple regression; however, we are using multiple features to train the model, so it is called multi-regression.

```py
X_train, X_test, y_train, y_test = skms.train_test_split(
    X, y, test_size=.4, random_state=123)

num_outputs = y_train.shape[1]
num_inputs = X_train.shape[1]

x_tensor = tf.placeholder(dtype=tf.float32,
                          shape=[None, num_inputs], name="x")
y_tensor = tf.placeholder(dtype=tf.float32,
                          shape=[None, num_outputs], name="y")

w = tf.Variable(tf.zeros([num_inputs, num_outputs]),
                dtype=tf.float32, name="w")
b = tf.Variable(tf.zeros([num_outputs]),
                dtype=tf.float32, name="b")

model = tf.matmul(x_tensor, w) + b
loss = tf.reduce_mean(tf.square(model - y_tensor))

# mse and R2 functions
mse = tf.reduce_mean(tf.square(model - y_tensor))
y_mean = tf.reduce_mean(y_tensor)
total_error = tf.reduce_sum(tf.square(y_tensor - y_mean))
unexplained_error = tf.reduce_sum(tf.square(y_tensor - model))
rs = 1 - tf.div(unexplained_error, total_error)

learning_rate = 0.001
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

num_epochs = 1500
loss_epochs = np.empty(shape=[num_epochs], dtype=np.float32)
mse_epochs = np.empty(shape=[num_epochs], dtype=np.float32)
rs_epochs = np.empty(shape=[num_epochs], dtype=np.float32)

mse_score = 0
rs_score = 0

with tf.Session() as tfs:
    tfs.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        feed_dict = {x_tensor: X_train, y_tensor: y_train}
        loss_val, _ = tfs.run([loss, optimizer], feed_dict)
        loss_epochs[epoch] = loss_val

        feed_dict = {x_tensor: X_test, y_tensor: y_test}
        mse_score, rs_score = tfs.run([mse, rs], feed_dict)
        mse_epochs[epoch] = mse_score
        rs_epochs[epoch] = rs_score

print('For test data : MSE = {0:.8f}, R2 = {1:.8f} '.format(
    mse_score, rs_score))
```

We get the following output from the model:

```py
For test data : MSE = 30.48501778, R2 = 0.64172244
```

Let's plot the MSE and R-squared values.

The following image shows the plot of MSE:

![](https://img.kancloud.cn/40/a0/40a098bb17fe2c795037394307a9957e_838x496.png)

The following image shows the plot of R-squared values:

![](https://img.kancloud.cn/9d/e3/9de35ddbe3d24df2c5787ed0e6dad5fa_834x496.png)

We see a similar pattern for MSE and R-squared, just as we did with the univariate dataset.
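As a cross-check on the gradient-descent fit, linear regression also has a closed-form ordinary-least-squares solution, w = (XᵀX)⁻¹Xᵀy, and the R² formula used in the TensorFlow graph above (1 minus unexplained error over total error) can be computed directly in NumPy. The sketch below uses synthetic multivariate data rather than the Boston dataset (an assumption made here only so the example is self-contained; `load_boston` availability varies across scikit-learn versions):

```python
import numpy as np

# Synthetic multivariate regression data: 506 samples, 13 features,
# mirroring the shape of the Boston dataset (illustrative assumption)
rng = np.random.default_rng(123)
n_samples, n_features = 506, 13
X = rng.normal(size=(n_samples, n_features)).astype(np.float32)
true_w = rng.normal(size=(n_features, 1)).astype(np.float32)
y = X @ true_w + 0.1 * rng.normal(size=(n_samples, 1)).astype(np.float32)

# Standardize features to zero mean and unit standard deviation,
# as done with StandardScaler in the chapter
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Append a bias column so the intercept b becomes part of w
Xb = np.hstack([X, np.ones((n_samples, 1), dtype=np.float32)])

# Closed-form OLS; lstsq solves min ||Xb @ w - y||^2 without
# explicitly inverting X^T X, which is numerically safer
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# R^2 = 1 - unexplained_error / total_error, the same formula
# the TensorFlow graph computes with reduce_sum
pred = Xb @ w
total_error = np.sum((y - y.mean()) ** 2)
unexplained_error = np.sum((y - pred) ** 2)
r2 = 1.0 - unexplained_error / total_error
print('R2 = {0:.4f}'.format(r2))
```

For a small, well-conditioned problem like this, the closed-form weights are what gradient descent converges toward as the number of epochs grows; the iterative approach in the chapter matters once datasets no longer fit in memory or the model is no longer linear.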