# Lasso regularization

We define the lasso parameter with a value of 0.8:

```py
lasso_param = tf.Variable(0.8, dtype=tf.float32)
lasso_loss = tf.reduce_mean(tf.abs(w)) * lasso_param
```

Setting the lasso parameter to zero means no regularization, because the term becomes zero. The higher the value of this parameter, the higher the penalty. The following is the complete code for lasso-regularized regression, used to train a model that predicts Boston house prices.

The code below assumes that the training and test datasets have already been split as in the preceding example (see the preparation sketch at the end of this section).

```py
# Assumes tensorflow as tf and numpy as np are imported, and that
# X_train, X_test, y_train, y_test come from the earlier split.
num_outputs = y_train.shape[1]
num_inputs = X_train.shape[1]

x_tensor = tf.placeholder(dtype=tf.float32,
                          shape=[None, num_inputs], name='x')
y_tensor = tf.placeholder(dtype=tf.float32,
                          shape=[None, num_outputs], name='y')

w = tf.Variable(tf.zeros([num_inputs, num_outputs]),
                dtype=tf.float32, name='w')
b = tf.Variable(tf.zeros([num_outputs]),
                dtype=tf.float32, name='b')

model = tf.matmul(x_tensor, w) + b

# L1 (lasso) penalty added to the mean squared error loss
lasso_param = tf.Variable(0.8, dtype=tf.float32)
lasso_loss = tf.reduce_mean(tf.abs(w)) * lasso_param

loss = tf.reduce_mean(tf.square(model - y_tensor)) + lasso_loss

learning_rate = 0.001
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# Metrics: mean squared error and R-squared
mse = tf.reduce_mean(tf.square(model - y_tensor))
y_mean = tf.reduce_mean(y_tensor)
total_error = tf.reduce_sum(tf.square(y_tensor - y_mean))
unexplained_error = tf.reduce_sum(tf.square(y_tensor - model))
rs = 1 - tf.div(unexplained_error, total_error)

num_epochs = 1500
loss_epochs = np.empty(shape=[num_epochs], dtype=np.float32)
mse_epochs = np.empty(shape=[num_epochs], dtype=np.float32)
rs_epochs = np.empty(shape=[num_epochs], dtype=np.float32)

mse_score = 0.0
rs_score = 0.0

with tf.Session() as tfs:
    tfs.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        # One training step on the training set
        feed_dict = {x_tensor: X_train, y_tensor: y_train}
        loss_val, _ = tfs.run([loss, optimizer], feed_dict)
        loss_epochs[epoch] = loss_val

        # Evaluate MSE and R-squared on the test set
        feed_dict = {x_tensor: X_test, y_tensor: y_test}
        mse_score, rs_score = tfs.run([mse, rs], feed_dict)
        mse_epochs[epoch] = mse_score
        rs_epochs[epoch] = rs_score

    print('For test data : MSE = {0:.8f}, R2 = {1:.8f} '.format(
        mse_score, rs_score))
```

We get the following output:

```py
For test data : MSE = 30.48978233, R2 = 0.64166653 
```

Let's plot the MSE and R-squared values with the following code:

```py
plt.figure(figsize=(14, 8))
plt.axis([0, num_epochs, 0, np.max([loss_epochs, mse_epochs])])
plt.plot(loss_epochs, label='Loss on X_train')
plt.plot(mse_epochs, label='MSE on X_test')
plt.title('Loss in Iterations')
plt.xlabel('# Epoch')
plt.ylabel('Loss or MSE')
plt.legend()
plt.show()

plt.figure(figsize=(14, 8))
plt.axis([0, num_epochs, np.min(rs_epochs), np.max(rs_epochs)])
plt.title('R-squared in Iterations')
plt.plot(rs_epochs, label='R2 on X_test')
plt.xlabel('# Epoch')
plt.ylabel('R2')
plt.legend()
plt.show()
```

We get the following loss plot:

![](https://img.kancloud.cn/70/cd/70cd412873d284dc940ca6129445a616_838x496.png)

The plot of R-squared over the iterations is as follows:

![](https://img.kancloud.cn/0c/86/0c8639bcec88c620470fdfe46c0d25e0_834x496.png)

Let's repeat the same example with ridge regression.
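Only the regularization term needs to change for the ridge version: ridge penalizes the squared weights instead of their absolute values. Below is a minimal sketch of the lines that would differ, reusing the `w`, `model`, and `y_tensor` defined above; the name `ridge_param` and its value mirror the lasso example and are assumptions here, not the full ridge listing:

```py
# Sketch of a ridge (L2) penalty; ridge_param and the value 0.8 are assumed.
ridge_param = tf.Variable(0.8, dtype=tf.float32)
ridge_loss = tf.reduce_mean(tf.square(w)) * ridge_param   # squared weights instead of tf.abs(w)
loss = tf.reduce_mean(tf.square(model - y_tensor)) + ridge_loss
```

For reference, the listings above assume that `X_train`, `X_test`, `y_train`, and `y_test` are already available from the earlier split. A minimal preparation sketch, assuming scikit-learn's `load_boston` loader (available in the scikit-learn releases contemporary with TF 1.x) and standard scaling; the earlier example may prepare the data differently:

```py
# Hypothetical data preparation; the imports also cover the listings above.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn import datasets, model_selection, preprocessing

boston = datasets.load_boston()                       # Boston housing data, 13 features
X = preprocessing.StandardScaler().fit_transform(boston.data).astype(np.float32)
y = boston.target.reshape(-1, 1).astype(np.float32)   # 2-D target so y_train.shape[1] == 1

X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X, y, test_size=0.3, random_state=0)
```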