# How to Develop Autoregressive Forecasting Models for Multi-Step Air Pollution Time Series Forecasting

> Original: [https://machinelearningmastery.com/how-to-develop-autoregressive-forecasting-models-for-multi-step-air-pollution-time-series-forecasting/](https://machinelearningmastery.com/how-to-develop-autoregressive-forecasting-models-for-multi-step-air-pollution-time-series-forecasting/)

Real-world time series forecasting is challenging for reasons not limited to problem features such as having multiple input variables, being required to predict multiple time steps, and needing to perform the same type of prediction for multiple physical sites.

The EMC Data Science Global Hackathon dataset, or the "Air Quality Prediction" dataset for short, describes weather conditions at multiple sites and requires a prediction of air quality measurements over the subsequent three days.

Before diving into sophisticated machine learning and deep learning methods for time series forecasting, it is important to find the limits of classical methods, such as developing autoregressive models with the AR or ARIMA methods.

In this tutorial, you will discover how to develop autoregressive models for multi-step time series forecasting with multivariate air pollution time series.

After completing this tutorial, you will know:

* How to analyze and impute missing values in time series data.
* How to develop and evaluate an autoregressive model for multi-step time series forecasting.
* How to improve an autoregressive model using alternate data imputation methods.

Let's get started.

![Impact of Dataset Size on Deep Learning Model Skill And Performance Estimates](https://img.kancloud.cn/5f/40/5f4085505d615d32103aea399e02c1ca_640x425.jpg)

Photo by [Eneas De Troya](https://www.flickr.com/photos/eneas/13632855754/), some rights reserved.

## Tutorial Overview

This tutorial is divided into five parts; they are:

1. Problem Description
2. Model Evaluation
3. Data Analysis
4. Develop an Autoregressive Model
5. Autoregressive Model With Global Impute Strategy

## Problem Description

The Air Quality Prediction dataset describes weather conditions at multiple sites and requires a prediction of air quality measurements over the subsequent three days.

Specifically, weather observations such as temperature, pressure, wind speed, and wind direction are provided hourly for eight days for multiple sites. The objective is to predict air quality measurements for the next three days at multiple sites. The forecast lead times are not contiguous; instead, specific lead times must be forecast over the 72-hour forecast period; they are:

```py
+1, +2, +3, +4, +5, +10, +17, +24, +48, +72
```

Further, the dataset is divided into disjoint but contiguous chunks of data, with eight days of data followed by three days that require a forecast.

Not all observations are available at all sites or chunks, and not all output variables are available at all sites and chunks. There are large portions of missing data that must be addressed.

The dataset was used as the basis for a [short-duration machine learning competition](https://www.kaggle.com/c/dsg-hackathon) (or hackathon) on the Kaggle website in 2012.

Submissions for the competition were evaluated against the true observations withheld from participants and scored using mean absolute error (MAE). Submissions required a value of -1,000,000 to be specified in cases where a forecast was not possible due to missing data. In fact, a template of where to insert missing values was provided and required to be adopted for all submissions (what exactly this involved remains a little murky).

The winner achieved a MAE of 0.21058 on the withheld test set ([private leaderboard](https://www.kaggle.com/c/dsg-hackathon/leaderboard)) using random forests on lagged observations. A write-up of this solution is available in the post:

* [Chucking everything into a Random Forest: Ben Hamner on Winning the Air Quality Prediction Hackathon](http://blog.kaggle.com/2012/05/01/chucking-everything-into-a-random-forest-ben-hamner-on-winning-the-air-quality-prediction-hackathon/), 2012.

In this tutorial, we will explore how to develop autoregressive models for the problem, using a naive persistence forecast as a baseline to determine whether the models have skill on the problem.

## Model Evaluation

Before we can evaluate any forecast methods, we must develop a test harness.

This includes, at a minimum, how the data will be prepared and how forecasts will be evaluated.

### Load Dataset

The first step is to download the dataset and load it into memory.

The dataset can be downloaded for free from the Kaggle website. You may have to create an account and log in in order to download the dataset.

Download the entire dataset, e.g. "Download All", to your workstation and unzip the archive in your current working directory with the folder named 'AirQualityPrediction'.

* [EMC Data Science Global Hackathon (Air Quality Prediction) Data](https://www.kaggle.com/c/dsg-hackathon/data)

Our focus will be the 'TrainingData.csv' file that contains the training dataset, specifically the data in chunks, where each chunk is eight contiguous days of observations and target variables.

We can load the data file into memory using the Pandas [read_csv() function](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) and specify the header row at row 0.

```py
# load dataset
dataset = read_csv('AirQualityPrediction/TrainingData.csv', header=0)
```

We can group the data by the 'chunkID' variable (column index 1).

First, let's get a list of the unique chunk identifiers.

```py
chunk_ids = unique(values[:, 1])
```

We can then collect all rows for each chunk identifier and store them in a dictionary for easy access.

```py
chunks = dict()
# sort rows by chunk id
for chunk_id in chunk_ids:
    selection = values[:, chunk_ix] == chunk_id
    chunks[chunk_id] = values[selection, :]
```

Below defines a function named to_chunks() that takes a NumPy array of the loaded data and returns a dictionary of chunk_id to the rows for the chunk.

```py
# split the dataset by 'chunkID', return a dict of id to rows
def to_chunks(values, chunk_ix=1):
    chunks = dict()
    # get the unique chunk ids
    chunk_ids = unique(values[:, chunk_ix])
    # group rows by chunk id
    for chunk_id in chunk_ids:
        selection = values[:, chunk_ix] == chunk_id
        chunks[chunk_id] = values[selection, :]
    return chunks
```

The complete example of loading the dataset and splitting it into chunks is listed below.

```py
# load data and split into chunks
from numpy import unique
from pandas import read_csv

# split the dataset by 'chunkID', return a dict of id to rows
def to_chunks(values, chunk_ix=1):
    chunks = dict()
    # get the unique chunk ids
    chunk_ids = unique(values[:, chunk_ix])
    # group rows by chunk id
    for chunk_id in chunk_ids:
        selection = values[:, chunk_ix] == chunk_id
        chunks[chunk_id] = values[selection, :]
    return chunks

# load dataset
dataset = read_csv('AirQualityPrediction/TrainingData.csv', header=0)
# group data by chunks
values = dataset.values
chunks = to_chunks(values)
print('Total Chunks: %d' % len(chunks))
```

Running the example prints the number of chunks in the dataset.

```py
Total Chunks: 208
```
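Not every chunk contains a full eight days of rows, which matters later when we deal with missing data. As a quick sanity check, the small sketch below (an addition, not part of the original tutorial) counts rows per chunk directly from the raw file; with hourly data over eight days, we expect at most 192 rows per chunk.

```py
# quick completeness check per chunk (an added sketch, not from the original tutorial)
from numpy import unique
from pandas import read_csv

# load and group rows by the 'chunkID' column (index 1)
dataset = read_csv('AirQualityPrediction/TrainingData.csv', header=0)
values = dataset.values
sizes = [sum(values[:, 1] == c) for c in unique(values[:, 1])]
# each chunk can contain at most 192 hourly rows (8 days * 24 hours)
print('Rows per chunk: min=%d, max=%d' % (min(sizes), max(sizes)))
print('Chunks with missing rows: %d of %d' % (sum(1 for s in sizes if s < 192), len(sizes)))
```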
### Data Preparation

Now that we know how to load the data and split it into chunks, we can separate the chunks into train and test datasets.

Although the actual number of observations within each chunk may vary significantly, each chunk spans eight days of hourly observations.

We can split each chunk into the first five days of observations for training and the last three days for test.

Each observation has a column called 'position_within_chunk' that varies from 1 to 192 (8 days * 24 hours). We can therefore take all rows with a value in this column that is less than or equal to 120 (5 * 24) as training data, and any values greater than 120 as test data.

Further, any chunks that do not have any observations in the train or test split can be dropped as not viable.

In modeling the problem, we are only interested in the target variables and none of the meteorological input variables. Therefore, we can remove the input data and have the train and test data comprise only the 39 target variables for each chunk, along with the position within the chunk and the hour of observation.

The split_train_test() function below implements this behavior; given a dictionary of chunks, it will split each into train and test chunk data.

```py
# split each chunk into train/test sets
def split_train_test(chunks, row_in_chunk_ix=2):
    train, test = list(), list()
    # first 5 days of hourly observations for train
    cut_point = 5 * 24
    # enumerate chunks
    for k,rows in chunks.items():
        # split chunk rows by 'position_within_chunk'
        train_rows = rows[rows[:,row_in_chunk_ix] <= cut_point, :]
        test_rows = rows[rows[:,row_in_chunk_ix] > cut_point, :]
        if len(train_rows) == 0 or len(test_rows) == 0:
            print('>dropping chunk=%d: train=%s, test=%s' % (k, train_rows.shape, test_rows.shape))
            continue
        # store with chunk id, position in chunk, hour and all targets
        indices = [1,2,5] + [x for x in range(56,train_rows.shape[1])]
        train.append(train_rows[:, indices])
        test.append(test_rows[:, indices])
    return train, test
```

We do not require the entire test dataset; instead, we only require the observations at specific lead times over the three-day period, specifically the lead times:

```py
+1, +2, +3, +4, +5, +10, +17, +24, +48, +72
```

Where each lead time is relative to the end of the training period.

First, we can put these lead times into a function for easy reference:

```py
# return a list of relative forecast lead times
def get_lead_times():
    return [1, 2 ,3, 4, 5, 10, 17, 24, 48, 72]
```

Next, we can reduce the test dataset down to just the data at the preferred lead times.

We can do that by looking at the 'position_within_chunk' column and using the lead time as an offset from the end of the training dataset, e.g. 120 + 1, 120 + 2, etc.

If we find a matching row in the test set, it is saved; otherwise, a row of NaN observations is generated.

The function to_forecasts() below implements this and returns a NumPy array with one row for each forecast lead time for each chunk.

```py
# convert the rows in a test chunk to forecasts
def to_forecasts(test_chunks, row_in_chunk_ix=1):
    # get lead times
    lead_times = get_lead_times()
    # first 5 days of hourly observations for train
    cut_point = 5 * 24
    forecasts = list()
    # enumerate each chunk
    for rows in test_chunks:
        chunk_id = rows[0, 0]
        # enumerate each lead time
        for tau in lead_times:
            # determine the row in chunk we want for the lead time
            offset = cut_point + tau
            # retrieve data for the lead time using row number in chunk
            row_for_tau = rows[rows[:,row_in_chunk_ix]==offset, :]
            # check if we have data
            if len(row_for_tau) == 0:
                # create a mock row [chunk, position, hour] + [nan...]
                row = [chunk_id, offset, nan] + [nan for _ in range(39)]
                forecasts.append(row)
            else:
                # store the forecast row
                forecasts.append(row_for_tau[0])
    return array(forecasts)
```

We can tie all of this together, split the dataset into train and test sets, and save the results to new files.

The complete code example is listed below.

```py
# split data into train and test sets
from numpy import unique
from numpy import nan
from numpy import array
from numpy import savetxt
from pandas import read_csv

# split the dataset by 'chunkID', return a dict of id to rows
def to_chunks(values, chunk_ix=1):
    chunks = dict()
    # get the unique chunk ids
    chunk_ids = unique(values[:, chunk_ix])
    # group rows by chunk id
    for chunk_id in chunk_ids:
        selection = values[:, chunk_ix] == chunk_id
        chunks[chunk_id] = values[selection, :]
    return chunks

# split each chunk into train/test sets
def split_train_test(chunks, row_in_chunk_ix=2):
    train, test = list(), list()
    # first 5 days of hourly observations for train
    cut_point = 5 * 24
    # enumerate chunks
    for k,rows in chunks.items():
        # split chunk rows by 'position_within_chunk'
        train_rows = rows[rows[:,row_in_chunk_ix] <= cut_point, :]
        test_rows = rows[rows[:,row_in_chunk_ix] > cut_point, :]
        if len(train_rows) == 0 or len(test_rows) == 0:
            print('>dropping chunk=%d: train=%s, test=%s' % (k, train_rows.shape, test_rows.shape))
            continue
        # store with chunk id, position in chunk, hour and all targets
        indices = [1,2,5] + [x for x in range(56,train_rows.shape[1])]
        train.append(train_rows[:, indices])
        test.append(test_rows[:, indices])
    return train, test

# return a list of relative forecast lead times
def get_lead_times():
    return [1, 2 ,3, 4, 5, 10, 17, 24, 48, 72]

# convert the rows in a test chunk to forecasts
def to_forecasts(test_chunks, row_in_chunk_ix=1):
    # get lead times
    lead_times = get_lead_times()
    # first 5 days of hourly observations for train
    cut_point = 5 * 24
    forecasts = list()
    # enumerate each chunk
    for rows in test_chunks:
        chunk_id = rows[0, 0]
        # enumerate each lead time
        for tau in lead_times:
            # determine the row in chunk we want for the lead time
            offset = cut_point + tau
            # retrieve data for the lead time using row number in chunk
            row_for_tau = rows[rows[:,row_in_chunk_ix]==offset, :]
            # check if we have data
            if len(row_for_tau) == 0:
                # create a mock row [chunk, position, hour] + [nan...]
                row = [chunk_id, offset, nan] + [nan for _ in range(39)]
                forecasts.append(row)
            else:
                # store the forecast row
                forecasts.append(row_for_tau[0])
    return array(forecasts)

# load dataset
dataset = read_csv('AirQualityPrediction/TrainingData.csv', header=0)
# group data by chunks
values = dataset.values
chunks = to_chunks(values)
# split into train/test
train, test = split_train_test(chunks)
# flatten training chunks to rows
train_rows = array([row for rows in train for row in rows])
# print(train_rows.shape)
print('Train Rows: %s' % str(train_rows.shape))
# reduce train to forecast lead times only
test_rows = to_forecasts(test)
print('Test Rows: %s' % str(test_rows.shape))
# save datasets
savetxt('AirQualityPrediction/naive_train.csv', train_rows, delimiter=',')
savetxt('AirQualityPrediction/naive_test.csv', test_rows, delimiter=',')
```

Running the example first comments that chunk 69 was removed from the dataset for having insufficient data.

We can then see that there are 42 columns in each of the train and test sets: one each for the chunk ID, position within chunk, and hour of day, plus the 39 target variables.

We can also see the dramatically smaller version of the test dataset, with rows only at the forecast lead times.

The new train and test datasets are saved in the 'naive_train.csv' and 'naive_test.csv' files respectively.

```py
>dropping chunk=69: train=(0, 95), test=(28, 95)
Train Rows: (23514, 42)
Test Rows: (2070, 42)
```

### Forecast Evaluation

Once forecasts have been made, they need to be evaluated.

It is helpful to use a simpler format when evaluating forecasts. For example, we will use the three-dimensional structure of [chunk][variable][time], where variable is the target variable number from 0 to 38 and time is the lead time index from 0 to 9.

Models are expected to make predictions in this format.

We can also restructure the test dataset to have this format for comparison. The prepare_test_forecasts() function below implements this.

```py
# convert the test dataset in chunks to [chunk][variable][time] format
def prepare_test_forecasts(test_chunks):
    predictions = list()
    # enumerate chunks to forecast
    for rows in test_chunks:
        # enumerate targets for chunk
        chunk_predictions = list()
        for j in range(3, rows.shape[1]):
            yhat = rows[:, j]
            chunk_predictions.append(yhat)
        chunk_predictions = array(chunk_predictions)
        predictions.append(chunk_predictions)
    return array(predictions)
```

We will evaluate a model using the mean absolute error, or MAE. This is the metric that was used in the competition and is a sensible choice given the non-Gaussian distribution of the target variables.

If a lead time contains no data in the test set (e.g. NaN), then no error will be calculated for that forecast. If the lead time does have data in the test set but no data in the forecast, then the full magnitude of the observation will be taken as the error. Finally, if the test set has an observation and a forecast was made, then the absolute difference will be recorded as the error.

The calculate_error() function implements these rules and returns the error for a given forecast.

```py
# calculate the error between an actual and predicted value
def calculate_error(actual, predicted):
    # give the full actual value if predicted is nan
    if isnan(predicted):
        return abs(actual)
    # calculate abs difference
    return abs(actual - predicted)
```

Errors are summed across all chunks and all lead times, then averaged.

The overall MAE will be calculated, but we will also calculate the MAE for each forecast lead time. This can help with model selection generally, as some models may perform differently at different lead times.

The evaluate_forecasts() function below implements this, calculating the MAE and the per-lead-time MAEs for predictions and expected values provided in the [chunk][variable][time] format. (The original listing iterated over a global test_chunks variable; iterating over the predictions themselves is equivalent and self-contained.)

```py
# evaluate a forecast in the format [chunk][variable][time]
def evaluate_forecasts(predictions, testset):
    lead_times = get_lead_times()
    total_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]
    total_c, times_c = 0, [0 for _ in range(len(lead_times))]
    # enumerate test chunks
    for i in range(len(predictions)):
        # convert to forecasts
        actual = testset[i]
        predicted = predictions[i]
        # enumerate target variables
        for j in range(predicted.shape[0]):
            # enumerate lead times
            for k in range(len(lead_times)):
                # skip if actual is nan
                if isnan(actual[j, k]):
                    continue
                # calculate error
                error = calculate_error(actual[j, k], predicted[j, k])
                # update statistics
                total_mae += error
                times_mae[k] += error
                total_c += 1
                times_c[k] += 1
    # normalize summed absolute errors
    total_mae /= total_c
    times_mae = [times_mae[i]/times_c[i] for i in range(len(times_mae))]
    return total_mae, times_mae
```

Once we have the evaluation of a model, we can present it.

The summarize_error() function below first prints a one-line summary of a model's performance, then creates a plot of MAE per forecast lead time.

```py
# summarize scores
def summarize_error(name, total_mae, times_mae):
    # print summary
    lead_times = get_lead_times()
    formatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]
    s_scores = ', '.join(formatted)
    print('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))
    # plot summary
    pyplot.plot([str(x) for x in lead_times], times_mae, marker='.')
    pyplot.show()
```

We are now ready to start exploring the performance of forecast methods on the problem.
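Later in the tutorial, the AR models are compared against a naive persistence baseline with a MAE of about 0.520. That baseline is not implemented in this post (it is described in a companion post in the series), but a plausible persistence implementation using the harness above is sketched below; it assumes the naive_train.csv/naive_test.csv files and the prepare_test_forecasts(), evaluate_forecasts(), and summarize_error() functions defined above, and may not reproduce the 0.520 figure exactly.

```py
# naive persistence baseline (a sketch; assumes prepare_test_forecasts(),
# evaluate_forecasts() and summarize_error() as defined above)
from numpy import loadtxt
from numpy import unique
from numpy import isnan
from numpy import nan
from numpy import array

# split the dataset by 'chunkID' in column 0, return a list of chunks
def to_chunks(values, chunk_ix=0):
    chunks = list()
    for chunk_id in unique(values[:, chunk_ix]):
        chunks.append(values[values[:, chunk_ix] == chunk_id, :])
    return chunks

# repeat the last observed value of each series for all ten lead times
def persistence_forecast(chunk_train, n_lead_times=10):
    chunk_predictions = list()
    # enumerate the 39 target columns (after chunk id, position, hour)
    for col_ix in range(3, chunk_train.shape[1]):
        # find the last non-NaN observation in the training data
        observed = [v for v in chunk_train[:, col_ix] if not isnan(v)]
        last = observed[-1] if len(observed) > 0 else nan
        chunk_predictions.append([last for _ in range(n_lead_times)])
    return array(chunk_predictions)

# load dataset
train = loadtxt('AirQualityPrediction/naive_train.csv', delimiter=',')
test = loadtxt('AirQualityPrediction/naive_test.csv', delimiter=',')
# group data by chunks
train_chunks = to_chunks(train)
test_chunks = to_chunks(test)
# forecast each chunk, evaluate and summarize
forecast = array([persistence_forecast(rows) for rows in train_chunks])
actual = prepare_test_forecasts(test_chunks)
total_mae, times_mae = evaluate_forecasts(forecast, actual)
summarize_error('Persistence', total_mae, times_mae)
```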
## Data Analysis

The first step in fitting classical time series models to this data is to take a careful look at the data.

There are 208 (well, really 207) usable chunks of data, and each chunk has 39 time series to fit; that is a total of 8,073 separate models that need to be fit on the data. That is a lot of models, but each model is trained on a relatively small amount of data, at most (5 * 24) or 120 observations, and the models are linear, so a fit can be found quickly.

We have options as to how to configure models for the data; for example:

* One model configuration for all time series (simplest).
* One model configuration per variable across all chunks (reasonable).
* One model configuration per variable per chunk (most complex).

We will investigate the simplest approach of one model configuration for all series, but you may want to explore one or more of the other approaches.

This section is divided into three parts; they are:

1. Missing Data
2. Impute Missing Data
3. Autocorrelation Plots

### Missing Data

Classical time series methods require the time series to be complete, i.e. to have no missing values.

Therefore, the first step is to investigate how complete or incomplete the target variables are.

For a given variable, observations may be missing as defined by missing rows. Specifically, each observation has a 'position_within_chunk'. We expect each chunk in the training dataset to have 120 observations, with 'position_within_chunk' values from 1 to 120 inclusive.

Therefore, we can create an array of 120 NaN values for each variable, mark all observations in the chunk using the 'position_within_chunk' values, and anything left will remain marked NaN. We can then plot each variable and look for gaps.

The variable_to_series() function below takes the rows of a chunk and a given column index for a target variable, and returns a series of 120 time steps for the variable with all available data from the chunk marked in.

```py
# layout a variable with breaks in the data for missing positions
def variable_to_series(chunk_train, col_ix, n_steps=5*24):
    # lay out whole series
    data = [nan for _ in range(n_steps)]
    # mark all available data
    for i in range(len(chunk_train)):
        # get position in chunk
        position = int(chunk_train[i, 1] - 1)
        # store data
        data[position] = chunk_train[i, col_ix]
    return data
```

We can then call this function for each target variable in one chunk and create a line plot.

The function below named plot_variables() implements this and creates a figure with 39 line plots stacked horizontally.

```py
# plot variables horizontally with gaps for missing data
def plot_variables(chunk_train, n_vars=39):
    pyplot.figure()
    for i in range(n_vars):
        # convert target number into column number
        col_ix = 3 + i
        # mark missing obs for variable
        series = variable_to_series(chunk_train, col_ix)
        # plot
        ax = pyplot.subplot(n_vars, 1, i+1)
        ax.set_xticklabels([])
        ax.set_yticklabels([])
        pyplot.plot(series)
    # show plot
    pyplot.show()
```

Tying this together, the complete example is listed below. A plot of all variables in the first chunk is created.

```py
# plot missing
from numpy import loadtxt
from numpy import nan
from numpy import unique
from matplotlib import pyplot

# split the dataset by 'chunkID', return a list of chunks
def to_chunks(values, chunk_ix=0):
    chunks = list()
    # get the unique chunk ids
    chunk_ids = unique(values[:, chunk_ix])
    # group rows by chunk id
    for chunk_id in chunk_ids:
        selection = values[:, chunk_ix] == chunk_id
        chunks.append(values[selection, :])
    return chunks

# layout a variable with breaks in the data for missing positions
def variable_to_series(chunk_train, col_ix, n_steps=5*24):
    # lay out whole series
    data = [nan for _ in range(n_steps)]
    # mark all available data
    for i in range(len(chunk_train)):
        # get position in chunk
        position = int(chunk_train[i, 1] - 1)
        # store data
        data[position] = chunk_train[i, col_ix]
    return data

# plot variables horizontally with gaps for missing data
def plot_variables(chunk_train, n_vars=39):
    pyplot.figure()
    for i in range(n_vars):
        # convert target number into column number
        col_ix = 3 + i
        # mark missing obs for variable
        series = variable_to_series(chunk_train, col_ix)
        # plot
        ax = pyplot.subplot(n_vars, 1, i+1)
        ax.set_xticklabels([])
        ax.set_yticklabels([])
        pyplot.plot(series)
    # show plot
    pyplot.show()

# load dataset
train = loadtxt('AirQualityPrediction/naive_train.csv', delimiter=',')
# group data by chunks
train_chunks = to_chunks(train)
# pick one chunk
rows = train_chunks[0]
# plot variables
plot_variables(rows)
```

Running the example creates a figure with 39 line plots, one for each target variable in the first chunk.

We can see seasonal structure in many of the variables. This suggests it may be beneficial to perform a 24-hour seasonal differencing of each series prior to modeling.

The plots are small, and you may need to increase the size of the figure to clearly see the data.

We can see that there are some variables for which we have no data at all. These can be detected and ignored, as we cannot model or forecast them.

We can see gaps in many of the series, but the gaps are short, lasting at most a few hours. These could be imputed by persisting the previous value or by using values at the same hour of day within the same series.

![Line Plots for All Targets in Chunk 1 With Missing Values Marked](https://img.kancloud.cn/05/40/05407887de2425974b919e3c8f8c2dd6_1280x960.jpg)

Line Plots for All Targets in Chunk 1 With Missing Values Marked

Spot-checking a few other chunks, many produce plots with much the same observations.

That is not always the case though.

Update the example to plot the 4th chunk in the dataset (index 3).

```py
# pick one chunk
rows = train_chunks[3]
```

The result is a figure that tells a very different story.

We see gaps in the data that last many hours, possibly up to a day or more.

These series would need significant repair before they could be used to fit classical models.

Imputing the missing data using persistence or observations at the same hour within the series may not be sufficient. They may have to be filled with average values drawn from across the whole training dataset.

![Line Plots for All Targets in Chunk 4 With Missing Values Marked](https://img.kancloud.cn/c5/c7/c5c721d555cf16d3aa67793007285d18_1280x960.jpg)

Line Plots for All Targets in Chunk 4 With Missing Values Marked

### Impute Missing Data

There are many ways to impute missing data, and we cannot know which is best a priori.

One approach is to prepare the data with multiple different imputation methods and use the skill of a model fit on the data to help guide the best approach.

Some imputation methods already suggested include:

* Persist the last observed value in the series, also referred to as a linear interpolation (a minimal sketch of this option follows below).
* Fill with values or the average value at the same hour of day within the series.
* Fill with values or the average value at the same hour of day across the training dataset.

Combinations may also be useful, e.g. persisting or filling small gaps from within the series and drawing on the whole dataset for large gaps.

We can also investigate the effect of an imputation method by filling in the missing data and reviewing line plots to see whether the series look sensible. It is crude, effective, and fast.
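The first option in the list above, persisting the last observed value, is mentioned but never implemented in this tutorial. A minimal sketch is given below as an added illustration; it operates on the NaN-marked series produced by variable_to_series() and is not the method used in the rest of the post.

```py
# persistence (forward-fill) imputation: a sketch of the first strategy listed
# above, not the hour-based median imputation used in this tutorial
from numpy import isnan
from numpy import nan

# fill each NaN with the most recent non-NaN value in the series
def impute_persist(series):
    imputed = list(series)
    last = nan
    for i in range(len(imputed)):
        if isnan(imputed[i]):
            imputed[i] = last
        else:
            last = imputed[i]
    return imputed

# small demonstration on a made-up series with gaps
data = [nan, 3.0, nan, nan, 4.0, nan, 5.0]
print(impute_persist(data))
# [nan, 3.0, 3.0, 3.0, 4.0, 4.0, 5.0], leading NaNs remain unfilled
```

The hour-based median approach used in the remainder of this tutorial is developed next.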
First, we need to calculate a parallel series of the hour of day for each chunk that we can use to impute hour-specific data for each variable in the chunk.

Given a series of partially filled hours of day, the interpolate_hours() function below will fill in the missing hours of day. It does this by finding the first marked hour, then counting forward and filling in the hour of day, then performing the same operation backward.

```py
# interpolate series of hours (in place) in 24 hour time
def interpolate_hours(hours):
    # find the first hour
    ix = -1
    for i in range(len(hours)):
        if not isnan(hours[i]):
            ix = i
            break
    # fill-forward
    hour = hours[ix]
    for i in range(ix+1, len(hours)):
        # increment hour
        hour += 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24
    # fill-backward
    hour = hours[ix]
    for i in range(ix-1, -1, -1):
        # decrement hour
        hour -= 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24
```

I'm sure there is a more Pythonic way to write this function, but I wanted to lay it all out to make it obvious.

We can test it on a mock list of hours with missing data. The complete example is listed below.

```py
# interpolate hours
from numpy import nan
from numpy import isnan

# interpolate series of hours (in place) in 24 hour time
def interpolate_hours(hours):
    # find the first hour
    ix = -1
    for i in range(len(hours)):
        if not isnan(hours[i]):
            ix = i
            break
    # fill-forward
    hour = hours[ix]
    for i in range(ix+1, len(hours)):
        # increment hour
        hour += 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24
    # fill-backward
    hour = hours[ix]
    for i in range(ix-1, -1, -1):
        # decrement hour
        hour -= 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24

# define hours with missing data
data = [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0, nan, 2, nan, nan, nan, nan, nan, nan, 9, 10, 11, 12, 13, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
print(data)
# fill in missing hours
interpolate_hours(data)
print(data)
```

Running the example first prints the hour data with missing values, then the same sequence with all of the hours filled in correctly.

```py
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0, nan, 2, nan, nan, nan, nan, nan, nan, 9, 10, 11, 12, 13, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
[11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 0, 1]
```

We can use this function to prepare a series of hours for a chunk that can be used to fill in missing values of the chunk with hour-specific information.

We can call the same variable_to_series() function from the previous section to create the series of hours with missing values (column index 2), then call interpolate_hours() to fill in the gaps.

```py
# prepare sequence of hours for the chunk
hours = variable_to_series(rows, 2)
# interpolate hours
interpolate_hours(hours)
```

We can then pass the hours to any impute function that may make use of them.

Let's try filling in missing values in a series with values at the same hour within the same series. Specifically, we will find all rows with the same hour in the series and calculate the median value.

The impute_missing() function below takes all of the rows in a chunk, the prepared sequence of hours of day for the chunk, the series with missing values for a variable, and the column index for the variable.

It first checks to see whether the series is all missing data, and if so returns immediately, as no imputation can be performed. It then enumerates the time steps of the series and, when it detects a time step with no data, collects all rows in the series with data for the same hour and calculates the median value.

```py
# impute missing data
def impute_missing(rows, hours, series, col_ix):
    # count missing observations
    n_missing = count_nonzero(isnan(series))
    # calculate ratio of missing
    ratio = n_missing / float(len(series)) * 100
    # check for no data
    if ratio == 100.0:
        return series
    # impute missing using the median value for hour in the series
    imputed = list()
    for i in range(len(series)):
        if isnan(series[i]):
            # get all rows with the same hour
            matches = rows[rows[:,2]==hours[i]]
            # fill with median value
            value = nanmedian(matches[:, col_ix])
            imputed.append(value)
        else:
            imputed.append(series[i])
    return imputed
```

To see the impact of this imputation strategy, we can update the plot_variables() function from the previous section to first plot the imputed series, then plot the original series with its missing values.

This allows the imputed values to shine through in the gaps of the original series, so we can see whether the results look reasonable.

The updated version of plot_variables() with this change is listed below, calling the impute_missing() function to create the imputed version of each series and taking the hours series as an argument.

```py
# plot variables horizontally with gaps for missing data
def plot_variables(chunk_train, hours, n_vars=39):
    pyplot.figure()
    for i in range(n_vars):
        # convert target number into column number
        col_ix = 3 + i
        # mark missing obs for variable
        series = variable_to_series(chunk_train, col_ix)
        ax = pyplot.subplot(n_vars, 1, i+1)
        ax.set_xticklabels([])
        ax.set_yticklabels([])
        # imputed
        imputed = impute_missing(chunk_train, hours, series, col_ix)
        # plot imputed
        pyplot.plot(imputed)
        # plot with missing
        pyplot.plot(series)
    # show plot
    pyplot.show()
```

Tying all of this together, the complete example is listed below.

```py
# impute missing
from numpy import loadtxt
from numpy import nan
from numpy import isnan
from numpy import count_nonzero
from numpy import unique
from numpy import nanmedian
from matplotlib import pyplot

# split the dataset by 'chunkID', return a list of chunks
def to_chunks(values, chunk_ix=0):
    chunks = list()
    # get the unique chunk ids
    chunk_ids = unique(values[:, chunk_ix])
    # group rows by chunk id
    for chunk_id in chunk_ids:
        selection = values[:, chunk_ix] == chunk_id
        chunks.append(values[selection, :])
    return chunks

# impute missing data
def impute_missing(rows, hours, series, col_ix):
    # count missing observations
    n_missing = count_nonzero(isnan(series))
    # calculate ratio of missing
    ratio = n_missing / float(len(series)) * 100
    # check for no data
    if ratio == 100.0:
        return series
    # impute missing using the median value for hour in the series
    imputed = list()
    for i in range(len(series)):
        if isnan(series[i]):
            # get all rows with the same hour
            matches = rows[rows[:,2]==hours[i]]
            # fill with median value
            value = nanmedian(matches[:, col_ix])
            imputed.append(value)
        else:
            imputed.append(series[i])
    return imputed

# interpolate series of hours (in place) in 24 hour time
def interpolate_hours(hours):
    # find the first hour
    ix = -1
    for i in range(len(hours)):
        if not isnan(hours[i]):
            ix = i
            break
    # fill-forward
    hour = hours[ix]
    for i in range(ix+1, len(hours)):
        # increment hour
        hour += 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24
    # fill-backward
    hour = hours[ix]
    for i in range(ix-1, -1, -1):
        # decrement hour
        hour -= 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24

# layout a variable with breaks in the data for missing positions
def variable_to_series(chunk_train, col_ix, n_steps=5*24):
    # lay out whole series
    data = [nan for _ in range(n_steps)]
    # mark all available data
    for i in range(len(chunk_train)):
        # get position in chunk
        position = int(chunk_train[i, 1] - 1)
        # store data
        data[position] = chunk_train[i, col_ix]
    return data

# plot variables horizontally with gaps for missing data
def plot_variables(chunk_train, hours, n_vars=39):
    pyplot.figure()
    for i in range(n_vars):
        # convert target number into column number
        col_ix = 3 + i
        # mark missing obs for variable
        series = variable_to_series(chunk_train, col_ix)
        ax = pyplot.subplot(n_vars, 1, i+1)
        ax.set_xticklabels([])
        ax.set_yticklabels([])
        # imputed
        imputed = impute_missing(chunk_train, hours, series, col_ix)
        # plot imputed
        pyplot.plot(imputed)
        # plot with missing
        pyplot.plot(series)
    # show plot
    pyplot.show()

# load dataset
train = loadtxt('AirQualityPrediction/naive_train.csv', delimiter=',')
# group data by chunks
train_chunks = to_chunks(train)
# pick one chunk
rows = train_chunks[0]
# prepare sequence of hours for the chunk
hours = variable_to_series(rows, 2)
# interpolate hours
interpolate_hours(hours)
# plot variables
plot_variables(rows, hours)
```

Running the example creates a single figure with 39 line plots: one for each target variable in the first chunk of the training dataset.

The original data is shown in orange, and the gaps have been imputed and are marked in blue.

The blue segments look reasonable.

![Line Plots for All Targets in Chunk 1 With Imputed Missing Values](https://img.kancloud.cn/c9/55/c955f04853bc8a51d8e1d5dc1bd67b82_1280x960.jpg)

Line Plots for All Targets in Chunk 1 With Imputed Missing Values

We can try the same approach on the 4th chunk, which has much more missing data.

```py
# pick one chunk
rows = train_chunks[3]
```

Running the example creates the same kind of figure, but here we can see large missing segments filled in with imputed values.

Again, the sequences look reasonable, even showing the daily seasonal cycle structure where appropriate.

![Line Plots for All Targets in Chunk 4 With Imputed Missing Values](https://img.kancloud.cn/71/70/717037a08e68fa54536a197d43af5f25_1280x960.jpg)

Line Plots for All Targets in Chunk 4 With Imputed Missing Values

This seems like a good start; you can explore other imputation strategies and see how they compare, either on the line plots or on final model skill.

### Autocorrelation Plots

Now that we know how to fill in the missing values, we can take a look at autocorrelation plots of the series data.

Autocorrelation plots summarize the relationship of each observation with observations at prior time steps. Together with partial autocorrelation plots, they can be used to determine the configuration of an ARMA model.

The statsmodels library provides the [plot_acf()](http://www.statsmodels.org/dev/generated/statsmodels.graphics.tsaplots.plot_acf.html) and [plot_pacf()](http://www.statsmodels.org/dev/generated/statsmodels.graphics.tsaplots.plot_pacf.html) functions that can be used to plot ACF and PACF plots respectively.

We can update plot_variables() to create these plots, one of each type for each of the 39 series. That is a lot of plots.

We will stack all ACF plots vertically on the left and all PACF plots vertically on the right; that is two columns of 39 plots. We will limit the lags considered by the plots to 24 time steps (hours) and ignore the correlation of each variable with itself, as it is redundant.

The updated plot_variables() function for plotting ACF and PACF plots is listed below.

```py
# plot acf and pacf plots for each imputed variable series
def plot_variables(chunk_train, hours, n_vars=39):
    pyplot.figure()
    n_plots = n_vars * 2
    j = 0
    lags = 24
    for i in range(1, n_plots, 2):
        # convert target number into column number
        col_ix = 3 + j
        j += 1
        # get series
        series = variable_to_series(chunk_train, col_ix)
        imputed = impute_missing(chunk_train, hours, series, col_ix)
        # acf
        axis = pyplot.subplot(n_vars, 2, i)
        plot_acf(imputed, ax=axis, lags=lags, zero=False)
        axis.set_title('')
        axis.set_xticklabels([])
        axis.set_yticklabels([])
        # pacf
        axis = pyplot.subplot(n_vars, 2, i+1)
        plot_pacf(imputed, ax=axis, lags=lags, zero=False)
        axis.set_title('')
        axis.set_xticklabels([])
        axis.set_yticklabels([])
    # show plot
    pyplot.show()
```

The complete example is listed below.

```py
# acf and pacf plots
from numpy import loadtxt
from numpy import nan
from numpy import isnan
from numpy import count_nonzero
from numpy import unique
from numpy import nanmedian
from matplotlib import pyplot
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf

# split the dataset by 'chunkID', return a list of chunks
def to_chunks(values, chunk_ix=0):
    chunks = list()
    # get the unique chunk ids
    chunk_ids = unique(values[:, chunk_ix])
    # group rows by chunk id
    for chunk_id in chunk_ids:
        selection = values[:, chunk_ix] == chunk_id
        chunks.append(values[selection, :])
    return chunks

# impute missing data
def impute_missing(rows, hours, series, col_ix):
    # count missing observations
    n_missing = count_nonzero(isnan(series))
    # calculate ratio of missing
    ratio = n_missing / float(len(series)) * 100
    # check for no data
    if ratio == 100.0:
        return series
    # impute missing using the median value for hour in the series
    imputed = list()
    for i in range(len(series)):
        if isnan(series[i]):
            # get all rows with the same hour
            matches = rows[rows[:,2]==hours[i]]
            # fill with median value
            value = nanmedian(matches[:, col_ix])
            imputed.append(value)
        else:
            imputed.append(series[i])
    return imputed

# interpolate series of hours (in place) in 24 hour time
def interpolate_hours(hours):
    # find the first hour
    ix = -1
    for i in range(len(hours)):
        if not isnan(hours[i]):
            ix = i
            break
    # fill-forward
    hour = hours[ix]
    for i in range(ix+1, len(hours)):
        # increment hour
        hour += 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24
    # fill-backward
    hour = hours[ix]
    for i in range(ix-1, -1, -1):
        # decrement hour
        hour -= 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24

# layout a variable with breaks in the data for missing positions
def variable_to_series(chunk_train, col_ix, n_steps=5*24):
    # lay out whole series
    data = [nan for _ in range(n_steps)]
    # mark all available data
    for i in range(len(chunk_train)):
        # get position in chunk
        position = int(chunk_train[i, 1] - 1)
        # store data
        data[position] = chunk_train[i, col_ix]
    return data

# plot acf and pacf plots for each imputed variable series
def plot_variables(chunk_train, hours, n_vars=39):
    pyplot.figure()
    n_plots = n_vars * 2
    j = 0
    lags = 24
    for i in range(1, n_plots, 2):
        # convert target number into column number
        col_ix = 3 + j
        j += 1
        # get series
        series = variable_to_series(chunk_train, col_ix)
        imputed = impute_missing(chunk_train, hours, series, col_ix)
        # acf
        axis = pyplot.subplot(n_vars, 2, i)
        plot_acf(imputed, ax=axis, lags=lags, zero=False)
        axis.set_title('')
        axis.set_xticklabels([])
        axis.set_yticklabels([])
        # pacf
        axis = pyplot.subplot(n_vars, 2, i+1)
        plot_pacf(imputed, ax=axis, lags=lags, zero=False)
        axis.set_title('')
        axis.set_xticklabels([])
        axis.set_yticklabels([])
    # show plot
    pyplot.show()

# load dataset
train = loadtxt('AirQualityPrediction/naive_train.csv', delimiter=',')
# group data by chunks
train_chunks = to_chunks(train)
# pick one chunk
rows = train_chunks[0]
# prepare sequence of hours for the chunk
hours = variable_to_series(rows, 2)
# interpolate hours
interpolate_hours(hours)
# plot variables
plot_variables(rows, hours)
```

Running the example creates a figure with many plots for the target variables in the first chunk of the training dataset.

You may need to increase the size of the plot window to better see the detail of each plot.

On the left, we can see that most ACF plots show a significant correlation (dots above the significance region) at lags of 1-2 steps, and in some cases perhaps 1-3 steps, with a slow, steady decrease over subsequent lags.

Similarly, on the right, we can see significant lags in the PACF plots at 1-2 time steps with a sharp drop-off.

This strongly suggests an autoregressive process with an order of perhaps 1, 2, or 3, e.g. AR(3).

In the ACF plots on the left, we can also see a daily cycle in the correlations. This may suggest some benefit in seasonally differencing the data prior to modeling, or in using an AR model capable of seasonal differencing.

![ACF and PACF Plots for Target Variables in Chunk 1](https://img.kancloud.cn/be/8b/be8b59181abc71b8186070d88f7088a2_1280x2018.jpg)

ACF and PACF Plots for Target Variables in Chunk 1

We can repeat this analysis for the target variables in other chunks, and the picture is much the same.

It suggests that we may be able to get away with one general AR model configuration for all series across all chunks.
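The daily cycle seen in the ACF plots suggests that a 24-hour seasonal difference might be worth trying before fitting a model. Neither this analysis nor the AR models below actually apply it; the sketch below is an added illustration of what a 24-step difference and its inversion could look like, demonstrated on a synthetic series rather than the imputed chunk data.

```py
# 24-hour seasonal differencing: an added sketch, not used by the AR models below
from math import sin, pi

# difference a series with a seasonal interval
def difference(series, interval=24):
    return [series[i] - series[i - interval] for i in range(interval, len(series))]

# invert a differenced forecast at lead time tau (valid for tau <= interval)
# by adding back the observation one season earlier
def invert_difference(history, yhat_diff, tau, interval=24):
    return yhat_diff + history[tau - 1 - interval]

# synthetic series with a daily cycle, standing in for an imputed target series
series = [10.0 + 5.0 * sin(2.0 * pi * (i % 24) / 24.0) for i in range(120)]
diff = difference(series, 24)
print(max(abs(d) for d in diff))  # ~0: the daily cycle has been removed
# a model would be fit on 'diff'; its forecast is inverted back to the original scale
yhat = invert_difference(series, 0.0, 1)
print(yhat)  # matches the value 24 hours before the forecast point
```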
## Develop an Autoregressive Model

In this section, we will develop an autoregressive model for the imputed target series data.

The first step is to implement a generic function for making a forecast for each chunk.

The function takes the training dataset and the input columns (chunk ID, position within chunk, and hour) for the test set, and returns forecasts for all chunks in the expected 3D format of [chunk][variable][time].

The function enumerates the chunks to forecast, then enumerates the 39 target columns, calling another new function named forecast_variable() to make a forecast for each lead time of a given target variable.

The complete function is listed below.

```py
# forecast for each chunk, returns [chunk][variable][time]
def forecast_chunks(train_chunks, test_input):
    lead_times = get_lead_times()
    predictions = list()
    # enumerate chunks to forecast
    for i in range(len(train_chunks)):
        # prepare sequence of hours for the chunk
        hours = variable_to_series(train_chunks[i], 2)
        # interpolate hours
        interpolate_hours(hours)
        # enumerate targets for chunk
        chunk_predictions = list()
        for j in range(39):
            yhat = forecast_variable(hours, train_chunks[i], test_input[i], lead_times, j)
            chunk_predictions.append(yhat)
        chunk_predictions = array(chunk_predictions)
        predictions.append(chunk_predictions)
    return array(predictions)
```

We can now implement a version of forecast_variable().

For each variable, we first check whether there is no data at all (e.g. all NaN), and if so, we return a NaN forecast for each forecast lead time.

We then create a series from the variable using variable_to_series(), and impute the missing values using the hour-based medians within the series by calling impute_missing(), both developed in the previous section.

Finally, we call a new function named fit_and_forecast() that fits a model and predicts the 10 forecast lead times.

```py
# forecast all lead times for one variable
def forecast_variable(hours, chunk_train, chunk_test, lead_times, target_ix):
    # convert target number into column number
    col_ix = 3 + target_ix
    # check for no data
    if not has_data(chunk_train[:, col_ix]):
        forecast = [nan for _ in range(len(lead_times))]
        return forecast
    # get series
    series = variable_to_series(chunk_train, col_ix)
    # impute
    imputed = impute_missing(chunk_train, hours, series, col_ix)
    # fit AR model and forecast
    forecast = fit_and_forecast(imputed)
    return forecast
```

We will fit an AR model to a given imputed series. To do this, we will use the [statsmodels ARIMA class](http://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_model.ARIMA.html). We use ARIMA instead of AR to provide some flexibility should you want to explore any of the broader family of ARIMA models.

First, we must define the model, including the order of the autoregressive process, such as AR(1).

```py
# define the model
model = ARIMA(series, order=(1,0,0))
```

Next, the model is fit on the imputed series. We turn off the verbose information during the fit by setting disp to False.

```py
# fit the model
model_fit = model.fit(disp=False)
```

The fit model is then used to forecast 72 hours beyond the end of the series.

```py
# forecast 72 hours
yhat = model_fit.predict(len(series), len(series)+72)
```

We are only interested in specific lead times, so we prepare an array of those lead times, subtract 1 to turn them into array indices, then use them to select the values at the 10 forecast lead times we are interested in.

```py
# extract lead times
lead_times = array(get_lead_times())
indices = lead_times - 1
return yhat[indices]
```

The statsmodels ARIMA model uses linear algebra libraries to fit the model under the covers, and sometimes the fit process can be unstable on some data. As such, it can throw an exception or report a lot of warnings.

We will trap any exception and return a NaN forecast, and ignore all warnings during fitting and evaluation.

The fit_and_forecast() function below ties all of this together.

```py
# fit AR model and generate a forecast
def fit_and_forecast(series):
    # define the model
    model = ARIMA(series, order=(1,0,0))
    # return a nan forecast in case of exception
    try:
        # ignore statsmodels warnings
        with catch_warnings():
            filterwarnings("ignore")
            # fit the model
            model_fit = model.fit(disp=False)
            # forecast 72 hours
            yhat = model_fit.predict(len(series), len(series)+72)
            # extract lead times
            lead_times = array(get_lead_times())
            indices = lead_times - 1
            return yhat[indices]
    except:
        return [nan for _ in range(len(get_lead_times()))]
```

We are now ready to evaluate an autoregressive process for each of the 39 series in each of the 207 training chunks.

We will start by testing an AR(1) process.

The complete code example is listed below.

```py
# autoregression forecast
from numpy import loadtxt
from numpy import nan
from numpy import isnan
from numpy import count_nonzero
from numpy import unique
from numpy import array
from numpy import nanmedian
from statsmodels.tsa.arima_model import ARIMA
from matplotlib import pyplot
from warnings import catch_warnings
from warnings import filterwarnings

# split the dataset by 'chunkID', return a list of chunks
def to_chunks(values, chunk_ix=0):
    chunks = list()
    # get the unique chunk ids
    chunk_ids = unique(values[:, chunk_ix])
    # group rows by chunk id
    for chunk_id in chunk_ids:
        selection = values[:, chunk_ix] == chunk_id
        chunks.append(values[selection, :])
    return chunks

# return a list of relative forecast lead times
def get_lead_times():
    return [1, 2, 3, 4, 5, 10, 17, 24, 48, 72]

# interpolate series of hours (in place) in 24 hour time
def interpolate_hours(hours):
    # find the first hour
    ix = -1
    for i in range(len(hours)):
        if not isnan(hours[i]):
            ix = i
            break
    # fill-forward
    hour = hours[ix]
    for i in range(ix+1, len(hours)):
        # increment hour
        hour += 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24
    # fill-backward
    hour = hours[ix]
    for i in range(ix-1, -1, -1):
        # decrement hour
        hour -= 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24

# return true if the array has any non-nan values
def has_data(data):
    return count_nonzero(isnan(data)) < len(data)

# impute missing data
def impute_missing(rows, hours, series, col_ix):
    # impute missing using the median value for hour in the series
    imputed = list()
    for i in range(len(series)):
        if isnan(series[i]):
            # get all rows with the same hour
            matches = rows[rows[:,2]==hours[i]]
            # fill with median value
            value = nanmedian(matches[:, col_ix])
            if isnan(value):
                value = 0.0
            imputed.append(value)
        else:
            imputed.append(series[i])
    return imputed

# layout a variable with breaks in the data for missing positions
def variable_to_series(chunk_train, col_ix, n_steps=5*24):
    # lay out whole series
    data = [nan for _ in range(n_steps)]
    # mark all available data
    for i in range(len(chunk_train)):
        # get position in chunk
        position = int(chunk_train[i, 1] - 1)
        # store data
        data[position] = chunk_train[i, col_ix]
    return data

# fit AR model and generate a forecast
def fit_and_forecast(series):
    # define the model
    model = ARIMA(series, order=(1,0,0))
    # return a nan forecast in case of exception
    try:
        # ignore statsmodels warnings
        with catch_warnings():
            filterwarnings("ignore")
            # fit the model
            model_fit = model.fit(disp=False)
            # forecast 72 hours
            yhat = model_fit.predict(len(series), len(series)+72)
            # extract lead times
            lead_times = array(get_lead_times())
            indices = lead_times - 1
            return yhat[indices]
    except:
        return [nan for _ in range(len(get_lead_times()))]

# forecast all lead times for one variable
def forecast_variable(hours, chunk_train, chunk_test, lead_times, target_ix):
    # convert target number into column number
    col_ix = 3 + target_ix
    # check for no data
    if not has_data(chunk_train[:, col_ix]):
        forecast = [nan for _ in range(len(lead_times))]
        return forecast
    # get series
    series = variable_to_series(chunk_train, col_ix)
    # impute
    imputed = impute_missing(chunk_train, hours, series, col_ix)
    # fit AR model and forecast
    forecast = fit_and_forecast(imputed)
    return forecast

# forecast for each chunk, returns [chunk][variable][time]
def forecast_chunks(train_chunks, test_input):
    lead_times = get_lead_times()
    predictions = list()
    # enumerate chunks to forecast
    for i in range(len(train_chunks)):
        # prepare sequence of hours for the chunk
        hours = variable_to_series(train_chunks[i], 2)
        # interpolate hours
        interpolate_hours(hours)
        # enumerate targets for chunk
        chunk_predictions = list()
        for j in range(39):
            yhat = forecast_variable(hours, train_chunks[i], test_input[i], lead_times, j)
            chunk_predictions.append(yhat)
        chunk_predictions = array(chunk_predictions)
        predictions.append(chunk_predictions)
    return array(predictions)

# convert the test dataset in chunks to [chunk][variable][time] format
def prepare_test_forecasts(test_chunks):
    predictions = list()
    # enumerate chunks to forecast
    for rows in test_chunks:
        # enumerate targets for chunk
        chunk_predictions = list()
        for j in range(3, rows.shape[1]):
            yhat = rows[:, j]
            chunk_predictions.append(yhat)
        chunk_predictions = array(chunk_predictions)
        predictions.append(chunk_predictions)
    return array(predictions)

# calculate the error between an actual and predicted value
def calculate_error(actual, predicted):
    # give the full actual value if predicted is nan
    if isnan(predicted):
        return abs(actual)
    # calculate abs difference
    return abs(actual - predicted)

# evaluate a forecast in the format [chunk][variable][time]
def evaluate_forecasts(predictions, testset):
    lead_times = get_lead_times()
    total_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]
    total_c, times_c = 0, [0 for _ in range(len(lead_times))]
    # enumerate test chunks
    for i in range(len(predictions)):
        # convert to forecasts
        actual = testset[i]
        predicted = predictions[i]
        # enumerate target variables
        for j in range(predicted.shape[0]):
            # enumerate lead times
            for k in range(len(lead_times)):
                # skip if actual is nan
                if isnan(actual[j, k]):
                    continue
                # calculate error
                error = calculate_error(actual[j, k], predicted[j, k])
                # update statistics
                total_mae += error
                times_mae[k] += error
                total_c += 1
                times_c[k] += 1
    # normalize summed absolute errors
    total_mae /= total_c
    times_mae = [times_mae[i]/times_c[i] for i in range(len(times_mae))]
    return total_mae, times_mae

# summarize scores
def summarize_error(name, total_mae, times_mae):
    # print summary
    lead_times = get_lead_times()
    formatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]
    s_scores = ', '.join(formatted)
    print('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))
    # plot summary
    pyplot.plot([str(x) for x in lead_times], times_mae, marker='.')
    pyplot.show()

# load dataset
train = loadtxt('AirQualityPrediction/naive_train.csv', delimiter=',')
test = loadtxt('AirQualityPrediction/naive_test.csv', delimiter=',')
# group data by chunks
train_chunks = to_chunks(train)
test_chunks = to_chunks(test)
# forecast
test_input = [rows[:, :3] for rows in test_chunks]
forecast = forecast_chunks(train_chunks, test_input)
# evaluate forecast
actual = prepare_test_forecasts(test_chunks)
total_mae, times_mae = evaluate_forecasts(forecast, actual)
# summarize forecast
summarize_error('AR', total_mae, times_mae)
```

Running the example first reports the overall MAE on the test set, followed by the MAE for each forecast lead time.

We can see that the model achieves a MAE of about 0.492, less than the MAE of 0.520 achieved by a naive persistence model. This suggests that the approach does indeed have some skill.

```py
AR: [0.492 MAE] +1 0.225, +2 0.342, +3 0.410, +4 0.475, +5 0.512, +10 0.593, +17 0.586, +24 0.588, +48 0.588, +72 0.604
```

A line plot of MAE per forecast lead time is created, showing that the forecast error increases approximately linearly with the forecast lead time.

![MAE vs Forecast Lead Time for AR(1)](https://img.kancloud.cn/d8/64/d86450b7405cb679010c66b857a56d7a_1280x960.jpg)

MAE vs Forecast Lead Time for AR(1)

We can change the code to test other AR models, specifically the order of the ARIMA model in the fit_and_forecast() function.

An AR(2) model can be defined as:

```py
model = ARIMA(series, order=(2,0,0))
```

Running the code with this update shows that the error drops further, to an overall MAE of about 0.490.

```py
AR: [0.490 MAE] +1 0.229, +2 0.342, +3 0.412, +4 0.470, +5 0.503, +10 0.563, +17 0.576, +24 0.605, +48 0.597, +72 0.608
```

We can also try AR(3):

```py
model = ARIMA(series, order=(3,0,0))
```

Re-running the example with the update shows an increase in overall MAE compared to AR(2).

AR(2) may be a good global configuration, although models tailored to each variable or each series may be expected to perform better overall.

```py
AR: [0.491 MAE] +1 0.232, +2 0.345, +3 0.412, +4 0.472, +5 0.504, +10 0.556, +17 0.575, +24 0.607, +48 0.599, +72 0.611
```
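One way such per-series tailoring could be automated is sketched below; this is an addition, not part of the original tutorial. It selects the AR order with the lowest AIC on the imputed training series, using the same old statsmodels ARIMA API as above, at the cost of fitting each candidate order once during selection.

```py
# per-series AR order selection by AIC (an added sketch, not from the original
# tutorial); assumes the old statsmodels ARIMA API used in this post
from warnings import catch_warnings, filterwarnings
from statsmodels.tsa.arima_model import ARIMA

# return the AR order in 'orders' with the lowest AIC on the series
def select_ar_order(series, orders=(1, 2, 3)):
    best_order, best_aic = None, float('inf')
    for p in orders:
        try:
            with catch_warnings():
                filterwarnings("ignore")
                model_fit = ARIMA(series, order=(p, 0, 0)).fit(disp=False)
            if model_fit.aic < best_aic:
                best_order, best_aic = p, model_fit.aic
        except:
            continue
    return best_order

# usage idea: inside forecast_variable(), after imputing the series, call
# p = select_ar_order(imputed), then fit the forecast model with order=(p,0,0)
# (e.g. by modifying fit_and_forecast() to accept the order as an argument)
```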
## Autoregressive Model With Global Impute Strategy

We can evaluate the AR(2) model with an alternate imputation strategy.

Instead of calculating the median value at the same hour within the series in the chunk, we can calculate the median value for the variable at the same hour across all chunks.

We can update impute_missing() to take all of the training chunks as an argument, then collect rows for a given hour from all chunks in order to calculate the median value used for imputing. The updated version of the function is listed below (the inner loop variable is renamed to chunk_rows so that it no longer shadows the rows argument).

```py
# impute missing data
def impute_missing(train_chunks, rows, hours, series, col_ix):
    # impute missing using the median value for hour in all series
    imputed = list()
    for i in range(len(series)):
        if isnan(series[i]):
            # collect all rows across all chunks for the hour
            all_rows = list()
            for chunk_rows in train_chunks:
                [all_rows.append(row) for row in chunk_rows[chunk_rows[:,2]==hours[i]]]
            # calculate the central tendency for target
            all_rows = array(all_rows)
            # fill with median value
            value = nanmedian(all_rows[:, col_ix])
            if isnan(value):
                value = 0.0
            imputed.append(value)
        else:
            imputed.append(series[i])
    return imputed
```

In order to pass train_chunks to the impute_missing() function, we must update forecast_variable() to take train_chunks as an argument and pass it along, then update forecast_chunks() to pass train_chunks.

The complete example with the global imputation strategy is listed below.

```py
# autoregression forecast with global impute strategy
from numpy import loadtxt
from numpy import nan
from numpy import isnan
from numpy import count_nonzero
from numpy import unique
from numpy import array
from numpy import nanmedian
from statsmodels.tsa.arima_model import ARIMA
from matplotlib import pyplot
from warnings import catch_warnings
from warnings import filterwarnings

# split the dataset by 'chunkID', return a list of chunks
def to_chunks(values, chunk_ix=0):
    chunks = list()
    # get the unique chunk ids
    chunk_ids = unique(values[:, chunk_ix])
    # group rows by chunk id
    for chunk_id in chunk_ids:
        selection = values[:, chunk_ix] == chunk_id
        chunks.append(values[selection, :])
    return chunks

# return a list of relative forecast lead times
def get_lead_times():
    return [1, 2, 3, 4, 5, 10, 17, 24, 48, 72]

# interpolate series of hours (in place) in 24 hour time
def interpolate_hours(hours):
    # find the first hour
    ix = -1
    for i in range(len(hours)):
        if not isnan(hours[i]):
            ix = i
            break
    # fill-forward
    hour = hours[ix]
    for i in range(ix+1, len(hours)):
        # increment hour
        hour += 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24
    # fill-backward
    hour = hours[ix]
    for i in range(ix-1, -1, -1):
        # decrement hour
        hour -= 1
        # check for a fill
        if isnan(hours[i]):
            hours[i] = hour % 24

# return true if the array has any non-nan values
def has_data(data):
    return count_nonzero(isnan(data)) < len(data)

# impute missing data
def impute_missing(train_chunks, rows, hours, series, col_ix):
    # impute missing using the median value for hour in all series
    imputed = list()
    for i in range(len(series)):
        if isnan(series[i]):
            # collect all rows across all chunks for the hour
            all_rows = list()
            for chunk_rows in train_chunks:
                [all_rows.append(row) for row in chunk_rows[chunk_rows[:,2]==hours[i]]]
            # calculate the central tendency for target
            all_rows = array(all_rows)
            # fill with median value
            value = nanmedian(all_rows[:, col_ix])
            if isnan(value):
                value = 0.0
            imputed.append(value)
        else:
            imputed.append(series[i])
    return imputed

# layout a variable with breaks in the data for missing positions
def variable_to_series(chunk_train, col_ix, n_steps=5*24):
    # lay out whole series
    data = [nan for _ in range(n_steps)]
    # mark all available data
    for i in range(len(chunk_train)):
        # get position in chunk
        position = int(chunk_train[i, 1] - 1)
        # store data
        data[position] = chunk_train[i, col_ix]
    return data

# fit AR model and generate a forecast
def fit_and_forecast(series):
    # define the model
    model = ARIMA(series, order=(2,0,0))
    # return a nan forecast in case of exception
    try:
        # ignore statsmodels warnings
        with catch_warnings():
            filterwarnings("ignore")
            # fit the model
            model_fit = model.fit(disp=False)
            # forecast 72 hours
            yhat = model_fit.predict(len(series), len(series)+72)
            # extract lead times
            lead_times = array(get_lead_times())
            indices = lead_times - 1
            return yhat[indices]
    except:
        return [nan for _ in range(len(get_lead_times()))]

# forecast all lead times for one variable
def forecast_variable(hours, train_chunks, chunk_train, chunk_test, lead_times, target_ix):
    # convert target number into column number
    col_ix = 3 + target_ix
    # check for no data
    if not has_data(chunk_train[:, col_ix]):
        forecast = [nan for _ in range(len(lead_times))]
        return forecast
    # get series
    series = variable_to_series(chunk_train, col_ix)
    # impute
    imputed = impute_missing(train_chunks, chunk_train, hours, series, col_ix)
    # fit AR model and forecast
    forecast = fit_and_forecast(imputed)
    return forecast

# forecast for each chunk, returns [chunk][variable][time]
def forecast_chunks(train_chunks, test_input):
    lead_times = get_lead_times()
    predictions = list()
    # enumerate chunks to forecast
    for i in range(len(train_chunks)):
        # prepare sequence of hours for the chunk
        hours = variable_to_series(train_chunks[i], 2)
        # interpolate hours
        interpolate_hours(hours)
        # enumerate targets for chunk
        chunk_predictions = list()
        for j in range(39):
            yhat = forecast_variable(hours, train_chunks, train_chunks[i], test_input[i], lead_times, j)
            chunk_predictions.append(yhat)
        chunk_predictions = array(chunk_predictions)
        predictions.append(chunk_predictions)
    return array(predictions)

# convert the test dataset in chunks to [chunk][variable][time] format
def prepare_test_forecasts(test_chunks):
    predictions = list()
    # enumerate chunks to forecast
    for rows in test_chunks:
        # enumerate targets for chunk
        chunk_predictions = list()
        for j in range(3, rows.shape[1]):
            yhat = rows[:, j]
            chunk_predictions.append(yhat)
        chunk_predictions = array(chunk_predictions)
        predictions.append(chunk_predictions)
    return array(predictions)

# calculate the error between an actual and predicted value
def calculate_error(actual, predicted):
    # give the full actual value if predicted is nan
    if isnan(predicted):
        return abs(actual)
    # calculate abs difference
    return abs(actual - predicted)

# evaluate a forecast in the format [chunk][variable][time]
def evaluate_forecasts(predictions, testset):
    lead_times = get_lead_times()
    total_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]
    total_c, times_c = 0, [0 for _ in range(len(lead_times))]
    # enumerate test chunks
    for i in range(len(predictions)):
        # convert to forecasts
        actual = testset[i]
        predicted = predictions[i]
        # enumerate target variables
        for j in range(predicted.shape[0]):
            # enumerate lead times
            for k in range(len(lead_times)):
                # skip if actual is nan
                if isnan(actual[j, k]):
                    continue
                # calculate error
                error = calculate_error(actual[j, k], predicted[j, k])
                # update statistics
                total_mae += error
                times_mae[k] += error
                total_c += 1
                times_c[k] += 1
    # normalize summed absolute errors
    total_mae /= total_c
    times_mae = [times_mae[i]/times_c[i] for i in range(len(times_mae))]
    return total_mae, times_mae

# summarize scores
def summarize_error(name, total_mae, times_mae):
    # print summary
    lead_times = get_lead_times()
    formatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]
    s_scores = ', '.join(formatted)
    print('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))
    # plot summary
    pyplot.plot([str(x) for x in lead_times], times_mae, marker='.')
    pyplot.show()

# load dataset
train = loadtxt('AirQualityPrediction/naive_train.csv', delimiter=',')
test = loadtxt('AirQualityPrediction/naive_test.csv', delimiter=',')
# group data by chunks
train_chunks = to_chunks(train)
test_chunks = to_chunks(test)
# forecast
test_input = [rows[:, :3] for rows in test_chunks]
forecast = forecast_chunks(train_chunks, test_input)
# evaluate forecast
actual = prepare_test_forecasts(test_chunks)
total_mae, times_mae = evaluate_forecasts(forecast, actual)
# summarize forecast
summarize_error('AR', total_mae, times_mae)
```

Running the example shows a further drop in overall MAE to about 0.487.

It may be interesting to explore imputation strategies that vary the method used to fill missing values based on how much data is missing in a series or the size of the gap being filled.

```py
AR: [0.487 MAE] +1 0.228, +2 0.339, +3 0.409, +4 0.469, +5 0.499, +10 0.560, +17 0.573, +24 0.600, +48 0.595, +72 0.606
```

A line plot of MAE vs forecast lead time is also created.

![MAE vs Forecast Lead Time for AR(2) Impute With Global Strategy](https://img.kancloud.cn/7c/2a/7c2a4e0ce5d6a5e9a0eea517654d2bf5_1280x960.jpg)

MAE vs Forecast Lead Time for AR(2) With Global Impute Strategy

## Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

* **Imputation Strategy**. Develop and evaluate one more alternate imputation strategy for the missing data in each series.
* **Data Preparation**. Explore whether data preparation techniques applied to each series, such as standardization, normalization, and power transforms, can improve model skill (a sketch follows this list).
* **Differencing**. Explore whether differencing, such as a 1-step or 24-step (seasonal) difference, can make each series stationary and in turn result in better forecasts.

If you explore any of these extensions, I'd love to hear about it.
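As flagged in the data preparation item above, here is a minimal sketch of one way a power transform could be wrapped around the fit-and-forecast step. It uses scipy's boxcox and its inverse; the offset handling for non-positive values is an assumption on my part, not something the original tutorial prescribes, and a constant series will still raise an error (caught by the harness's existing try/except).

```py
# power transform around an AR fit (a sketch for the data preparation
# extension; the offset for non-positive values is an assumption)
from numpy import array
from scipy.stats import boxcox
from scipy.special import inv_boxcox
from statsmodels.tsa.arima_model import ARIMA

# fit an AR(2) on a Box-Cox transformed series and invert the forecast
def fit_and_forecast_boxcox(series):
    series = array(series, dtype=float)
    # shift so all values are strictly positive, as Box-Cox requires
    offset = 1.0 - series.min() if series.min() <= 0 else 0.0
    transformed, lam = boxcox(series + offset)
    model_fit = ARIMA(transformed, order=(2, 0, 0)).fit(disp=False)
    yhat = model_fit.predict(len(transformed), len(transformed) + 72)
    # invert the transform and the shift
    return inv_boxcox(yhat, lam) - offset

# usage idea: swap this in for the model fit and 72-hour prediction inside
# fit_and_forecast() above, keeping the same try/except, warning suppression,
# and lead-time indexing
```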
## Further Reading

This section provides more resources on the topic if you are looking to go deeper.

### Posts

* [A Standard Multivariate, Multi-Step, and Multi-Site Time Series Forecasting Problem](https://machinelearningmastery.com/standard-multivariate-multi-step-multi-site-time-series-forecasting-problem/)
* [A Gentle Introduction to Autocorrelation and Partial Autocorrelation](https://machinelearningmastery.com/gentle-introduction-autocorrelation-partial-autocorrelation/)
* [How to Grid Search ARIMA Model Hyperparameters with Python](https://machinelearningmastery.com/grid-search-arima-hyperparameters-with-python/)

### APIs

* [statsmodels.graphics.tsaplots.plot_acf API](http://www.statsmodels.org/dev/generated/statsmodels.graphics.tsaplots.plot_acf.html)
* [statsmodels.graphics.tsaplots.plot_pacf API](http://www.statsmodels.org/dev/generated/statsmodels.graphics.tsaplots.plot_pacf.html)
* [statsmodels.tsa.arima_model.ARIMA API](http://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_model.ARIMA.html)

### Articles

* [EMC Data Science Global Hackathon (Air Quality Prediction)](https://www.kaggle.com/c/dsg-hackathon/data)
* [Chucking everything into a Random Forest: Ben Hamner on Winning the Air Quality Prediction Hackathon](http://blog.kaggle.com/2012/05/01/chucking-everything-into-a-random-forest-ben-hamner-on-winning-the-air-quality-prediction-hackathon/)
* [Winning Code for the EMC Data Science Global Hackathon (Air Quality Prediction)](https://github.com/benhamner/Air-Quality-Prediction-Hackathon-Winning-Model)
* [General approaches to partitioning the models?](https://www.kaggle.com/c/dsg-hackathon/discussion/1821)

## Summary

In this tutorial, you discovered how to develop autoregressive models for multi-step time series forecasting with multivariate air pollution time series.

Specifically, you learned:

* How to analyze and impute missing values in time series data.
* How to develop and evaluate an autoregressive model for multi-step time series forecasting.
* How to improve an autoregressive model using alternate data imputation methods.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.