# Working with CBOW Embeddings

In this recipe, we will implement the CBOW (continuous bag of words) method of word2vec. It is very similar to the `Skip-Gram` method, except that we predict a single target word from a surrounding window of context words.

## Getting ready

In this recipe, we will implement the `CBOW` method of word2vec. It is very similar to the `Skip-Gram` method, except that we predict a single target word from a surrounding window of context words.

In the previous example, we treated each combination of window word and target word as a separate pair of inputs and outputs. With CBOW, we instead add the surrounding window embeddings together to obtain one embedding with which to predict the target word embedding:

![](https://img.kancloud.cn/81/5d/815db8df1e42aa393a29fa17a6b20d70_868x663.png)

Figure 5: A depiction of how CBOW embedding data is created out of a window on an example sentence (window size = 1 on each side)

Most of the code stays the same, except that we need to change how we create the embeddings and how we generate the data from the sentences.

To make the code easier to read, we have moved all of the major functions into a separate file, named `text_helpers.py`, in the same directory. This file holds the data loading, text normalization, dictionary creation, and batch generation functions. Unless stated otherwise, these functions are exactly the same as those shown in the Working with Skip-Gram Embeddings recipe.

## How to do it

We will proceed with the recipe as follows:

1. We will start by loading the necessary libraries, including the `text_helpers.py` script mentioned earlier, which holds our functions for text loading and manipulation. Then we will start a graph session:

```py
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import random
import os
import pickle
import string
import requests
import collections
import io
import tarfile
import urllib.request
import text_helpers
from nltk.corpus import stopwords

sess = tf.Session()
```

1. We want to make sure that the temporary data and parameter saving folder exists before we start saving to it. Check this with the following code:

```py
# Make a saving directory if it doesn't exist
data_folder_name = 'temp'
if not os.path.exists(data_folder_name):
    os.makedirs(data_folder_name)
```

1. Then we will declare our model parameters, which is similar to what we did for the `Skip-Gram` method in the previous recipe:

```py
# Declare model parameters
batch_size = 500
embedding_size = 200
vocabulary_size = 2000
generations = 50000
model_learning_rate = 0.001

num_sampled = int(batch_size/2)
window_size = 3

# Add checkpoints to training
save_embeddings_every = 5000
print_valid_every = 5000
print_loss_every = 100

# Declare stop words
stops = stopwords.words('english')

# We pick some test words. We are expecting synonyms to appear
valid_words = ['love', 'hate', 'happy', 'sad', 'man', 'woman']
```

1. We have moved the data loading and text normalization functions to the separate file that we imported at the start. This file is available in both the GitHub repository, [https://github.com/nfmcclure/tensorflow_cookbook/tree/master/07_Natural_Language_Processing/05_Working_With_CBOW_Embeddings](https://github.com/nfmcclure/tensorflow_cookbook/tree/master/07_Natural_Language_Processing/05_Working_With_CBOW_Embeddings), and the Packt repository, [https://github.com/PacktPublishing/TensorFlow-Machine-Learning-Cookbook-Second-Edition](https://github.com/PacktPublishing/TensorFlow-Machine-Learning-Cookbook-Second-Edition). Now we can call these functions. We also only want reviews that contain three or more words. Use the following code:

```py
texts, target = text_helpers.load_movie_data(data_folder_name)
texts = text_helpers.normalize_text(texts, stops)

# Texts must contain at least 3 words
target = [target[ix] for ix, x in enumerate(texts) if len(x.split()) > 2]
texts = [x for x in texts if len(x.split()) > 2]
```

1. Now we will create our vocabulary dictionary, which will help us look up words. We also need a reverse dictionary that looks up words from indices, for when we want to print out the words closest to our validation set:

```py
word_dictionary = text_helpers.build_dictionary(texts, vocabulary_size)
word_dictionary_rev = dict(zip(word_dictionary.values(), word_dictionary.keys()))
text_data = text_helpers.text_to_numbers(texts, word_dictionary)

# Get validation word keys
valid_examples = [word_dictionary[x] for x in valid_words]
```

1. Next, we will initialize the word embeddings that we want to fit, and declare the model's data placeholders. Use the following code to do this:

```py
embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))

# Create data/target placeholders
x_inputs = tf.placeholder(tf.int32, shape=[batch_size, 2*window_size])
y_target = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
```
1. We can now create a way to handle the word embeddings. Since the CBOW model adds the embeddings of the context window together, we will create a loop and add up all of the embeddings in the window (an equivalent vectorized one-liner is sketched at the end of this recipe):

```py
# Lookup the word embeddings and
# Add together window embeddings:
embed = tf.zeros([batch_size, embedding_size])
for element in range(2*window_size):
    embed += tf.nn.embedding_lookup(embeddings, x_inputs[:, element])
```

1. We will use the noise-contrastive error loss function that is built into TensorFlow, because our categorical output is too sparse for the softmax to converge, as follows:

```py
# NCE loss parameters
nce_weights = tf.Variable(tf.truncated_normal([vocabulary_size, embedding_size],
                                              stddev=1.0 / np.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

# Declare loss function (NCE)
loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_weights,
                                     biases=nce_biases,
                                     inputs=embed,
                                     labels=y_target,
                                     num_sampled=num_sampled,
                                     num_classes=vocabulary_size))
```

1. Just as we did in the Skip-Gram recipe, we will use cosine similarity to print the words closest to our validation word dataset, to get an idea of how our embeddings are working. Use the following code to do this:

```py
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
```

1. To save the embeddings, we must load the TensorFlow `train.Saver` method. This method saves the whole graph by default, but we can give it an argument to save only the embedding variable, and we can also give it a specific name. Here, we give it the same name as the variable name in the graph:

```py
saver = tf.train.Saver({"embeddings": embeddings})
```

1. We will now declare our optimization function and initialize our model variables. Use the following code to do this:

```py
optimizer = tf.train.GradientDescentOptimizer(learning_rate=model_learning_rate).minimize(loss)
init = tf.global_variables_initializer()
sess.run(init)
```

1. Finally, we can loop through our training steps, printing out the loss, and saving the embeddings and dictionary at the intervals we specified:

```py
loss_vec = []
loss_x_vec = []
for i in range(generations):
    batch_inputs, batch_labels = text_helpers.generate_batch_data(text_data, batch_size,
                                                                  window_size, method='cbow')
    feed_dict = {x_inputs : batch_inputs, y_target : batch_labels}

    # Run the train step
    sess.run(optimizer, feed_dict=feed_dict)

    # Return the loss
    if (i+1) % print_loss_every == 0:
        loss_val = sess.run(loss, feed_dict=feed_dict)
        loss_vec.append(loss_val)
        loss_x_vec.append(i+1)
        print('Loss at step {} : {}'.format(i+1, loss_val))

    # Validation: Print some random words and top 5 related words
    if (i+1) % print_valid_every == 0:
        sim = sess.run(similarity, feed_dict=feed_dict)
        for j in range(len(valid_words)):
            valid_word = word_dictionary_rev[valid_examples[j]]
            top_k = 5 # number of nearest neighbors
            nearest = (-sim[j, :]).argsort()[1:top_k+1]
            log_str = "Nearest to {}:".format(valid_word)
            for k in range(top_k):
                close_word = word_dictionary_rev[nearest[k]]
                log_str = '{} {},'.format(log_str, close_word)
            print(log_str)

    # Save dictionary + embeddings
    if (i+1) % save_embeddings_every == 0:
        # Save vocabulary dictionary
        with open(os.path.join(data_folder_name,'movie_vocab.pkl'), 'wb') as f:
            pickle.dump(word_dictionary, f)

        # Save embeddings
        model_checkpoint_path = os.path.join(os.getcwd(),data_folder_name,'cbow_movie_embeddings.ckpt')
        save_path = saver.save(sess, model_checkpoint_path)
        print('Model saved in file: {}'.format(save_path))
```

1. This results in the following output:

```py
Loss at step 100 : 62.04829025268555
Loss at step 200 : 33.182334899902344
...
Loss at step 49900 : 1.6794960498809814
Loss at step 50000 : 1.5071022510528564
Nearest to love: clarity, cult, cliched, literary, memory,
Nearest to hate: bringing, gifted, almost, next, wish,
Nearest to happy: ensemble, fall, courage, uneven, girls,
Nearest to sad: santa, devoid, biopic, genuinely, becomes,
Nearest to man: project, stands, none, soul, away,
Nearest to woman: crush, even, x, team, ensemble,
Model saved in file: .../temp/cbow_movie_embeddings.ckpt
```
1. All but one of the functions in the `text_helpers.py` file come directly from the previous recipe. We will make a small addition to the `generate_batch_data()` function by adding a `cbow` method, as follows:

```py
elif method=='cbow':
    batch_and_labels = [(x[:y] + x[(y+1):], x[y]) for x,y in zip(window_sequences, label_indices)]

    # Only keep windows with consistent 2*window_size
    batch_and_labels = [(x,y) for x,y in batch_and_labels if len(x)==2*window_size]
    batch, labels = [list(x) for x in zip(*batch_and_labels)]
```

## How it works

This recipe is very similar to creating embeddings with Skip-Gram. The main differences are how we generate the data and how we combine the embeddings.

For this recipe, we load the data, normalize the text, create a vocabulary dictionary, use the dictionary to look up embeddings, combine those embeddings, and train a neural network to predict the target word.

## There's more

It is worth noting that the `CBOW` method trains on the summed embeddings of the surrounding window to predict the target word. One consequence of this is that the `CBOW` method of word2vec has a smoothing effect that the `Skip-Gram` method lacks, and it is reasonable to think that this may be preferable for smaller text datasets. A standalone toy example contrasting the two pairing schemes is sketched below.
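To make the difference in data pairing concrete, here is a minimal, self-contained sketch. It is not taken from `text_helpers.py`; the toy sentence and variable names are purely illustrative, and it assumes one context word on each side of the target, as in Figure 5:

```py
# Illustrative only: the pairs Skip-Gram vs. CBOW would generate for one
# toy sentence with window_size = 1 (one context word on each side).
window_size = 1
sentence = ['the', 'movie', 'was', 'great']

skip_gram_pairs = []   # (center_word, context_word), one pair per neighbour
cbow_pairs = []        # (whole_context_window, center_word), one pair per position

for ix, center in enumerate(sentence):
    window = sentence[max(ix - window_size, 0):ix] + sentence[ix + 1:ix + window_size + 1]
    skip_gram_pairs.extend([(center, context) for context in window])
    # CBOW keeps only positions with a full window of 2*window_size words,
    # mirroring the len(x)==2*window_size filter in generate_batch_data()
    if len(window) == 2 * window_size:
        cbow_pairs.append((window, center))

print(skip_gram_pairs)
# [('the', 'movie'), ('movie', 'the'), ('movie', 'was'),
#  ('was', 'movie'), ('was', 'great'), ('great', 'was')]
print(cbow_pairs)
# [(['the', 'was'], 'movie'), (['movie', 'great'], 'was')]
```

Each CBOW pair feeds the whole context window into the model at once (the embeddings of `['the', 'was']` are summed to predict `movie`), whereas Skip-Gram trains on each word pair separately.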
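As a side note on the embedding-summation loop shown earlier (the one that accumulates into `embed`): it can also be written as a single vectorized operation. This is only an alternative sketch of the same computation, assuming the same `embeddings` variable and `x_inputs` placeholder defined earlier in the recipe:

```py
# tf.nn.embedding_lookup on the whole [batch_size, 2*window_size] index matrix
# returns a [batch_size, 2*window_size, embedding_size] tensor; summing over
# axis 1 collapses the window into one embedding per example, just like the
# explicit loop over window columns.
embed = tf.reduce_sum(tf.nn.embedding_lookup(embeddings, x_inputs), axis=1)
```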
