# Skip-gram model in Keras

The workflow for using an embedding model in Keras is the same as in TensorFlow:

*   Create the network architecture as a Keras functional or sequential model
*   Feed true and fake pairs of target and context words to the network
*   Look up the word vectors of the target and context words
*   Take the dot product of the word vectors to get a similarity score
*   Pass the similarity score through a sigmoid layer to emit the output as a true or fake pair

Now let's implement these steps with the Keras functional API:

1.  Import the required libraries:

```py
import numpy as np
import tensorflow as tf
import keras
from keras.models import Model
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.preprocessing.sequence import skipgrams
from keras.layers import Input, Dense, Reshape, Dot
```

Reset the graph to clear any leftover effects of earlier runs in the Jupyter Notebook:

```py
# reset the jupyter buffers
tf.reset_default_graph()
keras.backend.clear_session()
```

1.  Create a validation set, which we will use to print the similar words that our model finds at the end of training:

```py
valid_size = 8
x_valid = np.random.choice(valid_size * 10, valid_size, replace=False)
print('valid: ', x_valid)
```

1.  Define the required hyperparameters:

```py
batch_size = 1024
embedding_size = 512
n_negative_samples = 64
ptb.skip_window = 2
```

1.  Use the `make_sampling_table()` function from `keras.preprocessing.sequence` to create a sampling table whose size equals the vocabulary length. Next, use the `skipgrams()` function from `keras.preprocessing.sequence` to generate the pairs of context and target words, along with labels indicating whether each pair is a true or a fake pair:

```py
sample_table = sequence.make_sampling_table(ptb.vocab_len)
pairs, labels = sequence.skipgrams(ptb.part['train'], ptb.vocab_len,
                                   window_size=ptb.skip_window,
                                   sampling_table=sample_table)
```

1.  Let's print some of the fake and true pairs generated with the following code:

```py
print('The skip-gram pairs : target,context')
for i in range(5 * ptb.skip_window):
    print(['{} {}'.format(id, ptb.id2word[id]) \
           for id in pairs[i]], ':', labels[i])
```

The pairs are as follows:

```py
The skip-gram pairs : target,context
['547 trying', '5 to'] : 1
['4845 bargain', '2 <eos>'] : 1
['1705 election', '198 during'] : 1
['4704 flows', '8117 gun'] : 0
['13 is', '37 company'] : 1
['625 above', '132 three'] : 1
['5768 pessimistic', '1934 immediate'] : 0
['637 china', '2 <eos>'] : 1
['258 five', '1345 pence'] : 1
['1956 chrysler', '8928 exercises'] : 0
```

1.  Split the target and context words out of the pairs generated above so that they can be fed to the model, and convert them to two-dimensional arrays (`dsu.to2d` is the book's dataset-utility helper that reshapes a 1-D array into a 2-D array with the specified unit axis):

```py
x, y = zip(*pairs)
x = np.array(x, dtype=np.int32)
x = dsu.to2d(x, unit_axis=1)
y = np.array(y, dtype=np.int32)
y = dsu.to2d(y, unit_axis=1)
labels = np.array(labels, dtype=np.int32)
labels = dsu.to2d(labels, unit_axis=1)
```

1.  Define the architecture of the network. As we discussed, the target and context words must be fed into the network, and their vectors need to be looked up in the embedding layers. So first we define the input, embedding, and reshape layers for the target and context words respectively:

```py
# build the target word model
target_in = Input(shape=(1,), name='target_in')
target = Embedding(ptb.vocab_len, embedding_size, input_length=1,
                   name='target_em')(target_in)
target = Reshape((embedding_size, 1), name='target_re')(target)

# build the context word model
context_in = Input((1,), name='context_in')
context = Embedding(ptb.vocab_len, embedding_size, input_length=1,
                    name='context_em')(context_in)
context = Reshape((embedding_size, 1), name='context_re')(context)
```

1.  Next, take the dot product of these two models and feed it into a sigmoid layer to produce the output label:

```py
# merge the models with the dot product to check for
# similarity and add a sigmoid layer
output = Dot(axes=1, name='output_dot')([target, context])
output = Reshape((1,), name='output_re')(output)
output = Dense(1, activation='sigmoid', name='output_sig')(output)
```

1.  Build the functional model from the input and output layers we just created:

```py
# create the functional model for finding word vectors
model = Model(inputs=[target_in, context_in], outputs=output)
model.compile(loss='binary_crossentropy', optimizer='adam')
```
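Before building the similarity model in the next step, a quick forward pass on a dummy batch is a cheap way to confirm that the graph wires together as intended. This sanity check is our own aside, not part of the original recipe; it assumes the `model` and `ptb` objects defined above:

```py
import numpy as np

# a dummy batch of four (target, context) id pairs, shaped (N, 1)
# like the real training inputs prepared above
dummy_target = np.random.randint(0, ptb.vocab_len, size=(4, 1))
dummy_context = np.random.randint(0, ptb.vocab_len, size=(4, 1))

# each prediction is sigmoid(target_vector . context_vector): the
# model's probability that the pair is a true skip-gram pair
preds = model.predict([dummy_target, dummy_context])
print(preds.shape)  # (4, 1), values in (0, 1)
```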
1.  Additionally, given an input target word, build a model that predicts the similarity to all words:

```py
# merge the models and create a model to check for cosine similarity
similarity = Dot(axes=0, normalize=True,
                 name='sim_dot')([target, context])
similarity_model = Model(inputs=[target_in, context_in],
                         outputs=similarity)
```

Let's print the model summary:

```py
__________________________________________________________________________
Layer (type)             Output Shape     Param #    Connected to
==========================================================================
target_in (InputLayer)   (None, 1)        0
__________________________________________________________________________
context_in (InputLayer)  (None, 1)        0
__________________________________________________________________________
target_em (Embedding)    (None, 1, 512)   5120000    target_in[0][0]
__________________________________________________________________________
context_em (Embedding)   (None, 1, 512)   5120000    context_in[0][0]
__________________________________________________________________________
target_re (Reshape)      (None, 512, 1)   0          target_em[0][0]
__________________________________________________________________________
context_re (Reshape)     (None, 512, 1)   0          context_em[0][0]
__________________________________________________________________________
output_dot (Dot)         (None, 1, 1)     0          target_re[0][0]
                                                     context_re[0][0]
__________________________________________________________________________
output_re (Reshape)      (None, 1)        0          output_dot[0][0]
__________________________________________________________________________
output_sig (Dense)       (None, 1)        2          output_re[0][0]
==========================================================================
Total params: 10,240,002
Trainable params: 10,240,002
Non-trainable params: 0
__________________________________________________________________________
```

1.  Next, train the model. We train for only 5 epochs, but you should try many more, at least 1,000 or 10,000. Keep in mind that this will take several hours, since this is not the most optimized code. You are welcome to optimize it further using tips and tricks from this book and other sources.

```py
n_epochs = 5
batch_size = 1024
model.fit([x, y], labels, batch_size=batch_size, epochs=n_epochs)
```

Let's print the similarity of words based on the word vectors discovered by this model:

```py
# print closest words to validation set at end of training
top_k = 5
y_val = np.arange(ptb.vocab_len, dtype=np.int32)
y_val = dsu.to2d(y_val, unit_axis=1)
for i in range(valid_size):
    x_val = np.full(shape=(ptb.vocab_len, 1), fill_value=x_valid[i],
                    dtype=np.int32)
    similarity_scores = similarity_model.predict([x_val, y_val])
    similarity_scores = similarity_scores.flatten()
    similar_words = (-similarity_scores).argsort()[1:top_k + 1]
    similar_str = 'Similar to {0:}:'.format(ptb.id2word[x_valid[i]])
    for k in range(top_k):
        similar_str = '{0:} {1:},'.format(similar_str,
                                          ptb.id2word[similar_words[k]])
    print(similar_str)
```

We get the following output:

```py
Similar to we: rake, kia, sim, ssangyong, memotec,
Similar to been: nahb, sim, rake, punts, rubens,
Similar to also: photography, snack-food, rubens, nahb, ssangyong,
Similar to of: isi, rake, memotec, kia, mlx,
Similar to last: rubens, punts, memotec, sim, photography,
Similar to u.s.: mlx, memotec, punts, rubens, kia,
Similar to an: memotec, isi, ssangyong, rake, sim,
Similar to trading: rake, rubens, swapo, mlx, nahb,
```

So far, we have seen how to create word vectors or embeddings using TensorFlow and its high-level library Keras. Now let's look at how to use TensorFlow and Keras to learn models and apply them to predictions for some NLP-related tasks.
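As a closing aside (our own addition, not part of the original recipe): the learned vectors can also be reused directly, since the weight matrix of the `target_em` embedding layer is the word-vector lookup table. Below is a minimal NumPy sketch of extracting that matrix and querying nearest neighbours by cosine similarity; it assumes the `model` and `ptb` objects above, plus a hypothetical `ptb.word2id` reverse mapping analogous to the `ptb.id2word` dictionary used earlier:

```py
import numpy as np

# the first weight of the 'target_em' layer is the
# (vocab_len, embedding_size) embedding matrix learned above
embeddings = model.get_layer('target_em').get_weights()[0]

# L2-normalize the rows so dot products become cosine similarities
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

def nearest(word, top_k=5):
    """Return the top_k words closest to `word` by cosine similarity."""
    scores = embeddings @ embeddings[ptb.word2id[word]]  # word2id assumed
    best = (-scores).argsort()[1:top_k + 1]  # index 0 is the word itself
    return [ptb.id2word[i] for i in best]

print(nearest('company'))
```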