# Generating Text with RNN Models in TensorFlow and Keras

Text generation is one of the main applications of RNN models in NLP. An RNN model is trained on a text sequence and then used to generate new text by providing a seed sequence as input. Let's try this with the text8 dataset.

Let's load the text8 dataset and print the first 100 words:

```py
from datasetslib.text8 import Text8
text8 = Text8()
# downloads data, converts words to ids, converts files to a list of ids
text8.load_data()
print(' '.join([text8.id2word[x_i] for x_i in text8.part['train'][0:100]]))
```

We get the following output:

```py
anarchism originated as a term of abuse first used against early working class radicals including the diggers of the english revolution and the sans culottes of the french revolution whilst the term is still used in a pejorative way to describe any act that used violent means to destroy the organization of society it has also been taken up as a positive label by self defined anarchists the word anarchism is derived from the greek without archons ruler chief king anarchism as a political philosophy is the belief that rulers are unnecessary and should be abolished although there are differing
```

For our notebook example, we clip the loaded data to 5,000 words of text, since larger texts require advanced techniques, such as distributed or batched training, and we want to keep the example simple.

```py
from datasetslib.text8 import Text8
text8 = Text8()
text8.load_data(clip_at=5000)
print('Train:', text8.part['train'][0:5])
print('Vocabulary Length = ', text8.vocab_len)
```

We see that the vocabulary is now reduced to 1,457 words:

```py
Train: [ 8 497   7   5 116]
Vocabulary Length =  1457
```

In our example, we construct a very simple one-layer LSTM. To train the model, we use 5 words as input to learn the parameters for predicting the sixth word. The input layer is the 5 words, the hidden layer is an LSTM cell with 128 units, and the final layer is a fully connected layer whose output size equals the vocabulary size. Since this is a demonstration example, we do not use word vectors; instead, we use very simple one-hot encoded output vectors.

Once the model is trained, we test it with 2 different strings as seeds for generating further text:

*   `random5`: a string generated from 5 randomly selected words.
*   `first5`: a string generated from the first 5 words of the text.

```py
random5 = np.random.choice(n_x * 50, n_x, replace=False)
print('Random 5 words: ', id2string(random5))
first5 = text8.part['train'][0:n_x].copy()
print('First 5 words: ', id2string(first5))
```

We see that the seed strings are:

```py
Random 5 words:  free bolshevik be n another
First 5 words:  anarchism originated as a term
```

The random seed string may differ for your run.

Now let's first create the LSTM model in TensorFlow.
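Since the example one-hot encodes words rather than using learned word vectors, here is a minimal NumPy sketch of how a 5-word window of ids can be turned into a one-hot input matrix. The `one_hot_window` helper and its names are illustrative, not part of `datasetslib`; the window below uses the first five train ids from the clipped text8 output above.

```python
import numpy as np

def one_hot_window(word_ids, vocab_len):
    """One-hot encode a window of word ids into a (window_size, vocab_len) matrix."""
    onehot = np.zeros((len(word_ids), vocab_len), dtype=np.float32)
    onehot[np.arange(len(word_ids)), word_ids] = 1.0
    return onehot

window = [8, 497, 7, 5, 116]            # first five train ids from the example
x = one_hot_window(window, vocab_len=1457)
print(x.shape)                           # (5, 1457)
```

Each row has a single 1.0 at the position of that word's id, so the fully connected output layer of size `vocab_len` can be trained against the same encoding of the sixth word.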
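The generation loop itself is independent of the particular model: feed the current 5-word window, pick a next word from the predicted vocabulary distribution, append it, and slide the window forward by one. A sketch of that loop, where `predict_next` is a stand-in for any trained model returning a vocabulary-sized probability vector (the dummy model below exists only to make the sketch runnable):

```python
import numpy as np

def generate(seed_ids, predict_next, n_words):
    """Slide a fixed-size window: append the predicted word, drop the oldest."""
    window = list(seed_ids)
    out = list(seed_ids)
    for _ in range(n_words):
        probs = predict_next(window)      # vocabulary-sized probability vector
        next_id = int(np.argmax(probs))   # greedy choice; sampling also works
        out.append(next_id)
        window = window[1:] + [next_id]   # slide the 5-word window forward
    return out

# toy stand-in model: "predicts" (sum of window ids) mod vocab size
vocab_len = 1457
dummy = lambda w: np.eye(vocab_len)[sum(w) % vocab_len]
print(generate([8, 497, 7, 5, 116], dummy, 3))
```

With a trained LSTM, `predict_next` would one-hot encode the window, run the forward pass, and return the softmax output of the final fully connected layer.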