[TOC]

## Out of the box

* `feature-extraction` (vector representations of text)
* `fill-mask` (fill in masked words or spans)
* `ner` (named entity recognition)
* `question-answering`
* `sentiment-analysis`
* `summarization`
* `text-generation`
* `translation` (machine translation)
* `zero-shot-classification`

1. These tasks can be used directly in a pipeline, and the most suitable model is selected and downloaded automatically.
2. In other words, the names above are not models themselves, only task categories.
3. A specific model can also be given explicitly: `generator = pipeline("text-generation", model="distilgpt2")`

## Examples

### Sentiment analysis

```
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I've been waiting for a HuggingFace course my whole life.")
print(result)

results = classifier(
    ["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"]
)
print(results)
```

Output:

```
No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
[{'label': 'POSITIVE', 'score': 0.9598048329353333}]
[{'label': 'POSITIVE', 'score': 0.9598048329353333}, {'label': 'NEGATIVE', 'score': 0.9994558691978455}]
```

The distilbert-base-uncased-finetuned-sst-2-english model was selected automatically.

### Zero-shot classification

```
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier(
    "This is a course about the Transformers library",
    candidate_labels=["education", "politics", "business"],
)
print(result)
```

Output:

```
No model was supplied, defaulted to facebook/bart-large-mnli (https://huggingface.co/facebook/bart-large-mnli)
{'sequence': 'This is a course about the Transformers library', 'labels': ['education', 'business', 'politics'], 'scores': [0.8445973992347717, 0.11197526752948761, 0.043427325785160065]}
```

The facebook/bart-large-mnli model was selected automatically.

### Text generation

#### English generation

```
from transformers import pipeline

generator = pipeline("text-generation")
results = generator("In this course, we will teach you how to")
print(results)

results = generator(
    "In this course, we will teach you how to",
    num_return_sequences=2,
    max_length=50,
)
print(results)
```

Output:

```
No model was supplied, defaulted to gpt2 (https://huggingface.co/gpt2)
[{'generated_text': "In this course, we will teach you how to use data and models that can be applied in any real-world, everyday situation. In most cases, the following will work better than other courses I've offered for an undergrad or student. In order"}]
[{'generated_text': 'In this course, we will teach you how to make your own unique game called "Mono" from scratch by doing a game engine, a framework and the entire process starting with your initial project. We are planning to make some basic gameplay scenarios and'}, {'generated_text': 'In this course, we will teach you how to build a modular computer, how to run it on a modern Windows machine, how to install packages, and how to debug and debug systems. We will cover virtualization and virtualization without a programmer,'}]
```

#### Classical Chinese poetry generation

```
from transformers import pipeline

generator = pipeline("text-generation", model="uer/gpt2-chinese-poem")
results = generator(
    "[CLS] 萬 疊 春 山 積 雨 晴 ,",
    max_length=40,
    num_return_sequences=2,
)
print(results)
```

Output:

```
[{'generated_text': '[CLS] 萬 疊 春 山 積 雨 晴 , 孤 舟 遙 送 子 陵 行 。 別 情 共 嘆 孤 帆 遠 , 交 誼 深 憐 一 座 傾 。 白 日 風 波 身 外 幻'}, {'generated_text': '[CLS] 萬 疊 春 山 積 雨 晴 , 滿 川 煙 草 踏 青 行 。 何 人 喚 起 傷 春 思 , 江 畔 畫 船 雙 櫓 聲 。 桃 花 帶 雨 弄 晴 光'}]
```

### Named entity recognition

The named entity recognition (NER) pipeline extracts entities of specified types from text, such as people, locations, and organizations.

```
from transformers import pipeline

ner = pipeline("ner", grouped_entities=True)
results = ner("My name is Sylvain and I work at Hugging Face in Brooklyn.")
print(results)
```

### Question answering

* **Extractive QA:** assumes the answer is contained in the document, so the answer is extracted directly from it;
* **Multiple-choice QA:** picks the answer from several given options, much like a reading-comprehension exercise;
* **Free-form QA:** generates the answer text directly, with no constraints on its format.

#### Extractive

```
from transformers import pipeline

question_answerer = pipeline("question-answering")
answer = question_answerer(
    question="Where do I work?",
    context="My name is Sylvain and I work at Hugging Face in Brooklyn",
)
print(answer)
```

Output:

```
No model was supplied, defaulted to distilbert-base-cased-distilled-squad (https://huggingface.co/distilbert-base-cased-distilled-squad)
{'score': 0.6949771046638489, 'start': 33, 'end': 45, 'answer': 'Hugging Face'}
```

### Summarization
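The summarization pipeline follows the same pattern as the tasks above. A minimal sketch, assuming the default checkpoint that `pipeline("summarization")` downloads when no model is supplied; the input text and the `max_length`/`min_length` settings are illustrative only:

```
from transformers import pipeline

# No model specified: the pipeline falls back to its default summarization checkpoint
summarizer = pipeline("summarization")

text = (
    "The Transformers library provides thousands of pretrained models for tasks such as "
    "classification, information extraction, question answering, summarization, translation, "
    "and text generation. Its pipeline API wraps tokenization, model inference, and "
    "post-processing into a single call, so a task can be run with only a few lines of code."
)

# max_length / min_length bound the length of the generated summary
result = summarizer(text, max_length=50, min_length=10)
print(result)  # [{'summary_text': ...}]
```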