# Tokenizers

A **tokenizer** receives a stream of characters, breaks it up into individual **tokens** (usually individual words), and outputs a stream of tokens. For instance, the [whitespace](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-whitespace-tokenizer.html) tokenizer breaks text into tokens whenever it sees any whitespace: it would turn the text "**Quick brown fox!**" into the terms [**Quick**, **brown**, **fox!**].

The tokenizer is also responsible for recording the order or **position** of each term (used for phrase and word-proximity queries), and the **start** and **end** **character offsets** of the original word each term represents (used for highlighting search results).

**Elasticsearch** ships with a number of built-in tokenizers, which can be used to build [custom analyzers](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-custom-analyzer.html "Custom Analyzer").

### Word Oriented Tokenizers

The following tokenizers are usually used for splitting full text into individual words:

[Standard Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-standard-tokenizer.html)
The **standard tokenizer** divides text on word boundaries, as defined by the Unicode Text Segmentation algorithm. It removes most punctuation and is the best choice for most languages.

[Letter Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-letter-tokenizer.html)
The **letter tokenizer** divides text whenever it encounters a character that is not a letter.

[Lowercase Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-lowercase-tokenizer.html)
The **lowercase tokenizer**, like the letter tokenizer, divides text whenever it encounters a non-letter, and additionally lowercases every resulting term.

[Whitespace Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-whitespace-tokenizer.html)
The **whitespace tokenizer** divides text whenever it encounters any whitespace character.

[UAX URL Email Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-uaxurlemail-tokenizer.html)
The **uax_url_email tokenizer** is like the standard tokenizer, except that it treats **URLs** and **email** addresses as single tokens.

[Classic Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-classic-tokenizer.html)
The **classic tokenizer** is a grammar-based tokenizer for the English language.

[Thai Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-thai-tokenizer.html)
The **thai tokenizer** segments Thai text into words.

### Partial Word Tokenizers

These tokenizers break up text or words into small fragments, for **partial word** matching:

[N-Gram Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-ngram-tokenizer.html)
The **ngram tokenizer** breaks text up on specified characters (e.g. whitespace or punctuation), then returns the **n-grams** of each word: a sliding window of contiguous characters, e.g. **quick** → [**qu**, **ui**, **ic**, **ck**].

[Edge N-Gram Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-edgengram-tokenizer.html)
The **edge_ngram tokenizer** breaks text up on specified characters (e.g. whitespace or punctuation), then returns **n-grams** anchored to the start of each word, e.g. **quick** → [**q**, **qu**, **qui**, **quic**, **quick**].

### Structured Text Tokenizers

The following tokenizers are usually used with structured text such as identity numbers, email addresses, zip codes, and file paths:

[Keyword Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-keyword-tokenizer.html)
The **keyword tokenizer** does nothing at all: it outputs the entire input text as a single token. It is usually combined with **token** filters (such as [lowercase](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-lowercase-tokenfilter.html)) to normalise the analysed terms.

[Pattern Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-pattern-tokenizer.html)
The **pattern tokenizer** uses a regular expression either to split text whenever it matches a word separator, or to capture matching text as terms.

[Path Tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.3/analysis-pathhierarchy-tokenizer.html)
The **path_hierarchy tokenizer** takes a hierarchical value like a filesystem path, splits on the path separator, and emits a term for each node in the tree, e.g. **/foo/bar/baz** → [**/foo**, **/foo/bar**, **/foo/bar/baz**].
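The real tokenizers are implemented inside Lucene and are best explored with Elasticsearch's `_analyze` API, but the splitting rules for the whitespace, letter, and lowercase tokenizers can be sketched in a few lines of Python. This is an illustration only and deliberately simplified: the function names are hypothetical, and the letter-based variants here match ASCII letters, whereas the real tokenizers use Unicode letter classes.

```python
import re

def whitespace_tokenize(text):
    # whitespace tokenizer: split on runs of any whitespace character.
    return [t for t in re.split(r"\s+", text) if t]

def letter_tokenize(text):
    # letter tokenizer (ASCII approximation): split on any non-letter.
    return [t for t in re.split(r"[^A-Za-z]+", text) if t]

def lowercase_tokenize(text):
    # lowercase tokenizer: letter tokenizer plus lowercasing of each term.
    return [t.lower() for t in letter_tokenize(text)]

print(whitespace_tokenize("Quick brown fox!"))  # ['Quick', 'brown', 'fox!']
print(letter_tokenize("Quick brown fox!"))      # ['Quick', 'brown', 'fox']
print(lowercase_tokenize("Quick brown fox!"))   # ['quick', 'brown', 'fox']
```

Note how the whitespace tokenizer keeps the `!` attached to `fox!`, while the letter-based tokenizers strip it, because punctuation is a non-letter and therefore acts as a separator.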
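The n-gram windows described above are easy to reproduce for a single word. The sketch below (hypothetical helper names; defaults chosen to match the `quick` examples, not Elasticsearch's actual defaults) shows the difference between a sliding window and a start-anchored window:

```python
def ngrams(token, min_gram=2, max_gram=2):
    # ngram tokenizer per word: every contiguous substring whose length
    # lies in [min_gram, max_gram] (a sliding window over the token).
    out = []
    for n in range(min_gram, max_gram + 1):
        out.extend(token[i:i + n] for i in range(len(token) - n + 1))
    return out

def edge_ngrams(token, min_gram=1, max_gram=5):
    # edge_ngram tokenizer per word: prefixes anchored to the token start.
    return [token[:n] for n in range(min_gram, min(max_gram, len(token)) + 1)]

print(ngrams("quick"))       # ['qu', 'ui', 'ic', 'ck']
print(edge_ngrams("quick"))  # ['q', 'qu', 'qui', 'quic', 'quick']
```

Edge n-grams are the usual building block for search-as-you-type prefix matching, while plain n-grams also match fragments in the middle of a word.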
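The path_hierarchy behavior can likewise be sketched in Python. This illustration (hypothetical function name) assumes absolute paths with a single leading delimiter; the real tokenizer also supports options such as `replacement` and `reverse` that are not modeled here:

```python
def path_hierarchy_tokenize(path, delimiter="/"):
    # path_hierarchy tokenizer: emit one term per node in the tree,
    # i.e. every ancestor prefix of the path including the path itself.
    parts = [p for p in path.split(delimiter) if p]
    return [delimiter + delimiter.join(parts[:i + 1]) for i in range(len(parts))]

print(path_hierarchy_tokenize("/foo/bar/baz"))
# ['/foo', '/foo/bar', '/foo/bar/baz']
```

Indexing all ancestor prefixes is what lets a query for `/foo/bar` match every document stored anywhere under that directory.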