## Query Phase

During the initial _query phase_, the query is broadcast to a shard copy (a primary or replica shard) of every shard in the index. Each shard executes the search locally and builds a _priority queue_ of matching documents.

> #### Priority Queue
>
> A _priority queue_ is just a sorted list that holds the _top-n_ matching documents. The size of the priority queue depends on the pagination parameters `from` and `size`. For example, the following search request would require a priority queue big enough to hold 100 documents:
>
> ```js
> GET /_search
> {
>     "from": 90,
>     "size": 10
> }
> ```

The query phase is depicted in Figure 1.

![Query phase of distributed search](https://box.kancloud.cn/65b842360d518f3125582e51d79b4062_750x337.png)

Figure 1. Query phase of distributed search

The query phase consists of the following three steps:

1. The client sends a `search` request to `Node 3`, which creates an empty priority queue of size `from + size`.
2. `Node 3` forwards the search request to a primary or replica copy of every shard in the index. Each shard executes the query locally and adds the results to a local sorted priority queue of size `from + size`.
3. Each shard returns the doc IDs and sort values of all the documents in its priority queue to the coordinating node, `Node 3`, which merges these values into its own priority queue to produce a globally sorted list of results.

When a search request is sent to a node, that node becomes the _coordinating node_. It is the job of this node to broadcast the search request to all involved shards, and to gather their responses into a globally sorted result set that it can return to the client.

The first step is to broadcast the request to a copy of every shard in the index. Just like document `GET` requests, search requests can be handled by a primary shard or by any of its replicas. This is how more replicas (when combined with more hardware) can increase search throughput. On subsequent requests, the coordinating node round-robins through all shard copies to spread the load.

Each shard executes the query locally and builds a sorted priority queue of length `from + size`: in other words, enough results to satisfy the global search request all by itself. It returns a lightweight list of results to the coordinating node, containing just the doc IDs and any values required for sorting, such as the `_score`.
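To make the shard-local step concrete, here is a minimal sketch in plain Python (not Elasticsearch's actual implementation; the `shard_top_n` helper and the sample hits are invented for illustration) of how a shard might keep only its `from + size` best-scoring hits:

```python
import heapq

# Hypothetical shard-local matches: each hit carries its doc ID plus
# the values needed for sorting (here, just _score).
shard_hits = [
    {"_id": "doc-1", "_score": 0.92},
    {"_id": "doc-2", "_score": 1.37},
    {"_id": "doc-3", "_score": 0.41},
]

def shard_top_n(hits, from_, size):
    """Keep only this shard's from + size best hits, sorted by _score
    descending. Fewer would not be safe: any one of them could still
    land on the requested page of the *global* result."""
    return heapq.nlargest(from_ + size, hits, key=lambda h: h["_score"])

# A request with from=90, size=10 forces every shard to rank and
# return up to 100 hits, even though the client will only see 10.
local_queue = shard_top_n(shard_hits, from_=90, size=10)
```

This is also why deep pagination gets expensive: every shard must rank `from + size` hits regardless of which page the client actually wants.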
The coordinating node merges these shard-level results into its own sorted priority queue, which represents the globally sorted result set. At this point, the query phase ends. The whole process resembles a merge sort: results are sorted in groups first and then merged together, which makes it a natural fit for this distributed setting. (A sketch of the merge step follows the note below.)

> ### Note
>
> An index can consist of one or more primary shards, so a search request against a single index needs to be able to combine the results from multiple shards. A search against _multiple_ or _all_ indices works in exactly the same way: there are just more shards involved.
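To round out the picture, here is a matching plain-Python sketch of the coordinating node's merge step described above (again an illustration, not Elasticsearch's implementation; `coordinate` is an invented name):

```python
import heapq

def coordinate(per_shard_queues, from_, size):
    """Merge the lightweight per-shard result lists into one global
    ranking, then slice out just the page the client asked for."""
    all_hits = (hit for queue in per_shard_queues for hit in queue)
    merged = heapq.nlargest(from_ + size, all_hits, key=lambda h: h["_score"])
    # With from=90 and size=10, this returns globally ranked hits
    # 91-100; each entry holds only a doc ID and its sort value.
    return merged[from_:from_ + size]

# Each input element would be one shard's local queue, e.g.:
# page = coordinate([queue_shard_0, queue_shard_1, queue_shard_2],
#                   from_=90, size=10)
```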