## Prerequisite: this solution assumes Elasticsearch is running inside Docker

### Error details

```
[type=circuit_breaking_exception, reason=[parent] Data too large, data for [<http_request>] would be [125643918/119.8mb], which is larger than the limit of [90832896/86.6mb], real usage: [125639936/119.8mb], new bytes reserved: [3982/3.8kb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=3982/3.8kb, accounting=15713/15.3kb]]
ElasticsearchStatusException[Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [<http_request>] would be [125643918/119.8mb],
```

Three numbers in this message matter:

- `would be [125643918/119.8mb]` — the projected total memory if this request were allowed, which exceeds `the limit of [90832896/86.6mb]`, the parent circuit-breaker limit (by default a percentage of the ES heap)
- `real usage: [125639936/119.8mb]` — the memory ES is already using
- `new bytes reserved: [3982/3.8kb]` — the memory this request needs

I searched around and found the following suggested fixes:

### 1. Increase the ES JVM heap size

Edit `/config/jvm.options` and raise these two values:

```
-Xms1g
-Xmx1g
```

After changing them I restarted with `docker restart elasticsearch`, but the error persisted with roughly the same numbers, which means the new heap size never took effect.

### 2. Adjust the circuit-breaker limits

```
PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.fielddata.limit": "60%"
  }
}

PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.request.limit": "40%"
  }
}

PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.total.limit": "70%"
  }
}
```

This did not help either.

### The actual solution

Idea 1: I wondered whether Docker was capping the container's memory, but after checking the docs it turns out Docker does not limit container memory by default.

Idea 2: I ran `ps -fe` to inspect the command line the container was started with, and found `-Xms64m -Xmx128m` in the ES launch command. Those numbers roughly match the memory figures in the error, so the problem had to be the `docker run` command: even though I had raised the heap size in the JVM options file, `docker restart` apparently reuses the original `docker run` arguments, so the new setting was never applied.

Fix: stop and remove the container, then recreate it with the heap size specified explicitly on the command line:

```
docker stop elasticsearch   # stop the container
docker rm elasticsearch     # remove the container

docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms3g -Xmx3g" \
  -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  -d elasticsearch:7.4.2
```
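The relationship between the three numbers can be checked mechanically, which is how you confirm the breaker message is internally consistent before blaming the heap. A minimal Python sketch (the `parse_breaker` helper and its regexes are my own, not part of any Elasticsearch client):

```python
import re

# The circuit-breaker message copied from the error details above.
msg = (
    "[parent] Data too large, data for [<http_request>] would be "
    "[125643918/119.8mb], which is larger than the limit of "
    "[90832896/86.6mb], real usage: [125639936/119.8mb], "
    "new bytes reserved: [3982/3.8kb]"
)

def parse_breaker(message):
    """Pull the raw byte counts out of a circuit_breaking_exception message."""
    return {
        "would_be":   int(re.search(r"would be \[(\d+)/", message).group(1)),
        "limit":      int(re.search(r"limit of \[(\d+)/", message).group(1)),
        "real_usage": int(re.search(r"real usage: \[(\d+)/", message).group(1)),
        "new_bytes":  int(re.search(r"new bytes reserved: \[(\d+)/", message).group(1)),
    }

n = parse_breaker(msg)

# The projected total is current usage plus what this request reserves...
assert n["real_usage"] + n["new_bytes"] == n["would_be"]
# ...and the breaker trips because that projection exceeds the limit.
assert n["would_be"] > n["limit"]

# An 86.6 MB breaker limit implies a heap on the order of 100 MB, nowhere
# near the 1 GB configured in jvm.options -- consistent with the stray
# -Xmx128m later found in the container's launch command.
```

If the asserted arithmetic did not hold, the message would be telling you something other than a simple over-limit request.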