## EFK, ELK

**author: xiak**
**last update: 2022-10-15 10:12:22**

----

[TOC=3,8]

### Introduction

#### Installing Elasticsearch

Whether you are looking up activity from a specific IP address, analyzing why the number of transaction requests suddenly spiked, or searching for restaurants within a one-kilometre radius, **all of these problems ultimately come down to search**. With Elasticsearch you can store, search, and analyze large volumes of data quickly. [Elastic Stack: Elasticsearch, Kibana, Beats and Logstash | Elastic](https://www.elastic.co/cn/elastic-stack/)

```shell
cd /opt
mkdir elastic
cd elastic
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.5.1-linux-x86_64.tar.gz
tar -xvzf elasticsearch-8.5.1-linux-x86_64.tar.gz
cd elasticsearch-8.5.1
bin/elasticsearch
```

~~~shell
vi config/jvm.options

-Xms500m
-Xmx500m
~~~

```shell
groupadd elsearch
useradd elsearch -g elsearch -p 123456
chown -R elsearch:elsearch elasticsearch-8.5.1
su elsearch
cd elasticsearch-8.5.1
bin/elasticsearch [-d]
```

https://blog.csdn.net/liuxiangke0210/article/details/113992511

~~~
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Elasticsearch security features have been automatically configured!
Authentication is enabled and cluster connections are encrypted.

Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  vw0soI_6WZM2FDi6e1+I

HTTP CA certificate SHA-256 fingerprint:
  b1f6743383728b4d20397c8ab3e14480deb56d48bb77d1c88c1f40e7831f5458

Configure Kibana to use this cluster:
- Run Kibana and click the configuration link in the terminal when Kibana starts.
- Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjUuMSIsImFkciI6WyIxNzIuMTguNzcuMjI6OTIwMCJdLCJmZ3IiOiJiMWY2NzQzMzgzNzI4YjRkMjAzOTdjOGFiM2UxNDQ4MGRlYjU2ZDQ4YmI3N2QxYzg4YzFmNDBlNzgzMWY1NDU4Iiwia2V5IjoiNUF6TnZJUUJDaEU0MnluTEFreE86ZTNNdDJOMjJROUc4Z0hFYkNEblpzZyJ9

Configure other nodes to join this cluster:
- On this node:
  - Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
  - Uncomment the transport.host setting at the end of config/elasticsearch.yml.
  - Restart Elasticsearch.
- On other nodes:
  - Start Elasticsearch with `bin/elasticsearch --enrollment-token <token>`, using the enrollment token that you generated.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
~~~

#### Configuration

Port used: `9200`

~~~
bin/elasticsearch-reset-password -u elastic

Password for the [elastic] user successfully reset.
New value: *******
~~~

~~~
http://106.15.127.163:9200/
curl http://106.15.127.163:9200 -k

name: elastic
password: ******
~~~

~~~json
{
    "name": "iZuf6918brm8qovci6qai3Z",
    "cluster_name": "elasticsearch",
    "cluster_uuid": "dcWP2asXTQeM8VCrNDW2Eg",
    "version": {
        "number": "8.5.1",
        "build_flavor": "default",
        "build_type": "tar",
        "build_hash": "c1310c45fc534583afe2c1c03046491efba2bba2",
        "build_date": "2022-11-09T21:02:20.169855900Z",
        "build_snapshot": false,
        "lucene_version": "9.4.1",
        "minimum_wire_compatibility_version": "7.17.0",
        "minimum_index_compatibility_version": "7.0.0"
    },
    "tagline": "You Know, for Search"
}
~~~

----

vi config/elasticsearch.yml

~~~
# Enable security features
xpack.security.enabled: false

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

network.host: 0.0.0.0
http.port: 9200
~~~

----

### Installing Kibana

~~~
bin/elasticsearch-service-tokens create elastic/kibana kibana-token

warning: ignoring JAVA_HOME=/usr/local/java/jdk-11.0.13+8; using bundled JDK
SERVICE_TOKEN elastic/kibana/kibana-token = AAEAAWVsYXN0aWMva2liYW5hL2tpYmFuYS10b2tlbjp1ZmtBXy1ENFF1Q1Y1bDVnVlNfZkN3
~~~

Port used: `5601`

https://www.elastic.co/cn/downloads/kibana

```shell
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.5.2-linux-x86_64.tar.gz
tar -xvzf kibana-8.5.2-linux-x86_64.tar.gz
cd kibana-8.5.2
bin/kibana --allow-root

# start in the background
nohup ./bin/kibana &
```

~~~
http://106.15.127.163:5601/?code=433185
~~~

- [Index Management - Elastic](http://106.15.127.163:5601/app/management/data/index_management/indices)
- [Data Views - Elastic](http://106.15.127.163:5601/app/management/kibana/dataViews)
- [Discover - Elastic](http://106.15.127.163:5601/app/discover)
- [Logs | Stream - Kibana](http://106.15.127.163:5601/app/logs/stream)
- [Console - Dev Tools - Elastic](http://106.15.127.163:5601/app/dev_tools#/console)

----

### Installing Logstash

Port used: `5044`

https://www.elastic.co/cn/downloads/logstash

~~~
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.5.2-linux-x86_64.tar.gz
tar -xvzf logstash-8.5.2-linux-x86_64.tar.gz
cd logstash-8.5.2

# check that it works
bin/logstash -e 'input { stdin { } } output { stdout {} }'
~~~

vim config/sc.conf

~~~
input {
  stdin {}
}
output {
  stdout {
    codec => rubydebug {}
  }
  elasticsearch {
    hosts => "127.0.0.1:9200"
  }
}
~~~

```shell
nohup bin/logstash -f ../config/sc.conf > sc.log 2>&1 &
```

vi config/logstash-sample2.conf

~~~
input {
  file {
    path => ['/opt/elastic/elasticsearch-8.5.1/logs/*.log']
    type => 'es_log'
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "elasticsearch_log-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
~~~

```shell
bin/logstash -f /opt/elastic/logstash-8.5.2/config/logstash-sample2.conf

nohup bin/logstash -f /opt/elastic/logstash-8.5.2/config/logstash-sample3.conf > /opt/logstash.log 2>&1 &
```

> This raises a problem, though: if a plugin has to be added to Logstash, it has to be added everywhere, which is a real burden for operations. That is why Filebeat, mentioned above, exists: it uses few resources and only collects logs, doing nothing else, so it stays lightweight, while Logstash is pulled out to do filtering and similar processing. [ELK detailed installation tutorial - CSDN blog](https://blog.csdn.net/weixin_40920359/article/details/126240405)

----

### Installing Filebeat

https://www.elastic.co/cn/downloads/beats/filebeat

```shell
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.5.2-linux-x86_64.tar.gz
tar -xvzf filebeat-8.5.2-linux-x86_64.tar.gz
cd filebeat-8.5.2-linux-x86_64
./filebeat -e -c filebeat.yml
```

```shell
nohup ./filebeat -e -c filebeat3.yml > /opt/filebeat.log 2>&1 &
```

> Solves the problem that even with nohup the process does not stay resident in the background and exits when the terminal is closed.

https://www.cnblogs.com/luoyunfei99/articles/16188714.html

~~~
vi /etc/systemd/system/filebeat.service

[Unit]
Description=Filebeat is a lightweight shipper for metrics.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]
Environment="LOG_OPTS=-e"
Environment="CONFIG_OPTS=-c /opt/filebeat-8.5.2-linux-x86_64/filebeat3.yml"
ExecStart=/opt/filebeat-8.5.2-linux-x86_64/filebeat_0.4.1_linux_amd64 $LOG_OPTS $CONFIG_OPTS
Restart=always
# StandardOutput takes a systemd keyword, not a bare path; append: needs systemd >= 240
StandardOutput=append:/opt/filebeat.log

[Install]
WantedBy=multi-user.target
~~~

~~~
chmod +x /etc/systemd/system/filebeat.service

systemctl daemon-reload
systemctl enable filebeat
systemctl start filebeat
systemctl restart filebeat
systemctl stop filebeat
systemctl status filebeat

ps -ef | grep filebeat
~~~

~~~
vi filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/myweb/apps_share_data/admin.api.test.xxx.cn/runtime/log/admin/smartpark/202212/*.log

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["106.15.127.163:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "http"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "****"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["106.15.127.163:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
~~~

----

### Installing logstash-input-pulsar and pulsar-beat-output

[Analysis of Filebeat data duplication, truncation, and loss - NYC's Blog](http://niyanchun.com/filebeat-truncate-bug.html)

[logstash (filebeat) pushing duplicate data - Elastic Chinese community](https://elasticsearch.cn/question/4622)

> Found the cause: my Filebeat sends data straight to Logstash, and Logstash's IO hit 100%, so receiving blocked. Filebeat never got the acknowledgement for the send event and kept retransmitting, which produced the duplicates. Do you know how to turn off Filebeat's acknowledgement mechanism, so that it only sends without waiting for confirmation?
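The `filebeat.yml` above ships to Logstash on port 5044, but the receiving pipeline is not shown. A minimal sketch of the Logstash side might look like this; the Elasticsearch host and the index name are assumptions, not taken from the deployment above:

```conf
# beats.conf - receive events from Filebeat and index them into Elasticsearch
input {
  beats {
    port => 5044                          # must match output.logstash.hosts in filebeat.yml
  }
}
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]    # assumed local node, security disabled as configured above
    index => "filebeat-%{+YYYY.MM.dd}"    # illustrative daily index name
  }
}
```

Run it the same way as the other pipelines, e.g. `nohup bin/logstash -f config/beats.conf > /opt/logstash.log 2>&1 &`.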
Install logstash-input-pulsar:

```shell
cd /opt/elastic
wget https://github.com/streamnative/logstash-input-pulsar/releases/download/2.7.1/logstash-input-pulsar-2.7.1.zip
cd /opt/elastic/logstash-8.5.2
# bin/logstash-plugin install file:///opt/elastic/logstash-input-pulsar-2.7.1.zip
bin/logstash-plugin install file:///opt/elastic/logstash-input-pulsar-2.10.0.0.zip
```

Install pulsar-beat-output:

```
wget https://github.com/streamnative/pulsar-beat-output/releases/download/v0.4.1/filebeat_0.4.1_linux_amd64
mv filebeat filebeat_old
mv filebeat_0.4.1_linux_amd64 filebeat
```

Install the Go language: https://go.dev/doc/install

~~~
wget https://go.dev/dl/go1.19.4.linux-amd64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz

vi /etc/profile
export PATH=$PATH:/usr/local/go/bin
source /etc/profile

go version
~~~

https://goproxy.cn/ Qiniu Cloud - Goproxy.cn mirror

~~~
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
~~~

----

### Log formats

> These logs are all multi-line.

Main application: tp web, daemon, command

~~~
$runtime = /home/myweb/apps_share_data/admin.api.test.xxx.cn/runtime

$runtime/log/admin/smartpark/202212/01.log
$runtime/log/screen/202212/01.log
$runtime/daemon-pulsar-workerman.log
$runtime/log/daemon/iotscene/dispatcher-flow/202212/01_cli.log
$runtime/log/command/system/Apifox/202212/01_cli.log
~~~

gatewayworker

~~~
$runtime = /home/myweb/apps_share_data/yf_iot_gatewayworker/runtime

$runtime/gatewayworker-DeviceApp-gateway-workerman.log
$runtime/gatewayworker-DeviceApp-register-workerman.log
$runtime/gatewayworker-DeviceApp-worker-workerman.log
~~~

~~~
{appname}.api.[test].xxx.cn/[module]/v1.{controller}/{action}

Environments:
[test].xxx.cn
xxx.net

----

tp web:
/home/myweb/apps_share_data/admin*/runtime/log/**/*.log

tp daemon:
/home/myweb/apps_share_data/admin*/runtime/{workerman}.log
/home/myweb/apps_share_data/admin*/runtime/log/daemon/{module}/{worker}/*/*_cli.log

tp command:
/home/myweb/apps_share_data/admin*/runtime/log/command/{module}/{command}/*/*_cli.log

tp pay:
/home/myweb/apps_share_data/admin*/runtime/log/yansongda-pay-log/{channel}-{mch_id}.log

tp sms:
/home/myweb/apps_share_data/admin*/runtime/log/sms/{appname}/easy-sms.log

----

gatewayworker:
/home/myweb/apps_share_data/*gatewayworker/runtime/{gatewayworker}.log
~~~

All logs fall into two categories: web and cli.

**web:** (request) host (domain name, IP), time, request method, URL (appname, module, controller, action), log level, request id, device id, request IP, request referer

**cli:** host (domain name, IP), time, PID, log level, file name (daemon-pulsar-workerman.log, gatewayworker-DeviceApp-gateway-workerman.log), directory (daemon/iotscene/dispatcher-flow, command/system/Apifox)

> Split by application.

~~~
elasticsearch index:

test-web-2022-12
test-cli-2022-12
test-pay-2022-12
test-sms-2022-12

kf-web-2022-12
kf-cli-2022-12

----

Data Views:

test-web
test-cli
kf-web
kf-cli
~~~

~~~
web:
^[
YYYY-MM-DD
ignore: ---------------------------------------------------------------

----

cli:
^[

----

workerman:
^YYYY-MM-DD HH:ii:ss
~~~

----

[ELK detailed installation tutorial - CSDN](https://blog.csdn.net/weixin_40920359/article/details/126240405)

[ELK detailed installation and deployment - CSDN](https://blog.csdn.net/song12345xiao/article/details/125991833)

[Very detailed ELK installation and deployment - modb.pro](https://www.modb.pro/db/109893)

[Logstash configuration explained - CSDN](https://blog.csdn.net/fengyuyeguirenenen/article/details/124036098)

[Filebeat + Logstash configuration - CSDN](https://blog.csdn.net/m0_60491538/article/details/121636766)

[filebeat + logstash configuration - CSDN](https://blog.csdn.net/Baron_ND/article/details/109351279)

> Logstash depends on the JVM and is slow, but it is feature-rich and supports preprocessing data. Filebeat, written in Go, is very lightweight, but it has few features and does not support preprocessing. They are therefore usually combined: deploy Filebeat on every node and push the monitored logs into a Logstash cluster; under heavy traffic, Redis or Kafka is typically added as a data-buffering layer.

[Summary of date/time handling in Logstash - cnblogs](https://www.cnblogs.com/fat-girl-spring/p/13044570.html)

[Several ways Logstash handles date/time - cnblogs](https://www.cnblogs.com/FengGeBlog/p/10559034.html)

[grok, the Logstash workhorse - jianshu](https://www.jianshu.com/p/d3042a08eb5e)

[regex - add the log file name as a field in Logstash with grok - coder.work](https://www.coder.work/article/6685349)
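As a sketch of the grok + date approach covered by the links above: assuming a hypothetical workerman-style line that begins with `YYYY-MM-DD HH:ii:ss` (the real applications' line layout is not shown here), a filter could look like this:

```conf
filter {
  grok {
    # assumed line shape: "2022-12-01 10:00:00 <rest of message>"
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{GREEDYDATA:log_message}" }
  }
  date {
    # replace ingest time with the timestamp parsed out of the log line
    match  => ["log_time", "yyyy-MM-dd HH:mm:ss"]
    target => "@timestamp"
  }
}
```

The field names `log_time`/`log_message` are illustrative; a production pattern would also pull out PID, level, and module per the web/cli field lists above.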
[Matching the log.file.path passed by Filebeat in Logstash - CSDN](https://blog.csdn.net/zsx18273117003/article/details/106383636/)

[Changing Elasticsearch's disk usage watermark - cnblogs](https://www.cnblogs.com/kebibuluan/p/14077043.html)

[Elasticsearch "low disk watermark [85%] exceeded ... replicas will not be assigned to this node" - cnblogs](https://www.cnblogs.com/sonofelice/p/8554887.html)

~~~
[2022-12-04T16:30:02,307][INFO ][o.e.c.r.a.DiskThresholdMonitor] [iZuf6918brm8qovci6qai3Z] low disk watermark [85%] no longer exceeded on [9s7L8PRWRSiDgvnU8LvUYQ][iZuf6918brm8qovci6qai3Z][/opt/elastic/elasticsearch-8.5.1/data] free: 12.5gb[15.9%]
~~~

~~~
https://www.jianshu.com/p/4e4a7450c305

parsers:
- multiline:
    type: pattern
    pattern: '^\['

The pattern means two things:
1. The first line of an event starts with [
2. Any line that does not start with [ is merged into the preceding line
~~~

~~~
https://zhuanlan.zhihu.com/p/141439013

nohup ./filebeat -e -c filebeat.yml -path.data=/opt/data/filebeat >/dev/null 2>&1 &
./filebeat -e -c filebeat3.yml -path.data=/opt/data/filebeat

Stop a running Filebeat process:
ps -ef | grep filebeat
kill -9 <PID>
~~~

~~~
https://discuss.elastic.co/t/filebeat-filestream-input-rereading-rotated-log-files/300038/6

"We had a similar issue when files were resent when Filebeat was restarted and multiple inputs were configured. The trick was to set an ID. But this issue seems completely unrelated unfortunately."

id must be configured; otherwise it interferes with the registry, causing conflicts, lost state, and duplicate collection.
~~~

[Configure inputs | Filebeat Reference [8.5] | Elastic](https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html)

[Grok filter plugin | Logstash Reference [8.5] | Elastic](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html)
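Putting the notes above together — the mandatory input `id` plus the `^\[` multiline pattern — a filestream input could be sketched like this; the id and paths are illustrative:

```yaml
filebeat.inputs:
- type: filestream
  id: admin-web-logs            # unique per input; required to keep registry state consistent
  paths:
    - /home/myweb/apps_share_data/admin*/runtime/log/**/*.log
  parsers:
    - multiline:
        type: pattern
        pattern: '^\['          # an event's first line starts with [
        negate: true
        match: after            # lines not starting with [ are appended to the previous event
```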