<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                企業??AI智能體構建引擎,智能編排和調試,一鍵部署,支持知識庫和私有化部署方案 廣告
                > ##### [感謝TTA0168](https://zhidao.baidu.com/question/1174224044228167419.html "參考站點") > ##### [感謝Mr. Hu](http://www.cnblogs.com/huhangfei/p/6904994.html "感謝Mr. Hu") > ##### [感謝irow10](http://irow10.blog.51cto.com/ "感謝irow10") > ##### [感謝石瞳禪](http://www.cnblogs.com/stozen/p/5638369.html "感謝石瞳禪"),grok各個規則中文注釋 > ##### [感謝飛走不可](http://www.cnblogs.com/hanyifeng/p/5871150.html "感謝飛走不可"),Kibana修改字段單位 寫在前面:有時候我們需要分析請求的URL或請求的字符串(?后面的部分),這部分我是在Tomcat中分析,因為Tomcat有專門的query?string字段,不需要單獨拆解Nginx日志,所以在Nginx中,我只保留了http請求的版本 對于json過濾,grok過濾等不同的過濾方法,我是這樣認為的,grok是萬能的,但是如果原始數據能夠json化,那優先使用json ### Logstash配置(直接處理轉發ES) ```yaml input { redis { host => "192.168.0.106" port => "6400" db => 0 key => "filebeat" password => "ding" data_type => "list" } } filter { if [type] == "proxy-nginx-accesslog" { json { source => "message" remove_field => [ "message" ] } mutate { split => { "request" => " " } } mutate { add_field => { "httpversion" => "%{[request][2]}" } } geoip { source => "xff" database => "/etc/logstash/GeoLite2-City.mmdb" fields => ["city_name", "continent_code", "country_code2", "country_code3", "country_name", "dma_code", "ip", "latitude", "longitude", "postal_code", "region_name", "timezone", "location"] remove_field => [ "[geoip][latitude]", "[geoip][longitude]" ] target => "geoip" } } if [type] == "nginx-accesslog" { json { source => "message" remove_field => [ "message" ] } mutate { split => { "request" => " " } } mutate { add_field => { "httpversion" => "%{[request][2]}" } } mutate { split => { "xff" => "," } } mutate { add_field => { "realip" => "%{[xff][0]}" } } geoip { source => "realip" database => "/etc/logstash/GeoLite2-City.mmdb" fields => ["city_name", "continent_code", "country_code2", "country_code3", "country_name", "dma_code", "ip", "latitude", "longitude", "postal_code", "region_name", "timezone", "location"] remove_field => [ "[geoip][latitude]", "[geoip][longitude]" ] target => "geoip" } } if [type] == "tomcat-accesslog" { json { source => "message" remove_field => [ "message" ] } mutate { split => { "method" => " " } } mutate { add_field => { "request_method" => "%{[method][0]}" "request_url" => "%{[method][1]}" "httpversion" => "%{[method][2]}" } } mutate { remove_field => [ "method" ] } } mutate { convert => [ "status", "integer" ] convert => [ "body_bytes_sent", "integer" ] convert => [ "request_time", "float" ] convert => [ "send bytes", "integer" ] } } output { if [type] == "proxy-nginx-accesslog" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "logstash-proxy-nginx-accesslog-%{+YYYY.MM.dd}" } } if [type] == "proxy-nginx-errorlog" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "logstash-proxy-nginx-errorlog-%{+YYYY.MM.dd}" } } if [type] == "nginx-accesslog" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "logstash-nginx-accesslog-%{+YYYY.MM.dd}" } } if [type] == "nginx-errorlog" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "logstash-nginx-errorlog-%{+YYYY.MM.dd}" } } if [type] == "systemlog" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "systemlog-%{+YYYY.MM.dd}" } } if [type] == "tomcat-catalina" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "tomcat-cataline-%{+YYYY.MM.dd}" } } if [type] == "tomcat-ding-info" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "tomcat-ding-info-%{+YYYY.MM.dd}" } } if [type] == "tomcat-dinge-error" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "tomcat-ding-error-%{+YYYY.MM.dd}" } } if [type] == "tomcat-accesslog" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "tomcat-accesslog-%{+YYYY.MM.dd}" } } } ``` 因為拆分了字段,原始字段就可以刪除,這里要注意需要單獨寫刪除代碼跨,這里涉及優先級的問題,具體問題可以自己嘗試。 如果Nginx上層有代理的話,xff字段中會是多個IP,我選擇拆分字段,后保留第一個IP,但保留原始字段。 ### Logstash配置(消費Redis數據) ```yaml input { redis { host => "192.168.0.106" port => "6400" db => 0 key => "filebeat" password => "ding" data_type => "list" } } ``` 其他配置同上,寫入配置請看《日志收集配置》章節 Logstash會根據filebeat中數據的type進行分析,不需要改動 ### Logstash配置(分析IIS日志) ```yaml input { beats { port => 5045 } } filter { if [type] == "iislog" { grok { match => {"message" => "%{TIMESTAMP_ISO8601:log_timestamp} (%{NOTSPACE:s_sitename}|-) (%{NOTSPACE:s_computername}|-) (%{IPORHOST:s_ip}|-) (%{WORD:cs_method}|-) %{NOTSPACE:cs_uri_stem} %{NOTSPACE:cs_uri_query} (%{NUMBER:s_port}|-) (%{NOTSPACE:cs_username}|-) (%{IPORHOST:c_ip}|-) (?:HTTP/%{NUMBER:http_version}) %{NOTSPACE:cs_useragent} (%{GREEDYDATA:cs_cookie}| -) (%{NOTSPACE:cs_referer}|-) %{NOTSPACE:cs_host} (%{NUMBER:sc_status}|-) (%{NUMBER:sc_substatus}|-) (%{NUMBER:sc_win32_status}|-) (%{NUMBER:sc_bytes}|-) (%{NUMBER:cs_bytes}|-) (%{NUMBER:time_taken}|-)"} add_tag => "iis" remove_field => ["message", "@version"] } date { match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ] timezone => "Etc/GMT" } useragent { source => "cs_useragent" target => "ua" remove_field => ["cs_useragent"] } geoip { source => "c_ip" database => "/etc/logstash/GeoLite2-City.mmdb" fields => ["city_name", "continent_code", "country_code2", "country_code3", "country_name", "dma_code", "ip", "latitude", "longitude", "postal_code", "region_name", "timezone", "location"] remove_field => [ "[geoip][latitude]", "[geoip][longitude]" ] target => "geoip" } mutate { convert => [ "sc_bytes", "integer" ] convert => [ "cs_bytes", "integer" ] convert => [ "time_taken", "float" ] convert => [ "sc_status", "integer" ] convert => [ "s_port", "integer" ] } } } output { if [type] == "iislog" { elasticsearch { hosts => ["192.168.0.231:9200", "192.168.0.232:9200"] index => "logstash-iislog-%{+YYYY.MM.dd}" } } } ``` #### 經驗1:減少配置內容,更加易讀 gork中的匹配規則一旦固定下下,最終放到指定目錄中,配置中直接調用 ```shell /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.1.2/patterns/iis ``` 實際內容 ```shell IIS_LOG %{TIMESTAMP_ISO8601:log_timestamp} ...略... (%{NUMBER:time_taken}|-) ``` Logstash中調用 ```shell match => { "message" => "%{IIS_LOG}" } ``` #### 經驗2:日志過濾 IIS日志中的前4行是"#"開頭,減少Logstash的工作,在Filebeat中配置 #### 經驗3:grok的調試 ```shell 調試站點:http://grokdebug.herokuapp.com ``` ##### grok排錯思路: - ##### 注意匹配時,字段間的空格 - ##### 要考慮字段默認值 - ##### 了解內置正則含義,請參考文檔開始的鏈接 - ##### 注意一定要嘗試多樣本,生產數據 ```shell 比如cookie,最終,(%{NOTSPACE:cookie}|-)生產不適用,(%{GREEDYDATA:cookie}|-)生產適用 ``` #### 經驗4:其他 logstash有時候關的慢,因為在處理數據,等待一會就好了 沒用的字段去掉,但要注意先后順序,拆分后再刪除 時間的處理需要使用Logstash的plugins-filters-date插件 #### 經驗5:IIS日志時區問題 >IIS日志時間為什么晚八小時的原因? 這要從W3C標準說起,W3C是按照GMT時間進行記錄的,IIS默認的日志格式就是W3C標準日志文件。北京時間是東八時區,也就是GMT+8,IIS在記錄時間時就會在系統時間基礎上加上8小時,所以,你那服務器的日志文件記錄的時間久延后了八個小時,但實際上仍然是實時記錄的。解決這個問題的方法是,依次打開Internet信息服務(IIS)---Internet信息服務----本地計算機---網站,右擊子項中相應的網站(如果要設置所有的網站,則直接在網站上點擊),選擇屬性,在網站標簽下找到活動日志格式,打開IIS日志屬性,再選擇常規,最后勾選文件命名和創建使用當地時間 設置當前時間為timezone => "Etc/GMT",kibana會自動根據當前時區轉換時間。 > #### [常用時區](http://php.net/manual/zh/timezones.others.php "常用時區") #### 經驗6:注意字段的轉換 后期使用Kibana出圖時,如果想使用范圍、計算,有些字段需要轉換成integer 可過濾時轉換,也可以結尾同意轉換 如果是計算網絡流量,還需要在kibana中設置字段單位,具體可參照開頭部分的站點。 #### 經驗7:批量刪除索引 ```shell curl -XDELETE 'http://192.168.0.230:9200/abcddd' ``` [https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html) #### 經驗8:Logstash和Filebeat重啟順序 建議先停止Filebeat,再重啟Logstash
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看