> Translation; original post: https://aggarwalarpit.wordpress.com/2015/12/03/configuring-elk-stack-to-analyse-apache-tomcat-logs/
> * The original author used port 9201; the official default is 9200, and I changed it accordingly.
> * The original author assumes you already have Apache Tomcat installed.
> * Translator's blog: http://www.zimug.com

# Configuring the ELK stack to analyse Apache Tomcat logs

> Posted on December 3, 2015 by Arpit Aggarwal

In this post I will install ElasticSearch, Logstash and Kibana to analyse Apache Tomcat server logs. Before installing, a brief introduction to each component:

* ElasticSearch: a schema-free database with powerful search capabilities. It is easy to scale, indexes every field, and can aggregate and group data.
* Logstash: written in Ruby, it lets us pipe data in and out of anywhere: an ETL pipeline that ingests, transforms, and stores events into ElasticSearch. The packaged build runs on JRuby and uses dozens of threads for parallel data processing, taking advantage of the JVM's threading capabilities.
* Kibana: a web-based data-analysis dashboard tool for ElasticSearch. It makes full use of ElasticSearch's search capabilities to visualise data in seconds, and supports Lucene query-string syntax and Elasticsearch's filter capabilities.

Next, I will install each component of the stack in turn. Here are the steps:

## Step 1:
Download the ElasticSearch .tar.gz and extract it to a directory. I downloaded elasticsearch-2.1.0.tar.gz and extracted it under /Users/ArpitAggarwal/ into a folder named elasticsearch.

## Step 2:
Start the elasticsearch service by running ./elasticsearch from the bin directory, as follows:

```
$ cd /Users/ArpitAggarwal/elasticsearch/elasticsearch-2.1.0/bin
$ ./elasticsearch
```

The elasticsearch instance started above can be reached at [http://localhost:9200/](http://localhost:9200/); the default index listing is at [http://localhost:9200/_cat/indices?v](http://localhost:9200/_cat/indices?v). To delete all indices, run:

```
curl -XDELETE 'http://localhost:9200/*/'
```

## Step 3:
Next we install and configure Kibana to point at our ElasticSearch instance. Again, download the .tar.gz and extract it to a directory; mine is kibana-4.3.0-darwin-x64.tar.gz, extracted under /Users/ArpitAggarwal/ into a folder named kibana.

## Step 4:
Edit the configuration in /Users/ArpitAggarwal/kibana/kibana-4.3.0-darwin-x64/config/kibana.yml to point at the local ElasticSearch instance, replacing the value of elasticsearch.url with http://localhost:9200.

## Step 5:
Start kibana by running ./kibana from the bin directory, as follows:

```
$ cd /Users/ArpitAggarwal/kibana/kibana-4.3.0-darwin-x64/bin
$ ./kibana
```

Kibana is now reachable at [http://localhost:5601/](http://localhost:5601/).

## Step 6:
Next we install and configure Nginx to point at our Kibana instance. Again, download and extract the .tar.gz to a directory; mine is nginx-1.9.6.tar.gz, extracted under /Users/ArpitAggarwal/ into a folder named nginx, with the following commands:

```
$ cd nginx-1.9.6
$ ./configure
$ make
$ make install
```

By default Nginx is installed to /usr/local/nginx, but Nginx provides a way to choose the install directory with the --prefix option, as follows:

```
./configure --prefix=/Users/ArpitAggarwal/nginx
```

Next, open the nginx
configuration file at /Users/ArpitAggarwal/nginx/conf/nginx.conf and replace the location block under server with the content below:

```
location / {
    # point at the local kibana instance
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
```

## Step 7:
Start Nginx, as follows:

```
cd /Users/ArpitAggarwal/nginx/sbin
./nginx
```

Nginx is now reachable at http://localhost.

## Step 8:
Next we install Logstash, with the following commands:

```
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null
brew install logstash
```

The commands above install Logstash to /usr/local/opt/. (Translator's note: this installation method is unusual; consider following the official instructions instead.)

## Step 9:
We need Logstash to ship data from the tomcat server's log directory to ElasticSearch. Create a directory where we will keep the logstash configuration files; I put it under /Users/ArpitAggarwal/, as follows:

```
cd /Users/ArpitAggarwal/
mkdir logstash patterns
cd logstash
touch logstash.conf
cd ../patterns
touch grok-patterns.txt
```

Copy the following content into logstash.conf:

```
input {
  file {
    path => "/Users/ArpitAggarwal/tomcat/logs/*.log*"
    start_position => beginning
    type => "my_log"
  }
}

filter {
  multiline {
    patterns_dir => "/Users/ArpitAggarwal/logstash/patterns"
    pattern => "\[%{TOMCAT_DATESTAMP}"
    what => "previous"
  }
  if [type] == "my_log" and "com.test.controller.log.LogController" in [message] {
    mutate {
      add_tag => [ "MY_LOG" ]
    }
    if "_grokparsefailure" in [tags] {
      drop { }
    }
    date {
      match => [ "timestamp", "UNIX_MS" ]
      target => "@timestamp"
    }
  } else {
    drop { }
  }
}

output {
  stdout { codec => rubydebug }
  if [type] == "my_log" {
    elasticsearch {
      manage_template => false
      host => "localhost"
      protocol => http
      port => "9200"
    }
  }
}
```

Next, copy the content of https://github.com/elastic/logstash/blob/v1.2.2/patterns/grok-patterns into patterns/grok-patterns.txt.

## Step 10:
Check the logstash configuration with the following commands:

```
$ cd /usr/local/opt/
$ logstash -f /Users/ArpitAggarwal/logstash/logstash.conf --configtest --verbose --debug
```

## Step 11:
Start Logstash to ship data to ElasticSearch:

```
$ cd /usr/local/opt/
$ logstash -f /Users/ArpitAggarwal/logstash/logstash.conf
```
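A note on the `%{TOMCAT_DATESTAMP}` reference in the multiline filter: it is not defined in the grok-patterns file linked above, which covers only the base patterns. If Logstash reports an undefined pattern, append a definition to patterns/grok-patterns.txt. The line below is a sketch matching a log4j-style `2015-12-03 10:15:30,123` timestamp; adjust it to your actual Tomcat log format:

```
TOMCAT_DATESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}:?%{SECOND}
```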
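One caveat on versions: the `host`, `protocol`, and `port` options in the elasticsearch output above follow the Logstash 1.x plugin. If brew installs a Logstash 2.x release (likely, given the elasticsearch-2.1.0 used here), those options were replaced by a single `hosts` array, and the output block would instead look roughly like this:

```
output {
  stdout { codec => rubydebug }
  if [type] == "my_log" {
    elasticsearch {
      manage_template => false
      hosts => ["localhost:9200"]
    }
  }
}
```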
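Expanded, a grok pattern like `%{TOMCAT_DATESTAMP}` boils down to an ordinary regular expression, so you can sanity-check the multiline anchor against a sample log line with grep before running Logstash. Both the sample line and the hand-expanded regex below are illustrative assumptions, not taken from the original article:

```shell
# a sample log line beginning with a bracketed timestamp (hypothetical format)
line='[2015-12-03 10:15:30,123] INFO com.test.controller.log.LogController - request handled'
# the multiline pattern "\[%{TOMCAT_DATESTAMP}" expanded to a plain regex
regex='^\[20[0-9]{2}-[0-9]{2}-[0-9]{2} [0-9]{2}:?[0-9]{2}:?[0-9]{2}'
if echo "$line" | grep -Eq "$regex"; then
  # with what => "previous", lines matching this pattern are folded into the previous event
  echo "match"
fi
```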