<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                ??碼云GVP開源項目 12k star Uniapp+ElementUI 功能強大 支持多語言、二開方便! 廣告
                # 安裝EFK插件 我們通過在每臺node上部署一個以DaemonSet方式運行的fluentd來收集每臺node上的日志。Fluentd將docker日志目錄`/var/lib/docker/containers`和`/var/log`目錄掛載到Pod中,然后Pod會在node節點的`/var/log/pods`目錄中創建新的目錄,可以區別不同的容器日志輸出,該目錄下有一個日志文件鏈接到`/var/lib/docker/contianers`目錄下的容器日志輸出。 官方文件目錄:`cluster/addons/fluentd-elasticsearch` ``` bash $ ls *.yaml es-controller.yaml es-service.yaml fluentd-es-ds.yaml kibana-controller.yaml kibana-service.yaml efk-rbac.yaml ``` 同樣EFK服務也需要一個`efk-rbac.yaml`文件,配置serviceaccount為`efk`。 已經修改好的 yaml 文件見:[../manifests/EFK](https://github.com/rootsongjc/kubernetes-handbook/blob/master/manifests/EFK) ## 配置 es-controller.yaml ``` bash $ diff es-controller.yaml.orig es-controller.yaml 24c24 < - image: gcr.io/google_containers/elasticsearch:v2.4.1-2 --- > - image: harbor-001.jimmysong.io/library/elasticsearch:v2.4.1-2 ``` ## 配置 es-service.yaml 無需配置; ## 配置 fluentd-es-ds.yaml ``` bash $ diff fluentd-es-ds.yaml.orig fluentd-es-ds.yaml 26c26 < image: gcr.io/google_containers/fluentd-elasticsearch:1.22 --- > image: harbor-001.jimmysong.io/library/fluentd-elasticsearch:1.22 ``` ## 配置 kibana-controller.yaml ``` bash $ diff kibana-controller.yaml.orig kibana-controller.yaml 22c22 < image: gcr.io/google_containers/kibana:v4.6.1-1 --- > image: harbor-001.jimmysong.io/library/kibana:v4.6.1-1 ``` ## 給 Node 設置標簽 定義 DaemonSet `fluentd-es-v1.22` 時設置了 nodeSelector `beta.kubernetes.io/fluentd-ds-ready=true` ,所以需要在期望運行 fluentd 的 Node 上設置該標簽; ``` bash $ kubectl get nodes NAME STATUS AGE VERSION 172.20.0.113 Ready 1d v1.6.0 $ kubectl label nodes 172.20.0.113 beta.kubernetes.io/fluentd-ds-ready=true node "172.20.0.113" labeled ``` 給其他兩臺node打上同樣的標簽。 ## 執行定義文件 ``` bash $ kubectl create -f . serviceaccount "efk" created clusterrolebinding "efk" created replicationcontroller "elasticsearch-logging-v1" created service "elasticsearch-logging" created daemonset "fluentd-es-v1.22" created deployment "kibana-logging" created service "kibana-logging" created ``` ## 檢查執行結果 ``` bash $ kubectl get deployment -n kube-system|grep kibana kibana-logging 1 1 1 1 2m $ kubectl get pods -n kube-system|grep -E 'elasticsearch|fluentd|kibana' elasticsearch-logging-v1-mlstp 1/1 Running 0 1m elasticsearch-logging-v1-nfbbf 1/1 Running 0 1m fluentd-es-v1.22-31sm0 1/1 Running 0 1m fluentd-es-v1.22-bpgqs 1/1 Running 0 1m fluentd-es-v1.22-qmn7h 1/1 Running 0 1m kibana-logging-1432287342-0gdng 1/1 Running 0 1m $ kubectl get service -n kube-system|grep -E 'elasticsearch|kibana' elasticsearch-logging 10.254.77.62 <none> 9200/TCP 2m kibana-logging 10.254.8.113 <none> 5601/TCP 2m ``` kibana Pod 第一次啟動時會用較長時間(10-20分鐘)來優化和 Cache 狀態頁面,可以 tailf 該 Pod 的日志觀察進度: ``` bash $ kubectl logs kibana-logging-1432287342-0gdng -n kube-system -f ELASTICSEARCH_URL=http://elasticsearch-logging:9200 server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging {"type":"log","@timestamp":"2017-04-12T13:08:06Z","tags":["info","optimize"],"pid":7,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"} {"type":"log","@timestamp":"2017-04-12T13:18:17Z","tags":["info","optimize"],"pid":7,"message":"Optimization of bundles for kibana and statusPage complete in 610.40 seconds"} {"type":"log","@timestamp":"2017-04-12T13:18:17Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:18Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["listening","info"],"pid":7,"message":"Server running at http://0.0.0.0:5601"} {"type":"log","@timestamp":"2017-04-12T13:18:24Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} {"type":"log","@timestamp":"2017-04-12T13:18:29Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"} ``` ## 訪問 kibana 1. 通過 kube-apiserver 訪問: 獲取 monitoring-grafana 服務 URL ``` bash $ kubectl cluster-info Kubernetes master is running at https://172.20.0.113:6443 Elasticsearch is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging Heapster is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/heapster Kibana is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging KubeDNS is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns kubernetes-dashboard is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard monitoring-grafana is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana monitoring-influxdb is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb ``` 瀏覽器訪問 URL: `https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana` 2. 通過 kubectl proxy 訪問: 創建代理 ``` bash $ kubectl proxy --address='172.20.0.113' --port=8086 --accept-hosts='^*$' Starting to serve on 172.20.0.113:8086 ``` 瀏覽器訪問 URL:`http://172.20.0.113:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging` 在 Settings -> Indices 頁面創建一個 index(相當于 mysql 中的一個 database),選中 `Index contains time-based events`,使用默認的 `logstash-*` pattern,點擊 `Create` ; **可能遇到的問題** 如果你在這里發現Create按鈕是灰色的無法點擊,且Time-filed name中沒有選項,fluentd要讀取`/var/log/containers/`目錄下的log日志,這些日志是從`/var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log`鏈接過來的,查看你的docker配置,`—log-dirver`需要設置為**json-file**格式,默認的可能是**journald**,參考[docker logging](https://docs.docker.com/engine/admin/logging/overview/#examples)。 ![es-setting](https://box.kancloud.cn/f2517d06d97cbc03f8d8ac15d6b2efbc_2554x1318.png) 創建Index后,可以在 `Discover` 下看到 ElasticSearch logging 中匯聚的日志; ![es-home](https://box.kancloud.cn/893af0dd8057e7cf28bb5d4df7f1adc2_3230x1850.jpg)
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看