### k8s service log collection generally uses one of two modes

**SideCar mode**

Every Pod carries an additional logging container that collects the logs of the containers inside that Pod.

Drawbacks:
1. Higher resource usage, in both CPU and memory.
2. Uses up many backend connections; the larger the cluster, the larger the potential problem.

**Node mode**

Only one logging container is deployed per Node, and it collects the logs of all containers on that Node.

Advantages:
1. Low resource usage, and the advantage grows with cluster size.
2. It is the community-recommended mode.

Drawback: it needs a smarter logging agent to cooperate.

The architecture of the two modes:

![](https://img.kancloud.cn/e0/5e/e05e0c62e4c514d93aae85e4b4681d73_1374x1080.png)

This article focuses on the Node mode.

For the kafka + ELK deployment, refer to this post: https://blog.51cto.com/qwer/2607037

Next, deploy the logging agent service:

log-pilot-kafka.yaml

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-pilot2-configuration
  #namespace: ns-elastic
data:
  logging_output: "kafka"
  kafka_brokers: "10.4.7.104:9092"
  kafka_version: "0.10.0"
  # configure all valid topics in kafka
  # when disable auto-create topic
  kafka_topics: "tomcat-syslog,tomcat-access"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-pilot2
  #namespace: ns-elastic
  labels:
    k8s-app: log-pilot2
spec:
  selector:
    matchLabels:
      k8s-app: log-pilot2
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: log-pilot2
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: log-pilot2
        #
        # wget https://github.com/AliyunContainerService/log-pilot/archive/v0.9.7.zip
        # unzip log-pilot-0.9.7.zip
        # vim ./log-pilot-0.9.7/assets/filebeat/config.filebeat
        # ...
        # output.kafka:
        #     hosts: [$KAFKA_BROKERS]
        #     topic: '%{[topic]}'
        #     codec.format:
        #         string: '%{[message]}'
        # ...
        image: registry.cn-hangzhou.aliyuncs.com/acs/log-pilot:0.9.7-filebeat
        env:
        - name: "LOGGING_OUTPUT"
          valueFrom:
            configMapKeyRef:
              name: log-pilot2-configuration
              key: logging_output
        - name: "KAFKA_BROKERS"
          valueFrom:
            configMapKeyRef:
              name: log-pilot2-configuration
              key: kafka_brokers
        - name: "KAFKA_VERSION"
          valueFrom:
            configMapKeyRef:
              name: log-pilot2-configuration
              key: kafka_version
        - name: "NODE_NAME"
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: sock
          mountPath: /var/run/docker.sock
        - name: logs
          mountPath: /var/log/filebeat
        - name: state
          mountPath: /var/lib/filebeat
        - name: root
          mountPath: /host
          readOnly: true
        - name: localtime
          mountPath: /etc/localtime
        # configure all valid topics in kafka
        # when disable auto-create topic
        - name: config-volume
          mountPath: /etc/filebeat/config
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
      terminationGracePeriodSeconds: 30
      volumes:
      - name: sock
        hostPath:
          path: /var/run/docker.sock
          type: Socket
      - name: logs
        hostPath:
          path: /var/log/filebeat
          type: DirectoryOrCreate
      - name: state
        hostPath:
          path: /var/lib/filebeat
          type: DirectoryOrCreate
      - name: root
        hostPath:
          path: /
          type: Directory
      - name: localtime
        hostPath:
          path: /etc/localtime
          type: File
      # kubelet sync period
      - name: config-volume
        configMap:
          name: log-pilot2-configuration
          items:
          - key: kafka_topics
            path: kafka_topics
```

Then deploy a Tomcat workload to verify that collection works:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
      containers:
      - name: tomcat
        image: "tomcat:7.0"
        env:
        # Note 1: add the corresponding environment variables (two log sources are collected
        # below: 1. stdout  2. /usr/local/tomcat/logs/catalina.*.log)
        - name: aliyun_logs_tomcat-syslog    # if the logs went to ES, the index name would be tomcat-syslog
          value: "stdout"
        - name: aliyun_logs_tomcat-access    # if the logs went to ES, the index name would be tomcat-access
          value: "/usr/local/tomcat/logs/catalina.*.log"
        volumeMounts:
        # Note 2: the business log directories to collect inside the Pod must be shared via a
        # volume; log files under multiple directories can be collected
        - name: tomcat-log
          mountPath: /usr/local/tomcat/logs
      volumes:
      - name: tomcat-log
        emptyDir: {}
```
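A minimal sketch of rolling out and sanity-checking both manifests, assuming they are saved as `log-pilot-kafka.yaml` and `tomcat.yaml` (the second filename is only an assumption for this example; the post names just the first one):

```
# Deploy the logging agent (ConfigMap + DaemonSet) and the test Tomcat Deployment
kubectl apply -f log-pilot-kafka.yaml
kubectl apply -f tomcat.yaml   # assumed filename for the Tomcat manifest above

# log-pilot2 should run one Pod per Node; tomcat should show 1/1 ready
kubectl get daemonset log-pilot2 -o wide
kubectl get pods -l k8s-app=log-pilot2 -o wide
kubectl get pods -l app=tomcat

# The agent's own log shows whether it picked up the aliyun_logs_* variables
# and started shipping to the configured Kafka brokers
kubectl logs -l k8s-app=log-pilot2 --tail=50
```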
Once both are deployed, verify on the Kafka side.

List the topics created in Kafka:

```
/opt/kafka/bin/kafka-topics.sh --zookeeper 10.4.7.104:2181 --list
```

Consume the data in a topic:

```
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.4.7.104:9092 --topic tomcat-access --from-beginning
```
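The ConfigMap lists tomcat-syslog and tomcat-access because, when the broker has topic auto-creation disabled, the topics must already exist before filebeat can publish to them. A rough sketch of creating them by hand; the partition and replication-factor values are arbitrary choices for a single-broker test setup, not values from the original post:

```
/opt/kafka/bin/kafka-topics.sh --zookeeper 10.4.7.104:2181 --create \
  --topic tomcat-syslog --partitions 1 --replication-factor 1
/opt/kafka/bin/kafka-topics.sh --zookeeper 10.4.7.104:2181 --create \
  --topic tomcat-access --partitions 1 --replication-factor 1
```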