<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                ThinkChat2.0新版上線,更智能更精彩,支持會話、畫圖、視頻、閱讀、搜索等,送10W Token,即刻開啟你的AI之旅 廣告
[TOC]

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.

Here is how Filebeat works: when you start Filebeat, it starts one or more inputs that look in the locations you have specified for log data. For each log that Filebeat locates, it starts a harvester. Each harvester reads a single log for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output you have configured for Filebeat.

![](https://img.kancloud.cn/5a/67/5a675caf5d7a64dc6f10e83f1c241934_705x584.png)

# Running locally

## Download the package

```shell
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.2-linux-x86_64.tar.gz
tar xf filebeat-7.17.2-linux-x86_64.tar.gz -C /opt/
```

## Configuration

### Output to Elasticsearch

```shell
[elk@elk01 ~]$ sudo cat /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
# --------------- input ------------------------------
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    name: messages
- type: log
  enabled: true
  paths:
    - /data/k8s/logs/kubernetes.audit
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: log
  fields:
    name: k8sAudit
# --------------- processors ------------------------------
processors:
  - add_tags:
      target: "environment"
      tags: ["kubernetes", "production"]
# --------------- output ------------------------------
output.elasticsearch:
  hosts: ["192.168.31.29:9200", "192.168.31.193:9200", "192.168.31.120:9200"]
  indices:
    - index: "messages-%{+yyyy.MM}"
      when.equals:
        fields.name: "messages"
    - index: "k8s-audit-%{+yyyy.MM}"
      when.equals:
        fields.name: "k8sAudit"
# --------------- setup ------------------------------
setup.ilm.enabled: false
setup.dashboards.enabled: false
```

### Sending data to Logstash

```shell
[kafka@elk02 ~]$ sudo egrep -v '^ {,5}#|^$' /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
output.logstash:
  hosts: ["10.0.0.129:5044"]
```

### Sending data to Kafka

```shell
[kafka@elk02 ~]$ sudo egrep -v "^$|^ {,5}#" /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
fields: {log_topic: "elk"}
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
output.kafka:
  hosts: ["10.0.0.127:9092", "10.0.0.128:9092", "10.0.0.129:9092"]
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: true
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
```
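To confirm that events are actually reaching Kafka, you can read a few of them back with the console consumer that ships with Kafka. A minimal check, assuming the Kafka `bin` scripts are on the `PATH` (the broker address and the `elk` topic both come from the configuration above):

```shell
# Read the first few events Filebeat published, then exit
kafka-console-consumer.sh \
  --bootstrap-server 10.0.0.127:9092 \
  --topic elk \
  --from-beginning \
  --max-messages 5
```

Each message is a JSON document with the raw log line under `message` plus the metadata Filebeat adds.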
### Hot-reloading the configuration

```shell
# Input sources and the bundled modules can be hot-reloaded;
# changes to the main configuration file only take effect after a restart.
[kafka@elk02 filebeat-7.17.2-linux-x86_64]$ sudo egrep -v "^$|^ {,5}#" /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
filebeat.config.inputs:
  enabled: true
  path: configs/*.yml
  reload.enabled: true
  reload.period: 10s
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
output.kafka:
  enabled: true
  hosts: ["10.0.0.127:9092", "10.0.0.128:9092", "10.0.0.129:9092"]
  topic: 'logstash'
  partition.round_robin:
    reachable_only: true
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
# Reloadable input files contain a bare list of input definitions
# (no filebeat.inputs key) and must not be writable by other users.
[kafka@elk02 filebeat-7.17.2-linux-x86_64]$ cat configs/nginx.yml
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
[kafka@elk02 filebeat-7.17.2-linux-x86_64]$ chmod 644 configs/nginx.yml
[kafka@elk02 filebeat-7.17.2-linux-x86_64]$ sudo chown root configs/nginx.yml
```

## Set permissions

```shell
sudo chown -R elk.elk /opt/filebeat-7.17.2-linux-x86_64
sudo chown root /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
```

## Create directories

```shell
mkdir /opt/filebeat-7.17.2-linux-x86_64/{logs,pid}
```

## Start the service

```shell
cd /opt/filebeat-7.17.2-linux-x86_64/
nohup sudo ./filebeat -e &>> logs/filebeat-server-`date "+%Y%m%d"`.log &
echo $! > pid/filebeat.pid
```

## Stop the service

```shell
cat /opt/filebeat-7.17.2-linux-x86_64/pid/filebeat.pid | xargs -I {} sudo kill {}
```

# Running in containers

The RBAC permissions `filebeat` needs in order to collect logs; save as `rbac.yml`:

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - configmaps
  resourceNames:
  - kubeadm-config
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
```

Every host needs a `filebeat` container to collect its logs, so Filebeat runs as a DaemonSet; save as `daemonset.yml`:

```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.17.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: mainconfig
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: config
          mountPath: /usr/share/filebeat/configs
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: mainconfig
        configMap:
          defaultMode: 0640
          name: filebeat-main-config
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
```
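The `data` hostPath above is what makes Pod restarts safe: Filebeat keeps a registry of how far it has read each file, and because the registry lives on the node rather than in the container filesystem, a recreated Pod resumes where it left off instead of reshipping everything. A quick way to inspect it, assuming the 7.x on-disk layout where the registry is an append-only log under `registry/filebeat/`:

```shell
# Run on any node once the DaemonSet is up; the registry survives Pod restarts
sudo ls -l /var/lib/filebeat-data/registry/filebeat/
# Each entry records a source file and the byte offset already shipped
sudo tail -n 2 /var/lib/filebeat-data/registry/filebeat/log.json
```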
The main configuration file `filebeat` runs with. Because the input sources live in separate files, they can later be changed without restarting Filebeat. Save as `config-kafka-main.yml`:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-main-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config.inputs:
      enabled: true
      path: configs/*.yml
      reload.enabled: true
      reload.period: 10s
    output.kafka:
      hosts: ["192.168.31.235:9092", "192.168.31.202:9092", "192.168.31.140:9092"]
      topics:
        - topic: 'messages'
          when.equals:
            fields.type: messages
        - topic: 'k8s-audit'
          when.equals:
            fields.type: k8s-audit
      partition.round_robin:
        reachable_only: true
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
```

The actual paths to collect logs from are defined here; save as `config-kafka.yml`:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  log.yml: |-
    - type: log
      enabled: true
      fields:
        type: messages
      paths:
        - /var/log/messages
    - type: log
      enabled: true
      fields:
        type: k8s-audit
      paths:
        - /data/k8s/logs/kube-apiserver/kubernetes.audit
```

Start the Filebeat service:

```shell
kubectl apply -f rbac.yml
kubectl apply -f config-kafka-main.yml
kubectl apply -f config-kafka.yml
kubectl apply -f daemonset.yml
```
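After applying the manifests, it is worth checking that the DaemonSet scheduled a Pod on every node and that the reloadable inputs from `configs/*.yml` were picked up. A minimal check using only the names defined above:

```shell
# A DaemonSet should report one ready Pod per schedulable node
kubectl -n kube-system rollout status daemonset/filebeat
kubectl -n kube-system get pods -l k8s-app=filebeat -o wide
# Tail one Pod's log; with -e Filebeat logs to stderr, so startup and
# config-reload activity shows up here
kubectl -n kube-system logs daemonset/filebeat --tail=20
```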
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>
