[TOC]
Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.
Here is how Filebeat works: when you start Filebeat, it starts one or more inputs that look in the locations you have specified for log data. For each log that Filebeat locates, it starts a harvester. Each harvester reads a single log for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output you have configured for Filebeat.
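To see this input → harvester → output flow end to end before wiring up a real backend, a minimal sketch that prints events to stdout via `output.console` can be used (the file name is arbitrary; run it from the unpacked directory of the next section):
```shell
# Throwaway smoke-test config: one log input, events printed to stdout
cat > filebeat-console.yml <<'EOF'
filebeat.inputs:
- type: log
  paths:
    - /var/log/messages
output.console:
  pretty: true
EOF
chmod 644 filebeat-console.yml && sudo chown root filebeat-console.yml
# -e logs to stderr, -c selects the config file; Ctrl-C to stop
sudo ./filebeat -e -c filebeat-console.yml
```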

# Running locally
## Download the package
```shell
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.2-linux-x86_64.tar.gz
tar xf filebeat-7.17.2-linux-x86_64.tar.gz -C /opt/
```
## Configuration options
### Output to Elasticsearch
```shell
[elk@elk01 ~]$ sudo cat /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
# --------------- input ------------------------------
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    name: messages
- type: log
  enabled: true
  paths:
    - /data/k8s/logs/kubernetes.audit
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: log
  fields:
    name: k8sAudit
# --------------- processors ------------------------------
processors:
  - add_tags:
      target: "environment"
      tags: ["kubernetes", "production"]
# --------------- output ------------------------------
output.elasticsearch:
  hosts: ["192.168.31.29:9200", "192.168.31.193:9200", "192.168.31.120:9200"]
  indices:
    - index: "messages-%{+yyyy.MM}"
      when.equals:
        fields.name: "messages"
    - index: "k8s-audit-%{+yyyy.MM}"
      when.equals:
        fields.name: "k8sAudit"
# --------------- setup ------------------------------
setup.ilm.enabled: false
setup.dashboards.enabled: false
```
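Before starting the shipper, the config and the connection to the configured Elasticsearch hosts can be checked with Filebeat's built-in `test` subcommands:
```shell
cd /opt/filebeat-7.17.2-linux-x86_64
# Validate the YAML and setting names
sudo ./filebeat test config -c filebeat.yml
# Try to connect to every configured Elasticsearch host
sudo ./filebeat test output -c filebeat.yml
```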
### Output to Logstash
```shell
[kafka@elk02 ~]$ sudo egrep -v '^ {,5}#|^$' /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
output.logstash:
  hosts: ["10.0.0.129:5044"]
```
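On the Logstash side, a matching beats input must be listening on 5044. A minimal pipeline sketch, written here as a heredoc (the `/etc/logstash/conf.d/beats.conf` path is an assumption, and the stdout output is only for verification):
```shell
# Minimal Logstash pipeline accepting beats traffic on 5044
cat > /etc/logstash/conf.d/beats.conf <<'EOF'
input {
  beats {
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
}
EOF
```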
### Output to Kafka
```shell
[kafka@elk02 ~]$ sudo egrep -v "^$|^ {,5}#" /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields: {log_topic: "elk"}
output.kafka:
  hosts: ["10.0.0.127:9092", "10.0.0.128:9092", "10.0.0.129:9092"]
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: true
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
```
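Once Filebeat is running with this config, delivery can be confirmed by consuming a few records from the `elk` topic with the console consumer that ships with Kafka (the install path of the Kafka scripts is an assumption):
```shell
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 10.0.0.127:9092 \
  --topic elk --from-beginning --max-messages 5
```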
### Hot-reloading configuration
```shell
# Hot-reload input sources and the bundled modules.
# Changes to the main config file itself still require a restart.
[kafka@elk02 filebeat-7.17.2-linux-x86_64]$ sudo egrep -v "^$|^ {,5}#" /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
filebeat.config.inputs:
  enabled: true
  path: configs/*.yml
  reload.enabled: true
  reload.period: 10s
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
output.kafka:
  enabled: true
  hosts: ["10.0.0.127:9092", "10.0.0.128:9092", "10.0.0.129:9092"]
  topic: 'logstash'
  partition.round_robin:
    reachable_only: true
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
# Files loaded via filebeat.config.inputs contain a bare list of
# inputs, without the filebeat.inputs key
[kafka@elk02 filebeat-7.17.2-linux-x86_64]$ cat configs/nginx.yml
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
[kafka@elk02 filebeat-7.17.2-linux-x86_64]$ chmod 644 configs/nginx.yml
[kafka@elk02 filebeat-7.17.2-linux-x86_64]$ sudo chown root configs/nginx.yml
```
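With reloading enabled, dropping a new input file into `configs/` takes effect within `reload.period`, no restart needed. A sketch with a hypothetical application log:
```shell
cd /opt/filebeat-7.17.2-linux-x86_64
# New input file: a bare list of inputs, same format as configs/nginx.yml
cat > configs/app.yml <<'EOF'
- type: log
  enabled: true
  paths:
    - /var/log/app/access.log
EOF
chmod 644 configs/app.yml && sudo chown root configs/app.yml
# Within ~10s (reload.period) filebeat starts harvesting the new path
```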
## Set permissions
```shell
sudo chown -R elk.elk /opt/filebeat-7.17.2-linux-x86_64
sudo chown root /opt/filebeat-7.17.2-linux-x86_64/filebeat.yml
```
## Create directories
```shell
mkdir /opt/filebeat-7.17.2-linux-x86_64/{logs,pid}
```
## Start the service
```shell
cd /opt/filebeat-7.17.2-linux-x86_64/
nohup sudo ./filebeat -e &>> logs/filebeat-server-`date "+%Y%m%d"`.log & echo $! > pid/filebeat.pid
```
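A quick sanity check that the process is up and writing its log (the pid file stores the PID of the backgrounded command):
```shell
ps -p $(cat pid/filebeat.pid)
tail -n 20 logs/filebeat-server-$(date "+%Y%m%d").log
```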
## Stop the service
```shell
cat /opt/filebeat-7.17.2-linux-x86_64/pid/filebeat.pid | xargs -I {} sudo kill {}
```
# Running in a container
The RBAC permissions `filebeat` needs to collect logs from the cluster. Save as `rbac.yml`:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - configmaps
  resourceNames:
  - kubeadm-config
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
```
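After `rbac.yml` is applied (last step below), `kubectl auth can-i` can confirm the ServiceAccount received the expected permissions:
```shell
# Should print "yes": pods are readable cluster-wide via the ClusterRole
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:filebeat
# Should print "no": nothing above grants access to secrets
kubectl auth can-i get secrets --as=system:serviceaccount:kube-system:filebeat
```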
Every node needs a `filebeat` container to collect its logs, so filebeat is deployed as a DaemonSet. Save as `daemonset.yml`:
```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.17.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        # main config, mounted at the path passed to -c above
        - name: mainconfig
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        # hot-reloaded input files; configs/*.yml is resolved relative to /usr/share/filebeat
        - name: config
          mountPath: /usr/share/filebeat/configs
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: mainconfig
        configMap:
          defaultMode: 0640
          name: filebeat-main-config
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
```
The main configuration file for running `filebeat`. Input sources live in separate hot-reloaded files, so changing them later does not require restarting filebeat. Save as `config-kafka-main.yml`:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-main-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config.inputs:
      enabled: true
      path: configs/*.yml
      reload.enabled: true
      reload.period: 10s
    output.kafka:
      hosts: ["192.168.31.235:9092", "192.168.31.202:9092", "192.168.31.140:9092"]
      topics:
        - topic: 'messages'
          when.equals:
            fields.type: messages
        - topic: 'k8s-audit'
          when.equals:
            fields.type: k8s-audit
      partition.round_robin:
        reachable_only: true
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
```
The input files that actually define which log paths to collect. Save as `config-kafka.yml`:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  log.yml: |-
    - type: log
      enabled: true
      fields:
        type: messages
      paths:
        - /var/log/messages
    - type: log
      enabled: true
      fields:
        type: k8s-audit
      paths:
        - /data/k8s/logs/kube-apiserver/kubernetes.audit
```
Start the filebeat service:
```shell
kubectl apply -f rbac.yml
kubectl apply -f config-kafka-main.yml
kubectl apply -f config-kafka.yml
kubectl apply -f daemonset.yml
```
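To verify the rollout, check that a pod is Running on every node and follow the pods' logs to confirm the inputs were loaded and Kafka is reachable:
```shell
# One filebeat pod per node should be Running
kubectl -n kube-system get ds filebeat
kubectl -n kube-system get pods -l k8s-app=filebeat -o wide
# Inspect recent log lines from the filebeat pods
kubectl -n kube-system logs -l k8s-app=filebeat --tail=50
```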
## References
Filebeat official documentation: https://www.elastic.co/guide/en/beats/filebeat/7.17/index.html