[TOC]
# Load Balancer Server
Install the following services on a node that is not part of the cluster.
## Download docker-compose
```shell
curl -L https://get.daocloud.io/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
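To confirm the download succeeded and the binary is executable, print its version:
```shell
docker-compose --version
```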
## Install nginx
**Create directories**
```shell
mkdir -p /etc/nginx/{conf.d,stream}
```
**nginx main configuration**
```shell
cat <<-"EOF" | sudo tee /etc/nginx/nginx.conf > /dev/null
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

stream {
    log_format proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
                     '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"';

    include /etc/nginx/stream/*.conf;
}
EOF
```
**Layer 4 proxy for the apiserver service**
```shell
cat <<-"EOF" | sudo tee /etc/nginx/stream/apiserver.conf > /dev/null
upstream apiserver {
    server 192.168.31.103:6443 max_fails=3 fail_timeout=5s;
    server 192.168.31.79:6443 max_fails=3 fail_timeout=5s;
}

server {
    listen 6443;
    # proxy_protocol on;
    proxy_pass apiserver;
    access_log /var/log/nginx/apiserver_tcp_access.log proxy;
    error_log /var/log/nginx/apiserver_tcp_error.log;
}
EOF
```
> Note: replace the `server` entries above with the actual master node IP addresses.
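Optionally, you can sanity-check the configuration with `nginx -t` in a throwaway container before starting the service; the mounts below mirror the compose file in the next step:
```shell
docker run --rm \
  -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /etc/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /etc/nginx/stream:/etc/nginx/stream:ro \
  nginx:1.21-alpine nginx -t
```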
**docker-compose configuration**
```shell
cat <<-EOF | sudo tee /etc/nginx/docker-compose.yaml > /dev/null
version: "3"
services:
  nginx:
    container_name: nginx
    image: nginx:1.21-alpine
    volumes:
      - "./stream:/etc/nginx/stream:ro"
      - "./conf.d:/etc/nginx/conf.d:ro"
      - "./nginx.conf:/etc/nginx/nginx.conf:ro"
      - "./logs:/var/log/nginx"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro"
    restart: always
    network_mode: "host"
EOF
```
**Start nginx**
```shell
docker-compose -f /etc/nginx/docker-compose.yaml up -d
```
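Confirm that the container is running and the layer 4 proxy is listening on port 6443:
```shell
docker-compose -f /etc/nginx/docker-compose.yaml ps
ss -lnt | grep 6443
```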
## Install keepalived
**Configure keepalived**
```shell
mkdir -p /etc/keepalived
cat <<-EOF | sudo tee /etc/keepalived/keepalived.conf > /dev/null
! Configuration File for keepalived

global_defs {
    max_auto_priority -1
    enable_script_security
    vrrp_skip_check_adv_addr
}

include /etc/keepalived/keepalived_apiserver.conf
EOF
cat <<-EOF | sudo tee /etc/keepalived/keepalived_apiserver.conf > /dev/null
vrrp_script apiserver {
    # Path of the health-check script
    script "/etc/keepalived/chk_apiserver.sh"
    # Built-in user that runs the check script
    user keepalived
    # Seconds between script invocations
    interval 1
    # Number of failures required to mark the check as failed
    fall 5
    # Number of successes required to mark the check as healthy
    rise 3
    # Adjust the priority by this weight
    weight -50
}

# If there are multiple vrrp_instance blocks, their names must be unique,
# including any blocks pulled in by the include directive above
vrrp_instance apiserver {
    # Whether this node starts as the master or a backup
    state MASTER
    # Interface on the internal network that VRRP binds to
    interface eth0
    # Virtual router ID; nodes sharing the same ID form one master/backup group
    virtual_router_id 100
    # Initial priority
    # The effective priority is adjusted by the script weight:
    #   (1) weight > 0: priority + weight while the check succeeds
    #   (2) weight < 0: priority + weight (i.e. lowered) while the check fails
    priority 200
    # Authentication for joining the group
    authentication {
        auth_type PASS
        auth_pass pwd100
    }
    # Configure keepalived in unicast mode
    ## Unicast source address
    unicast_src_ip 192.168.31.103
    ## Unicast peer addresses
    unicast_peer {
        192.168.31.79
    }
    # VIP address
    virtual_ipaddress {
        192.168.31.100
    }
    # Health-check script
    track_script {
        apiserver
    }
}
EOF
```
**keepalived health-check script**
```shell
cat <<-EOF | sudo tee /etc/keepalived/chk_apiserver.sh > /dev/null
#!/bin/sh
count=\$(netstat -lntup | grep -E ':6443' | wc -l)
if [ "\$count" -ge 1 ]; then
    # Exit status 0 means the check succeeded
    exit 0
else
    # Exit status 1 means the check failed
    exit 1
fi
EOF
chmod +x /etc/keepalived/chk_apiserver.sh
```
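You can run the script once by hand to confirm it behaves as expected; it should exit 0 when something is listening on port 6443 and 1 otherwise:
```shell
sh /etc/keepalived/chk_apiserver.sh; echo $?
```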
**docker-compose file**
```shell
cat <<-EOF | sudo tee /etc/keepalived/docker-compose.yaml > /dev/null
version: "3"
services:
  keepalived:
    container_name: keepalived
    image: jiaxzeng/keepalived:2.2.7-alpine3.16
    volumes:
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro"
      - ".:/etc/keepalived"
    cap_add:
      - NET_ADMIN
    network_mode: "host"
    restart: always
EOF
```
**Start keepalived**
```shell
docker-compose -f /etc/keepalived/docker-compose.yaml up -d
```
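On the MASTER node, the VIP should now be bound to the VRRP interface. A quick check (the interface and VIP below match the keepalived configuration above):
```shell
ip addr show eth0 | grep 192.168.31.100
docker logs --tail 20 keepalived
```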
# Install Master Services
## Install kube-apiserver
**Create directories**
```shell
mkdir -p /etc/kubernetes/conf
mkdir -p /var/log/kubernetes/kube-apiserver
```
**Fetch the certificates**
```shell
scp -r k8s-master01:/etc/kubernetes/pki /etc/kubernetes/
```
**Verify that the apiserver certificate is usable**
```shell
MASTER_VIP=192.168.31.100
netcard=$(ip r | awk '/default via/ {print $5}')
[ -n "$netcard" ] && MASTER02_IP=$(ip r | awk -v netcard="$netcard" '{if($3==netcard) print $9}') || echo '$netcard is null'
openssl x509 -noout -in /etc/kubernetes/pki/apiserver.crt -checkip $MASTER02_IP | grep NOT
openssl x509 -noout -in /etc/kubernetes/pki/apiserver.crt -checkip $MASTER_VIP | grep NOT
```
> **Note**: if there is no output, the apiserver certificate can be used as-is. If the output contains `does NOT match certificate`, the apiserver certificate must be regenerated; see the "generate service certificates (certificate used by the apiserver service)" step under "Install kube-apiserver" in [Install the base components from binaries](./install_binaries_kubernetes.md).
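To see exactly which IPs and hostnames the certificate covers, print its Subject Alternative Name extension:
```shell
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep -A1 'Subject Alternative Name'
```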
**Copy the binaries**
```shell
scp k8s-master01:/usr/local/bin/{kube-apiserver,kubectl} /usr/local/bin
```
**Fetch the audit configuration file**
```shell
scp k8s-master01:/etc/kubernetes/conf/kube-apiserver-audit.yml /etc/kubernetes/conf/
```
**Create the kube-apiserver systemd unit**
```shell
scp k8s-master01:/usr/lib/systemd/system/kube-apiserver.service /usr/lib/systemd/system/
```
**Start kube-apiserver**
```shell
systemctl daemon-reload
systemctl enable kube-apiserver.service --now
```
**Verify**
```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver.crt --key /etc/kubernetes/pki/apiserver.key https://localhost:6443/healthz && echo
```
## Install kube-controller-manager
**Create the log directory**
```shell
mkdir /var/log/kubernetes/kube-controller-manager
```
**Copy the binary**
```shell
scp k8s-master01:/usr/local/bin/kube-controller-manager /usr/local/bin
```
**Generate the kubeconfig file for connecting to the cluster**
```shell
scp k8s-master01:/etc/kubernetes/controller-manager.conf /etc/kubernetes/
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/controller-manager.conf
```
**Create the kube-controller-manager systemd unit**
```shell
scp k8s-master01:/usr/lib/systemd/system/kube-controller-manager.service /usr/lib/systemd/system
```
**Start kube-controller-manager**
```shell
systemctl daemon-reload
systemctl enable kube-controller-manager.service --now
```
**Verify**
```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/controller-manager.crt --key /etc/kubernetes/pki/controller-manager.key https://localhost:10257/healthz && echo
```
## Install kube-scheduler
**Create the log directory**
```shell
mkdir /var/log/kubernetes/kube-scheduler
```
**Copy the binary**
```shell
scp k8s-master01:/usr/local/bin/kube-scheduler /usr/local/bin
```
**Generate the kubeconfig file for connecting to the cluster**
```shell
scp k8s-master01:/etc/kubernetes/scheduler.conf /etc/kubernetes/
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/scheduler.conf
```
**Create the kube-scheduler systemd unit**
```shell
scp k8s-master01:/usr/lib/systemd/system/kube-scheduler.service /usr/lib/systemd/system
```
**Start kube-scheduler**
```shell
systemctl daemon-reload
systemctl enable kube-scheduler.service --now
```
**Verify**
```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/scheduler.crt --key /etc/kubernetes/pki/scheduler.key https://localhost:10259/healthz && echo
```
## Fetch the client configuration
```shell
scp k8s-master01:/etc/kubernetes/admin.conf /etc/kubernetes/
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/admin.conf
mkdir -p ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
```
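With the kubeconfig in place, a quick smoke test confirms that kubectl reaches the apiserver through the VIP:
```shell
kubectl cluster-info
kubectl get nodes -o wide
```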
# Install Node Services
See [Add a worker node](./k8s_add_work_node.md).
# Make Master Nodes Unschedulable
```shell
# Label the node as a master node
kubectl label node 192.168.31.79 node-role.kubernetes.io/master=""
# Taint nodes whose role is master so that they are unschedulable
kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master="":NoSchedule --overwrite
```
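Confirm that the label and taint took effect:
```shell
kubectl get node 192.168.31.79 --show-labels
kubectl describe node 192.168.31.79 | grep -i taints
```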
# Update Configuration Files on Existing Nodes
## Master service configuration
**Update the configuration files**
```shell
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/controller-manager.conf
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/scheduler.conf
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/admin.conf
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' ~/.kube/config
```
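A quick grep confirms that every kubeconfig now points at the VIP:
```shell
grep 'server:' /etc/kubernetes/{controller-manager,scheduler,admin}.conf ~/.kube/config
```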
**Restart the master services**
```shell
systemctl restart kube-apiserver kube-controller-manager kube-scheduler
```
**Verify**
```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver.crt --key /etc/kubernetes/pki/apiserver.key https://localhost:6443/healthz && echo
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/controller-manager.crt --key /etc/kubernetes/pki/controller-manager.key https://localhost:10257/healthz && echo
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/scheduler.crt --key /etc/kubernetes/pki/scheduler.key https://localhost:10259/healthz && echo
```
## Node services
**Update the configuration files**
```shell
MASTER_VIP=192.168.31.100
PORT=6443
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/kubelet.conf
sed -ri 's@(server).*@\1: https://'"${MASTER_VIP}:${PORT}"'@g' /etc/kubernetes/proxy.conf
```
**Restart the node services**
```shell
systemctl restart kubelet kube-proxy
```
**Verify**
```shell
# kubelet
curl http://localhost:10248/healthz && echo
# kube-proxy
curl http://localhost:10249/healthz && echo
```
# Additional iptables Rules
```shell
# nginx layer 4 proxy for the apiserver VIP
iptables -t filter -I INPUT -p tcp --dport 6443 -m comment --comment "k8s vip ports" -j ACCEPT
# keepalived heartbeat; not needed if keepalived runs in unicast mode
iptables -t filter -I INPUT -p vrrp -s 192.168.31.0/24 -d 224.0.0.18 -m comment --comment "keepalived Heartbeat" -j ACCEPT
```
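These rules do not survive a reboot, and how to persist them is distro-specific. A minimal sketch for a RHEL/CentOS host with iptables-services installed (the /etc/sysconfig/iptables path is an assumption about your distro):
```shell
# Assumes iptables-services; adjust the path for your distro
iptables-save | sudo tee /etc/sysconfig/iptables > /dev/null
```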