[TOC]
# Environment Setup
**Port planning**
| Type | Port range |
| ------------------------------------ | ------------------------------------------------------------ |
| etcd database | 2379, 2380, 2381 |
| k8s component ports | 6443, 10249, 10250, 10256, 10257, 10259 |
| k8s add-on ports | Calico: 179, 9099 |
| k8s NodePort range | 30000 - 32767 |
| ip_local_port_range | 32768 - 65535 |
The port types above are explained below:
a) `etcd ports`: ports required by etcd
b) `k8s component ports`: the core components (apiserver, kubelet, kube-proxy, controller-manager, scheduler)
c) `k8s add-on ports`: Calico and nginx-ingress-controller ports
d) `k8s NodePort range`: applications running inside containers expose services externally through ports in this range, so an application's published port must fall within it
e) `ip_local_port_range`: when a process on the host connects to an external application it must establish a TCP connection, which needs a local port; the kernel picks an unused port from this range
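Before continuing, it can be worth confirming that none of the planned ports are already taken. A minimal check, assuming the `ss` tool from iproute2 is installed:
```shell
# list listening sockets and flag any that collide with the planned component ports
ss -tulnp | grep -E ':(2379|2380|2381|6443|10249|10250|10256|10257|10259)\b' || echo "planned ports are free"
```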
**Add /etc/hosts entries**
```shell
cat >> /etc/hosts <<-EOF
{IP} {HOSTNAME}
EOF
```
> Replace with the actual IP address and hostname.
**Disable the firewall**
```shell
sudo systemctl disable firewalld --now
```
**Disable SELinux**
```shell
# temporary (effective immediately)
sudo setenforce 0
# permanent (takes effect after reboot)
sed -ri 's/(SELINUX=).*/\1disabled/g' /etc/selinux/config
```
**Disable swap**
```shell
# temporary (effective immediately)
swapoff -a
# permanent; requires a reboot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
**Load the IPVS kernel modules**
```shell
cat > /etc/sysconfig/modules/ipvs.modules <<-EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- br_netfilter
EOF
# load the ipvs modules
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# verify
lsmod | grep -e ip_vs -e nf_conntrack -e br_netfilter
```
Note: `*.modules` files under `/etc/sysconfig/modules/` are loaded automatically on reboot.
**Install ipset and related packages**
```shell
yum install ipset ipvsadm conntrack-tools vim -y # make sure the ipset package is installed
```
**Tune kernel parameters**
```shell
cat > /etc/sysctl.d/kubernetes.conf << EOF
# packets forwarded across layer-2 bridges are also filtered by iptables FORWARD rules
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# kernel IP forwarding
net.ipv4.ip_forward = 1
# allow TIME-WAIT sockets to be reused for new TCP connections (default 0 = off)
net.ipv4.tcp_tw_reuse = 1
# fast recycling of TIME-WAIT sockets (default 0 = off); enabling it is a disaster for clients behind NAT devices (e.g. forwarded container traffic), so keep it off
net.ipv4.tcp_tw_recycle = 0
# local port range the system may use, i.e. ports for outbound connections
net.ipv4.ip_local_port_range = 32768 65535
# maximum number of TIME_WAIT sockets kept by the kernel (default 4096)
net.ipv4.tcp_max_tw_buckets = 65535
# whether to validate the source address of packets (0 = no validation)
net.ipv4.conf.all.rp_filter = 0
# enable IPv6 routing/forwarding
net.ipv6.conf.all.forwarding = 1
# enable IPv4 routing/forwarding
net.ipv4.conf.all.forwarding = 1
# how long sockets are held in FIN-WAIT-2
net.ipv4.tcp_fin_timeout = 15
EOF
# apply kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
```
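To confirm the settings took effect, a few of the keys can be read back; this quick sanity check is not part of the original steps:
```shell
# print the current values of the most important keys
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.ipv4.ip_local_port_range
```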
**Configure time synchronization**
```shell
# install the chrony package
yum install -y chrony
# comment out the existing server entries
sed -ri 's/(server .* iburst)/# \1/g' /etc/chrony.conf
# add an NTP source
echo "server ntp.aliyun.com iburst" >> /etc/chrony.conf
# restart the chronyd service
systemctl restart chronyd
# verify
chronyc sources
```
# Install Docker
**Create the Docker config and data directories**
```shell
mkdir -p /etc/docker/ /data/docker
```
**Download the Docker static binary package**
```shell
curl -SL -o /usr/local/src/docker-20.10.24.tgz https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz
```
**Unpack the binary package**
```shell
tar xf /usr/local/src/docker-20.10.24.tgz -C /opt
cp /opt/docker/* /usr/local/bin/
rm -rf /opt/docker
```
**Create the Docker systemd unit**
```shell
cat > /usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/local/bin/dockerd --config-file /etc/docker/daemon.json
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
```
**Create the daemon.json file**
```shell
cat > /etc/docker/daemon.json << EOF
{
  "data-root": "/data/docker/",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://1nj0zren.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://f1361db2.m.daocloud.io",
    "https://registry.docker-cn.com"
  ],
  "log-driver": "json-file",
  "log-level": "info"
}
EOF
```
**Start Docker**
```shell
systemctl daemon-reload
systemctl enable docker.service --now
```
**Install docker-compose**
```shell
curl -L https://github.com/docker/compose/releases/download/v2.23.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose version
```
# Install etcd
**CA certificate**
```shell
mkdir -p /etc/kubernetes/pki/etcd
cd /etc/kubernetes/pki/etcd && openssl genrsa -out ca.key 2048
cat <<-EOF | sudo tee ca-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
CN = etcd-ca
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = etcd-ca
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment, Certificate Sign
basicConstraints=CA:TRUE
subjectKeyIdentifier=hash
subjectAltName=@alt_names
EOF
openssl req -x509 -new -nodes -key ca.key -days 36500 -out ca.crt -config ca-csr.conf -extensions v3_ext
rm -rf /etc/kubernetes/pki/etcd/{ca.srl,ca-csr.conf,ca.csr}
```
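Optionally, inspect the CA that was just generated. A sketch using standard `openssl x509` options (not part of the original steps):
```shell
# print the subject and validity period of the etcd CA
openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -subject -dates
# confirm it really is a CA certificate
openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -text | grep -A1 'Basic Constraints'
```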
**Service certificates**
Certificates for communication between etcd members and with clients (server and peer certificates)
```shell
ETCD_IP01=192.168.31.95
ETCD_IP02=192.168.31.78
ETCD_IP03=192.168.31.253
cd /etc/kubernetes/pki/etcd && openssl genrsa -out server.key 2048
cat <<-EOF | sudo tee server-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
IP.1 = 127.0.0.1
IP.2 = $ETCD_IP01
IP.3 = $ETCD_IP02
IP.4 = $ETCD_IP03
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=serverAuth,clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
subjectAltName=@alt_names
EOF
openssl req -new -key server.key -out server.csr -config server-csr.conf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out server.crt -days 36500 \
-extensions v3_ext -extfile server-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/server.crt
rm -rf /etc/kubernetes/pki/etcd/{server-csr.conf,server.csr}
cd /etc/kubernetes/pki/etcd && openssl genrsa -out peer.key 2048
cat <<-EOF | sudo tee peer-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
IP.1 = 127.0.0.1
IP.2 = $ETCD_IP01
IP.3 = $ETCD_IP02
IP.4 = $ETCD_IP03
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=serverAuth,clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
subjectAltName=@alt_names
EOF
openssl req -new -key peer.key -out peer.csr -config peer-csr.conf
openssl x509 -req -in peer.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out peer.crt -days 36500 \
-extensions v3_ext -extfile peer-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/peer.crt
rm -rf /etc/kubernetes/pki/etcd/{peer-csr.conf,peer.csr}
```
**Distribute the etcd certificates to each node**
```shell
scp -r /etc/kubernetes/pki/etcd root@k8s-node01:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/etcd root@k8s-node02:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/etcd root@k8s-node03:/etc/kubernetes/pki/
```
**Create the etcd data directory**
```shell
mkdir -p /var/lib/etcd
chmod 700 /var/lib/etcd
```
**Download the etcd package**
```shell
curl -SL -o /usr/local/src/etcd-v3.5.6-linux-amd64.tar.gz https://mirrors.huaweicloud.com/etcd/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
tar xf /usr/local/src/etcd-v3.5.6-linux-amd64.tar.gz -C /opt
cp -r /opt/etcd-v3.5.6-linux-amd64/etcd* /usr/local/bin
rm -rf /opt/etcd-v3.5.6-linux-amd64
```
**Distribute the etcd binaries to each etcd node**
```shell
scp /usr/local/bin/etcd* root@k8s-node01:/usr/local/bin
scp /usr/local/bin/etcd* root@k8s-node02:/usr/local/bin
scp /usr/local/bin/etcd* root@k8s-node03:/usr/local/bin
```
**Create the etcd systemd unit**
```shell
netcar=`ip r | awk '/default via/ {print $5}'`
[ ! -z $netcar ] && LOCAL_IP=`ip r | awk -v netcar=$netcar '{if($3==netcar) print $9}'` || echo '$netcar is null'
ETCD_IP01=192.168.31.95
ETCD_IP02=192.168.31.78
ETCD_IP03=192.168.31.253
cat <<EOF | sudo tee /usr/lib/systemd/system/etcd.service > /dev/null
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://etcd.io/docs/v3.5/
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name=etcd01 \\
--data-dir=/var/lib/etcd \\
--snapshot-count=1000000 \\
--experimental-initial-corrupt-check=true \\
--experimental-watch-progress-notify-interval=5s \\
--advertise-client-urls=https://${LOCAL_IP}:2379 \\
--initial-advertise-peer-urls=https://${LOCAL_IP}:2380 \\
--initial-cluster=etcd01=https://${ETCD_IP01}:2380,etcd02=https://${ETCD_IP02}:2380,etcd03=https://${ETCD_IP03}:2380 \\
--initial-cluster-token=etcd-cluster \\
--initial-cluster-state=new \\
--client-cert-auth=true \\
--listen-client-urls=https://${LOCAL_IP}:2379 \\
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
--cert-file=/etc/kubernetes/pki/etcd/server.crt \\
--key-file=/etc/kubernetes/pki/etcd/server.key \\
--peer-client-cert-auth=true \\
--listen-peer-urls=https://${LOCAL_IP}:2380 \\
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \\
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
LimitNOFILE=65536
Restart=always
RestartSec=30
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
```
**Distribute the etcd systemd unit**
> Note: the unit written above embeds this node's IP and `--name=etcd01`; after copying it, edit `--name` (etcd02/etcd03) and the listen/advertise addresses on each node before starting etcd.
```shell
scp /usr/lib/systemd/system/etcd.service k8s-node01:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service k8s-node02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service k8s-node03:/usr/lib/systemd/system/
```
**Start etcd**
```shell
systemctl daemon-reload
systemctl enable etcd.service --now
```
**Verify etcd**
```shell
etcd_cert_dir=/etc/kubernetes/pki/etcd
ETCDCTL_API=3 etcdctl --cacert=${etcd_cert_dir}/ca.crt --cert=${etcd_cert_dir}/server.crt --key=${etcd_cert_dir}/server.key --endpoints="https://192.168.31.95:2379,https://192.168.31.78:2379,https://192.168.31.253:2379" endpoint health -w table
```
Note: replace the `IP addresses` above with your own.
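Besides endpoint health, `etcdctl` can also report member status (leader, DB size, raft term). A sketch reusing the `etcd_cert_dir` variable from the block above:
```shell
ETCDCTL_API=3 etcdctl --cacert=${etcd_cert_dir}/ca.crt --cert=${etcd_cert_dir}/server.crt --key=${etcd_cert_dir}/server.key \
  --endpoints="https://192.168.31.95:2379,https://192.168.31.78:2379,https://192.168.31.253:2379" endpoint status -w table
```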
# Deploy the master node
**Download the Kubernetes server binaries**
[Official Kubernetes download page](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md) (requires unrestricted internet access).
```shell
curl -SL -o /usr/local/src/kubernetes-server-linux-amd64.tar.gz https://dl.k8s.io/v1.18.18/kubernetes-server-linux-amd64.tar.gz
```
Note: in my testing the download works even without unrestricted access, but it may time out or fail to connect; simply retry a few times.
**Unpack the Kubernetes archive**
```shell
tar xf /usr/local/src/kubernetes-server-linux-amd64.tar.gz -C /opt/
```
## CA certificates
Cluster-wide Kubernetes CA certificate
```shell
cd /etc/kubernetes/pki && openssl genrsa -out ca.key 2048
cat <<-EOF | sudo tee ca-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
CN = kubernetes-ca
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment, Certificate Sign
basicConstraints=CA:TRUE
subjectKeyIdentifier=hash
subjectAltName=@alt_names
EOF
openssl req -x509 -new -nodes -key ca.key -days 36500 -out ca.crt -config ca-csr.conf -extensions v3_ext
rm -rf /etc/kubernetes/pki/{ca.srl,ca-csr.conf}
```
CA certificate for the Kubernetes front proxy clients
```shell
cd /etc/kubernetes/pki && openssl genrsa -out front-proxy-ca.key 2048
cat <<-EOF | sudo tee front-proxy-ca-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
CN = front-proxy-ca
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = front-proxy-ca
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment, Certificate Sign
basicConstraints=CA:TRUE
subjectKeyIdentifier=hash
subjectAltName=@alt_names
EOF
openssl req -x509 -new -nodes -key front-proxy-ca.key -days 36500 -out front-proxy-ca.crt -config front-proxy-ca-csr.conf -extensions v3_ext
rm -rf /etc/kubernetes/pki/{front-proxy-ca-csr.conf,front-proxy-ca.srl}
```
## Install kube-apiserver
**Create directories**
```shell
mkdir -p /etc/kubernetes/{conf,manifests,pki}
mkdir -p /var/log/kubernetes/kube-apiserver
```
**Copy the binaries**
```shell
cp /opt/kubernetes/server/bin/{kube-apiserver,kubectl} /usr/local/bin/
```
**Generate the service certificates**
Certificate used by the apiserver
```shell
# apiserver certificate
SERVICE_IP=10.96.0.1 # ClusterIP of the kubernetes service in the default namespace
MASTER_VIP=192.168.31.100 # VIP of the master nodes
MASTER_IP01=192.168.31.103
MASTER_IP02=192.168.31.79
cd /etc/kubernetes/pki && openssl genrsa -out apiserver.key 2048
cat <<-EOF | sudo tee apiserver-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
CN = kube-apiserver
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = $SERVICE_IP
IP.2 = $MASTER_IP01
IP.3 = $MASTER_IP02
IP.4 = $MASTER_VIP
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=serverAuth,clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
subjectAltName=@alt_names
EOF
openssl req -new -key apiserver.key -out apiserver.csr -config apiserver-csr.conf
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out apiserver.crt -days 36500 \
-extensions v3_ext -extfile apiserver-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/apiserver.crt
[ $? -eq 0 ] && rm -rf /etc/kubernetes/pki/{apiserver.csr,apiserver-csr.conf}
```
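If you want to double-check the SANs that went into the apiserver certificate (they must cover the service IP, the master IPs and the VIP), a quick inspection sketch:
```shell
# list the DNS names and IPs embedded in the certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
```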
Certificate for kube-apiserver to communicate with etcd
```shell
cd /etc/kubernetes/pki && openssl genrsa -out apiserver-etcd-client.key 2048
cat <<-EOF | sudo tee apiserver-etcd-client-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
O = system:masters
CN = kube-apiserver-etcd-client
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
EOF
openssl req -new -key apiserver-etcd-client.key -out apiserver-etcd-client.csr -config apiserver-etcd-client-csr.conf
openssl x509 -req -in apiserver-etcd-client.csr -CA etcd/ca.crt -CAkey etcd/ca.key \
-CAcreateserial -out apiserver-etcd-client.crt -days 36500 \
-extensions v3_ext -extfile apiserver-etcd-client-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/apiserver-etcd-client.crt
[ $? -eq 0 ] && rm -rf /etc/kubernetes/pki/{apiserver-etcd-client-csr.conf,apiserver-etcd-client.csr}
```
Certificate for kube-apiserver to communicate with the kubelet
```shell
cd /etc/kubernetes/pki && openssl genrsa -out apiserver-kubelet-client.key 2048
cat <<-EOF | sudo tee apiserver-kubelet-client-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
O = system:masters
CN = kube-apiserver-kubelet-client
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
EOF
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -config apiserver-kubelet-client-csr.conf
openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out apiserver-kubelet-client.crt -days 36500 \
-extensions v3_ext -extfile apiserver-kubelet-client-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt
[ $? -eq 0 ] && rm -rf /etc/kubernetes/pki/{apiserver-kubelet-client.csr,apiserver-kubelet-client-csr.conf}
```
Front proxy client certificate
```shell
cd /etc/kubernetes/pki && openssl genrsa -out front-proxy-client.key 2048
cat <<-EOF | sudo tee front-proxy-client-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
CN = front-proxy-client
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
EOF
openssl req -new -key front-proxy-client.key -out front-proxy-client.csr -config front-proxy-client-csr.conf
openssl x509 -req -in front-proxy-client.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key \
-CAcreateserial -out front-proxy-client.crt -days 36500 \
-extensions v3_ext -extfile front-proxy-client-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-client.crt
[ $? -eq 0 ] && rm -rf /etc/kubernetes/pki/{front-proxy-client.csr,front-proxy-client-csr.conf}
```
ServiceAccount key pair
```shell
cd /etc/kubernetes/pki && openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
```
**Create the audit policy file**
```shell
cat > /etc/kubernetes/conf/kube-apiserver-audit.yml <<-EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# log request metadata (requesting user, timestamp, resource, verb, etc.) for all resources, but no request or response bodies
- level: Metadata
EOF
```
**Create the kube-apiserver systemd unit**
```shell
cat <<'EOF' | sudo tee /usr/lib/systemd/system/kube-apiserver.service >> /dev/null
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--bind-address=0.0.0.0 \
--insecure-port=0 \
--secure-port=6443 \
--allow-privileged=true \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--client-ca-file=/etc/kubernetes/pki/ca.crt \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--etcd-servers=https://192.168.31.95:2379,https://192.168.31.78:2379,https://192.168.31.253:2379 \
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key \
--requestheader-allowed-names=front-proxy-client \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--alsologtostderr=true --logtostderr=false --v=4 \
--log-dir=/var/log/kubernetes/kube-apiserver \
--audit-log-path=/var/log/kubernetes/kube-apiserver/apiserver.audit \
--audit-policy-file=/etc/kubernetes/conf/kube-apiserver-audit.yml \
--audit-log-maxsize=100 --audit-log-maxage=7
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
```
> Note: adjust `etcd-servers` (the etcd endpoints) and `service-cluster-ip-range` (the service CIDR).
**Start kube-apiserver**
```shell
systemctl daemon-reload
systemctl enable kube-apiserver.service --now
```
**Verify**
```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver.crt --key /etc/kubernetes/pki/apiserver.key https://localhost:6443/healthz && echo
```
## Install kube-controller-manager
**Create the log directory**
```shell
mkdir /var/log/kubernetes/kube-controller-manager
```
**Copy the binary**
```shell
cp /opt/kubernetes/server/bin/kube-controller-manager /usr/local/bin/
```
**Generate the certificate**
```shell
cd /etc/kubernetes/pki && openssl genrsa -out controller-manager.key 2048
cat <<-EOF | sudo tee controller-manager-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
CN = system:kube-controller-manager
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
EOF
openssl req -new -key controller-manager.key -out controller-manager.csr -config controller-manager-csr.conf
openssl x509 -req -in controller-manager.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out controller-manager.crt -days 36500 \
-extensions v3_ext -extfile controller-manager-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/controller-manager.crt
[ $? -eq 0 ] && rm -rf /etc/kubernetes/pki/{controller-manager.csr,controller-manager-csr.conf}
```
**Generate the kubeconfig used to connect to the cluster**
```shell
KUBE_APISERVER="https://192.168.31.103:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.crt \
--client-key=/etc/kubernetes/pki/controller-manager.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/controller-manager.conf
```
**Create the kube-controller-manager systemd unit**
```shell
cat <<'EOF' | sudo tee /usr/lib/systemd/system/kube-controller-manager.service >> /dev/null
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--bind-address=0.0.0.0 \
--port=0 \
--secure-port=10257 \
--leader-elect=true \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--node-cidr-mask-size=24 \
--service-cluster-ip-range=10.96.0.0/12 \
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf \
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf \
--client-ca-file=/etc/kubernetes/pki/ca.crt \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key \
--root-ca-file=/etc/kubernetes/pki/ca.crt \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.conf \
--controllers=*,bootstrapsigner,tokencleaner \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
--use-service-account-credentials=true \
--experimental-cluster-signing-duration=87600h0m0s \
--alsologtostderr=true --logtostderr=false --v=4 \
--log-dir=/var/log/kubernetes/kube-controller-manager
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
```
> Note: adjust `service-cluster-ip-range` (the service CIDR) and `cluster-cidr` (the pod CIDR).
**Start kube-controller-manager**
```shell
systemctl daemon-reload
systemctl enable kube-controller-manager.service --now
```
**Verify**
```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/controller-manager.crt --key /etc/kubernetes/pki/controller-manager.key https://localhost:10257/healthz && echo
```
## Install kube-scheduler
**Copy the binary**
```shell
cp /opt/kubernetes/server/bin/kube-scheduler /usr/local/bin/
```
**Create the log directory**
```shell
mkdir /var/log/kubernetes/kube-scheduler
```
**Generate the kube-scheduler certificate**
```shell
cd /etc/kubernetes/pki && openssl genrsa -out scheduler.key 2048
cat <<-EOF | sudo tee scheduler-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
CN = system:kube-scheduler
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
EOF
openssl req -new -key scheduler.key -out scheduler.csr -config scheduler-csr.conf
openssl x509 -req -in scheduler.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out scheduler.crt -days 36500 \
-extensions v3_ext -extfile scheduler-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/scheduler.crt
[ $? -eq 0 ] && rm -rf /etc/kubernetes/pki/{scheduler.csr,scheduler-csr.conf}
```
**Generate the kubeconfig used to connect to the cluster**
```shell
KUBE_APISERVER="https://192.168.31.103:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.crt \
--client-key=/etc/kubernetes/pki/scheduler.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/scheduler.conf
```
**Create the kube-scheduler systemd unit**
```shell
cat <<'EOF' | sudo tee /usr/lib/systemd/system/kube-scheduler.service >> /dev/null
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--bind-address=0.0.0.0 \
--port=0 \
--secure-port=10259 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.conf \
--authentication-kubeconfig=/etc/kubernetes/scheduler.conf \
--authorization-kubeconfig=/etc/kubernetes/scheduler.conf \
--alsologtostderr=true --logtostderr=false --v=4 \
--log-dir=/var/log/kubernetes/kube-scheduler
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
```
**Start kube-scheduler**
```shell
systemctl daemon-reload
systemctl enable kube-scheduler.service --now
```
**Verify**
```shell
curl -sk --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/scheduler.crt --key /etc/kubernetes/pki/scheduler.key https://localhost:10259/healthz && echo
```
## Client setup and verification
**Client setup**
```shell
cd /etc/kubernetes/pki && openssl genrsa -out admin.key 2048
cat <<-EOF | sudo tee admin-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
O = system:masters
CN = kubernetes-admin
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
EOF
openssl req -new -key admin.key -out admin.csr -config admin-csr.conf
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out admin.crt -days 36500 \
-extensions v3_ext -extfile admin-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/admin.crt
[ $? -eq 0 ] && rm -rf /etc/kubernetes/pki/admin{.csr,-csr.conf}
KUBE_APISERVER="https://192.168.31.103:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-credentials system:admin \
--client-certificate=/etc/kubernetes/pki/admin.crt \
--client-key=/etc/kubernetes/pki/admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:admin \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/admin.conf
mkdir -p ~/.kube && cp /etc/kubernetes/admin.conf ~/.kube/config
```
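With the admin kubeconfig in place, kubectl should now reach the cluster. A quick smoke test (on k8s v1.18, `kubectl get componentstatuses` still reports control-plane health):
```shell
kubectl cluster-info
kubectl get componentstatuses
```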
# Deploy the worker nodes
## Bootstrap token authentication
1. Generate a bootstrap token
```shell
token_id=`cat /dev/urandom | head -c 10 | md5sum | head -c 6`
token_secret=`cat /dev/urandom | head -c 10 | md5sum | head -c 16`
echo $token_id | grep -q '[^0-9]' || echo -e "\n\033[31m[WARNING]\033[0m token_id ${token_id} is all digits; please regenerate it."
cat <<-EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${token_id} # format: bootstrap-token-[TOKENID]
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: ${token_id} # format: [a-z0-9]{6}
  token-secret: ${token_secret} # format: [a-z0-9]{16}
  expiration: `date -d '1 day' +%F`T`date +%T`+08:00 # expiry time; the token is deleted once it expires (default validity is one day)
  usage-bootstrap-authentication: "true" # the token may be used as a bearer token to authenticate to the API server
  usage-bootstrap-signing: "true" # the token may be used to sign the cluster-info ConfigMap as described below
  auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress
EOF
```
> **Note**: adjust the expiration time as needed.
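Before moving on, you can confirm the Secret landed. A quick check reusing the `${token_id}` variable from the block above:
```shell
kubectl -n kube-system get secret --field-selector type=bootstrap.kubernetes.io/token
# decode the expiration to confirm it lies in the future
kubectl -n kube-system get secret bootstrap-token-${token_id} -o jsonpath='{.data.expiration}' | base64 -d; echo
```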
2. Grant the bootstrap token permission to create CSRs, i.e. allow the kubelet to create certificate signing requests
```shell
cat <<-EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
EOF
```
3. Grant the bootstrap token permission so that kube-controller-manager can automatically approve the CSRs it submits
```shell
cat <<-EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
```
4. Grant kubelets permission so that kube-controller-manager automatically approves their certificate rotation requests
```shell
cat <<-EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
```
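A quick way to confirm all three bindings exist (a sanity check, not in the original steps):
```shell
kubectl get clusterrolebinding create-csrs-for-bootstrapping auto-approve-csrs-for-group auto-approve-renewals-for-nodes
```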
## Install kubelet
**Create the log and data directories**
```shell
mkdir /var/log/kubernetes/kubelet
mkdir /var/lib/kubelet
```
**Copy the binary**
```shell
cp /opt/kubernetes/server/bin/kubelet /usr/local/bin/
```
**Generate the bootstrap kubeconfig used to join the cluster**
```shell
KUBE_APISERVER="https://192.168.31.103:6443"
if [ `kubectl -n kube-system get secret --field-selector type=bootstrap.kubernetes.io/token -o name | wc -l` -ge 1 ];then
token_id=`kubectl -n kube-system get secret --field-selector type=bootstrap.kubernetes.io/token -ojsonpath='{.items[0].data.token-id}' | base64 -d`
token_secret=`kubectl -n kube-system get secret --field-selector type=bootstrap.kubernetes.io/token -ojsonpath='{.items[0].data.token-secret}' | base64 -d`
TOKEN="${token_id}.${token_secret}"
echo ${TOKEN}
else
echo -e "\n\033[31m[WARNING]\033[0m no bootstrap token found; please generate a bootstrap token first..."
fi
kubectl config set-cluster bootstrap \
--server=$KUBE_APISERVER \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-credentials kubelet-bootstrap \
--token=$TOKEN \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-context bootstrap --user=kubelet-bootstrap \
--cluster=bootstrap --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config use-context bootstrap \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
```
**Create the kubelet startup parameters**
```shell
netcar=`ip r | awk '/default via/ {print $5}'`
[ ! -z $netcar ] && ipaddr=`ip r | awk -v netcar=$netcar '{if($3==netcar) print $9}'` || echo '$netcar is null'
cat > /etc/kubernetes/conf/kubelet.conf <<EOF
KUBELET_KUBECONFIG_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
KUBELET_CONFIG_ARGS="--config=/var/lib/kubelet/config.yaml"
KUBELET_NETWORK_ARGS="--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
KUBELET_DATA_ARGS="--root-dir=/var/lib/kubelet --cert-dir=/var/lib/kubelet/pki --rotate-certificates"
KUBELET_LOG_ARGS="--alsologtostderr=true --logtostderr=false --v=4 --log-dir=/var/log/kubernetes/kubelet"
KUBELET_EXTRA_ARGS="--hostname-override=$ipaddr --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2"
EOF
```
> Note: set `hostname-override` to this node's IP address. `cni-conf-dir` defaults to /etc/cni/net.d, `cni-bin-dir` defaults to /opt/cni/bin, and `root-dir` defaults to /var/lib/kubelet.
**Create the kubelet configuration file**
```shell
cat > /var/lib/kubelet/config.yaml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 0
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
clusterDomain: cluster.local
healthzBindAddress: 127.0.0.1
healthzPort: 10248
rotateCertificates: true
staticPodPath: /etc/kubernetes/manifests
maxOpenFiles: 1000000
maxPods: 100
clusterDNS:
- 10.96.0.10
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
EOF
```
> Note: set the `clusterDNS` IP address to one inside the service CIDR.
> References: https://github.com/kubernetes/kubelet
> https://kubernetes.io/zh/docs/reference/config-api/kubelet-config.v1beta1/
> https://pkg.go.dev/k8s.io/kubelet/config/v1beta1#KubeletConfiguration
**Create the kubelet systemd unit**
```shell
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/conf/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_NETWORK_ARGS \$KUBELET_DATA_ARGS \$KUBELET_LOG_ARGS \$KUBELET_EXTRA_ARGS
Restart=on-failure
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
```
**Start kubelet**
```shell
systemctl daemon-reload
systemctl enable kubelet.service --now
```
**Verify**
```shell
curl http://localhost:10248/healthz && echo
kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.31.103 NotReady <none> 27s v1.18.18
# label the node as a master node
kubectl label node 192.168.31.103 node-role.kubernetes.io/master=""
# taint nodes with the master role as unschedulable
kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master="":NoSchedule --overwrite
```
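If the node does not appear, check whether its CSR was approved; with the auto-approval bindings from the bootstrap section it should show `Approved,Issued`. A sketch:
```shell
kubectl get csr
# manually approve if needed (<csr-name> is a placeholder for the name shown above)
kubectl certificate approve <csr-name>
```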
**Delete the bootstrap kubeconfig**
```shell
rm -rf /etc/kubernetes/bootstrap-kubelet.conf
```
## Install kube-proxy
**Create the log directory**
```shell
mkdir /var/log/kubernetes/kube-proxy
```
**Copy the binary**
```shell
cp /opt/kubernetes/server/bin/kube-proxy /usr/local/bin/
```
**Generate the kube-proxy certificate**
```shell
cd /etc/kubernetes/pki && openssl genrsa -out proxy.key 2048
cat <<-EOF | sudo tee proxy-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
CN = system:kube-proxy
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
EOF
openssl req -new -key proxy.key -out proxy.csr -config proxy-csr.conf
openssl x509 -req -in proxy.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out proxy.crt -days 36500 \
-extensions v3_ext -extfile proxy-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/proxy.crt
[ $? -eq 0 ] && rm -rf /etc/kubernetes/pki/{proxy.csr,proxy-csr.conf}
```
**Generate the kubeconfig used to connect to the cluster**
```shell
KUBE_APISERVER="https://192.168.31.103:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/etc/kubernetes/proxy.conf
kubectl config set-credentials system:kube-proxy \
--client-certificate=/etc/kubernetes/pki/proxy.crt \
--client-key=/etc/kubernetes/pki/proxy.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/proxy.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-proxy \
--kubeconfig=/etc/kubernetes/proxy.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/proxy.conf
```
**Create the configuration file**
```shell
netcar=`ip r | awk '/default via/ {print $5}'`
[ ! -z $netcar ] && ipaddr=`ip r | awk -v netcar=$netcar '{if($3==netcar) print $9}'` || echo '$netcar is null'
cat > /etc/kubernetes/conf/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: /etc/kubernetes/proxy.conf
hostnameOverride: $ipaddr
clusterCIDR: 10.244.0.0/16
mode: ipvs
ipvs:
  minSyncPeriod: 5s
  syncPeriod: 5s
  scheduler: "rr"
EOF
```
> Note: set `hostnameOverride` to the node's IP address and `clusterCIDR` to the pod CIDR.
> References: https://github.com/kubernetes/kube-proxy
> https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
> https://kubernetes.io/zh/docs/reference/config-api/kube-proxy-config.v1alpha1/
**Create the kube-proxy systemd unit**
```shell
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/conf/kube-proxy-config.yml \\
--alsologtostderr=true --logtostderr=false --v=4 \\
--log-dir=/var/log/kubernetes/kube-proxy
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
```
**Start kube-proxy**
```shell
systemctl daemon-reload
systemctl enable kube-proxy.service --now
```
**Verify**
```shell
curl http://localhost:10249/healthz && echo
```
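Since kube-proxy was configured with `mode: ipvs`, you can also confirm the active proxy mode and inspect the virtual servers it programs. A sketch using tools installed during preparation:
```shell
# kube-proxy reports its active mode on the metrics port
curl http://localhost:10249/proxyMode && echo
# list the IPVS virtual servers (ipvsadm was installed earlier)
ipvsadm -Ln
```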
# Deploy add-ons
## Install Calico
For detailed parameter information, see the [Calico documentation](https://docs.projectcalico.org/about/about-calico)
1. Download the manifest
```shell
mkdir /etc/kubernetes/addons
curl https://docs.projectcalico.org/archive/v3.18/manifests/calico-etcd.yaml -o /etc/kubernetes/addons/calico.yaml
```
2. Generate the Calico etcd client certificate
```shell
pki_dir=/etc/kubernetes/pki/etcd
cd ${pki_dir} && openssl genrsa -out calico-etcd-client.key 2048
cat <<-EOF | sudo tee calico-etcd-client-csr.conf > /dev/null
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
C = CN
ST = Guangdong
L = Guangzhou
CN = calico-etcd-client
[ v3_ext ]
keyUsage=Digital Signature, Key Encipherment
extendedKeyUsage=clientAuth
basicConstraints=CA:FALSE
authorityKeyIdentifier=keyid:always
EOF
openssl req -new -key calico-etcd-client.key -out calico-etcd-client.csr -config calico-etcd-client-csr.conf
openssl x509 -req -in calico-etcd-client.csr -CA /etc/kubernetes/pki/etcd/ca.crt -CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial -out calico-etcd-client.crt -days 36500 \
-extensions v3_ext -extfile calico-etcd-client-csr.conf -sha256
openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt ${pki_dir}/calico-etcd-client.crt
[ $? -eq 0 ] && rm -rf ${pki_dir}/{calico-etcd-client-csr.conf,calico-etcd-client.csr}
```
3. Point Calico at the etcd endpoints
>[info] Update the etcd endpoints
```bash
sed -ri 's@http://<ETCD_IP>:<ETCD_PORT>@https://192.168.31.95:2379,https://192.168.31.78:2379,https://192.168.31.253:2379@g' /etc/kubernetes/addons/calico.yaml
```
4. Configure the etcd certificates for Calico
```bash
ETCD_CA=$(cat /etc/kubernetes/pki/etcd/ca.crt | base64 -w 0)
ETCD_CERT=$(cat /etc/kubernetes/pki/etcd/calico-etcd-client.crt | base64 -w 0)
ETCD_KEY=$(cat /etc/kubernetes/pki/etcd/calico-etcd-client.key | base64 -w 0)
sed -ri "s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" /etc/kubernetes/addons/calico.yaml
sed -ri "s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g" /etc/kubernetes/addons/calico.yaml
sed -ri "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g" /etc/kubernetes/addons/calico.yaml
sed -ri 's@etcd_ca: ""@etcd_ca: "/calico-secrets/etcd-ca"@g' /etc/kubernetes/addons/calico.yaml
sed -ri 's@etcd_cert: ""@etcd_cert: "/calico-secrets/etcd-cert"@g' /etc/kubernetes/addons/calico.yaml
sed -ri 's@etcd_key: ""@etcd_key: "/calico-secrets/etcd-key"@g' /etc/kubernetes/addons/calico.yaml
```
5. Set the Calico pod CIDR
In the calico-node DaemonSet, add the following under `spec.template.spec.containers.env`:
>[info] The default is 192.168.0.0/16
```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
```
6. Set the network interface Calico uses
In the calico-node DaemonSet, add the following under `spec.template.spec.containers.env`:
```yaml
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*|em.*|enp.*"
```
7. Enable Calico metrics
In the calico-node DaemonSet, add the following under `spec.template.spec.containers.env`:
```yaml
- name: FELIX_PROMETHEUSMETRICSENABLED
  value: "True"
- name: FELIX_PROMETHEUSMETRICSPORT
  value: "9091"
```
In the calico-node DaemonSet, add the following under `spec.template.spec.containers`:
```yaml
ports:
- containerPort: 9091
  name: http-metrics
  protocol: TCP
```
**Deploy Calico**
```shell
kubectl apply -f /etc/kubernetes/addons/calico.yaml
```
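Once the calico-node pods are up, the Felix metrics endpoint enabled in step 7 can be probed on any node (assuming port 9091 as configured above):
```shell
# first few Prometheus metrics exposed by Felix
curl -s http://localhost:9091/metrics | head -n 5
```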
**Verify Calico**
```shell
kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-f4c6dbf-tkq77 1/1 Running 1 42h
calico-node-c4ccj 1/1 Running 1 42h
calico-node-crs9k 1/1 Running 1 42h
calico-node-fm697 1/1 Running 1 42h
kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.31.103 Ready master 5d23h v1.18.18
192.168.31.253 Ready <none> 5d23h v1.18.18
192.168.31.78 Ready <none> 5d23h v1.18.18
192.168.31.95 Ready <none> 5d23h v1.18.18
# NOTE: if a node's status is not Ready, wait a while and check again. If it never becomes Ready, check whether the kubelet cni-bin-dir parameter is set; the defaults are /opt/cni/bin and /etc/cni/net.d/
kubectl run busybox --image=jiaxzeng/busybox:1.24.1 sleep 3600
kubectl run nginx --image=nginx
kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 6 42h 20.0.58.194 192.168.31.78 <none> <none>
nginx 1/1 Running 1 42h 20.0.85.194 192.168.31.95 <none> <none>
kubectl exec busybox -- ping 20.0.85.194 -c4
PING 20.0.85.194 (20.0.85.194): 56 data bytes
64 bytes from 20.0.85.194: seq=0 ttl=62 time=0.820 ms
64 bytes from 20.0.85.194: seq=1 ttl=62 time=0.825 ms
64 bytes from 20.0.85.194: seq=2 ttl=62 time=0.886 ms
64 bytes from 20.0.85.194: seq=3 ttl=62 time=0.840 ms
--- 20.0.85.194 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.820/0.842/0.886 ms
```
**If everything works except pinging containers across nodes**
It may be an IP-in-IP tunnel problem. You can manually test whether an IPIP tunnel between the two hosts passes traffic.
```shell
modprobe ipip
ip tunnel add ipip-tunnel mode ipip remote <peer public IP> local <local public IP>
ifconfig ipip-tunnel <virtual IP> netmask 255.255.255.0
```
If the tunnel does not pass traffic either, investigate IPIP connectivity between the hosts. If the VMs were created by OpenStack, you can disable the port-security feature:
```shell
openstack server show <server name>
openstack server remove security group <server name> <security group>
openstack port set --disable-port-security `openstack port list | grep '<host IP>' | awk '{print $2}'`
```
**Install the calicoctl client**
```shell
curl -L https://github.com/projectcalico/calicoctl/releases/download/v3.18.6/calicoctl -o /usr/local/bin/calicoctl
chmod +x /usr/local/bin/calicoctl
```
**Configure calicoctl**
```shell
mkdir -p /etc/calico
cat <<EOF | sudo tee /etc/calico/calicoctl.cfg > /dev/null
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  etcdEndpoints: https://192.168.31.95:2379,https://192.168.31.78:2379,https://192.168.31.253:2379
  etcdKeyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
  etcdCertFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
  etcdCACertFile: /etc/kubernetes/pki/etcd/ca.crt
EOF
```
**Verify**
```shell
$ calicoctl get node -owide
```
## Deploy CoreDNS
**Download the CoreDNS deployment yaml**
```shell
mkdir ~/coredns && cd ~/coredns
# https://github.com/kubernetes/kubernetes/blob/v1.18.18/cluster/addons/dns/coredns/coredns.yaml.sed
cat <<-EOF | sudo tee ~/coredns/coredns.yaml > /dev/null
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes \$DNS_DOMAIN in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns:1.6.5
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: \$DNS_MEMORY_LIMIT
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: \$DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
```
**Adjust parameters**
```shell
vim coredns.yaml
...
kubernetes $DNS_DOMAIN in-addr.arpa ip6.arpa {
...
memory: $DNS_MEMORY_LIMIT
...
clusterIP: $DNS_SERVER_IP
...
image: k8s.gcr.io/coredns:1.6.5
# add pod anti-affinity under deploy.spec.template.spec:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            k8s-app: kube-dns
- Replace `$DNS_DOMAIN` with `cluster.local.`; the default DNS_DOMAIN is cluster.local.
- Replace `$DNS_MEMORY_LIMIT` with an appropriate memory limit.
- Replace `$DNS_SERVER_IP` with the same value as the `clusterDNS` field in the kubelet config.yaml.
- If you cannot reach the public registry, set the image to `coredns/coredns:x.x.x`.
- A single replica is not appropriate for production, so add a `replicas: 3` line under the Deployment's `spec` field.
**Deploy CoreDNS**
```shell
kubectl apply -f coredns.yaml
```
**Verify**
```shell
kubectl get pod -n kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-75d9bd4f59-df94b 1/1 Running 0 7m55s
coredns-75d9bd4f59-kh4rp 1/1 Running 0 7m55s
coredns-75d9bd4f59-vjkpb 1/1 Running 0 7m55s
kubectl run dig --rm -it --image=jiaxzeng/dig:latest /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local.
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
/ # nslookup kube-dns.kube-system.svc.cluster.local.
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kube-dns.kube-system.svc.cluster.local
Address: 10.96.0.10
```
# kubectl command completion
```shell
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
```
# Additional iptables rules
```shell
# ssh service and basic input policy
iptables -t filter -A INPUT -p icmp --icmp-type 8 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 22 -m comment --comment "sshd service" -j ACCEPT
iptables -t filter -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t filter -A INPUT -i lo -j ACCEPT
iptables -t filter -P INPUT DROP
# etcd database
iptables -t filter -I INPUT -p tcp --dport 2379:2381 -m comment --comment "etcd Component ports" -j ACCEPT
# master components
iptables -t filter -I INPUT -p tcp -m multiport --dport 6443,10257,10259 -m comment --comment "k8s master Component ports" -j ACCEPT
# node components
iptables -t filter -I INPUT -p tcp -m multiport --dport 10249,10250,10256 -m comment --comment "k8s node Component ports" -j ACCEPT
# other ports used by k8s
iptables -t filter -I INPUT -p tcp --dport 32768:65535 -m comment --comment "ip_local_port_range ports" -j ACCEPT
iptables -t filter -I INPUT -p tcp --dport 30000:32767 -m comment --comment "k8s service nodeports" -j ACCEPT
# calico ports
iptables -t filter -I INPUT -p tcp -m multiport --dport 179,9099 -m comment --comment "k8s calico Component ports" -j ACCEPT
iptables -t filter -I INPUT -p tcp --dport 9091 -m comment --comment "k8s calico metrics ports" -j ACCEPT
# coredns port
iptables -t filter -I INPUT -p udp -m udp --dport 53 -m comment --comment "k8s coredns ports" -j ACCEPT
# pod-to-service traffic; without this rule coredns fails to start
iptables -t filter -I INPUT -p tcp -s 20.0.0.0/16 -d 10.183.0.0/24 -m comment --comment "pod to service" -j ACCEPT
# log dropped packets to /var/log/messages; filter on the keyword "iptables-drop: "
iptables -t filter -A INPUT -j LOG --log-prefix='iptables-drop: '
```
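These rules live only in memory and are lost on reboot. One common way to persist them on CentOS 7, assuming the iptables-services package is acceptable in your environment:
```shell
yum install -y iptables-services
systemctl enable iptables
# write the current ruleset to /etc/sysconfig/iptables
service iptables save
```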