[TOC]
## Version Notes
| CSI version | Kubernetes version |
| :------: | :------------: |
| 3.5.1 | 1.18.18 |
> For the detailed version compatibility matrix, see the reference article below.
## Steps on the Ceph side
1. Create the `cephfs-metadata` and `cephfs-data` pools
```shell
$ ceph osd pool create cephfs-metadata 64 64
pool 'cephfs-metadata' created
$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
```
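As an optional sanity check (assuming the admin `ceph` CLI is available on the same host), confirm both pools exist before creating the filesystem:
```shell
# list pools and confirm cephfs-metadata / cephfs-data are present
$ ceph osd pool ls | grep cephfs
```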
2. Create the CephFS filesystem
```shell
$ ceph fs new cephfs cephfs-metadata cephfs-data
new fs with metadata pool 7 and data pool 8
```
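A quick, hedged verification that the filesystem was created and is backed by the two pools:
```shell
# list filesystems and the pools backing them
$ ceph fs ls
# MDS and pool status for the new filesystem
$ ceph fs status cephfs
```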
3. Gather the Ceph cluster information
```shell
$ ceph mon dump
epoch 2
fsid b87d2535-406b-442d-8de2-49d86f7dc599
last_changed 2022-06-15T17:35:37.096336+0800
created 2022-06-15T17:35:05.828763+0800
min_mon_release 15 (octopus)
0: [v2:192.168.31.69:3300/0,v1:192.168.31.69:6789/0] mon.ceph01
1: [v2:192.168.31.102:3300/0,v1:192.168.31.102:6789/0] mon.ceph02
2: [v2:192.168.31.165:3300/0,v1:192.168.31.165:6789/0] mon.ceph03
dumped monmap epoch 2
```
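The `fsid` above becomes the `clusterID` used later in the ConfigMap and StorageClass, and the admin key is what goes into the Secret in the next section. Both values can also be pulled directly:
```shell
# cluster fsid -> clusterID in 3.csi-config-map.yml and 6.storageclass.yml
$ ceph fsid
b87d2535-406b-442d-8de2-49d86f7dc599
# admin key -> userKey/adminKey in 0.csi-cephfs-secret.yml (no base64 needed)
$ ceph auth get-key client.admin
```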
## Deploying ceph-csi on Kubernetes
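All of the manifests below target the `kube-storage` namespace. Its creation is not shown in these steps, so, as an assumption, create it first if it does not already exist:
```shell
# one-time setup; skip if the namespace already exists
$ kubectl create namespace kube-storage
```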
1. Create the Secret for accessing Ceph
```shell
$ cat << EOF | sudo tee 0.csi-cephfs-secret.yml >> /dev/null
apiVersion: v1
kind: Secret
metadata:
name: csi-cephfs-secret
namespace: kube-storage
stringData:
# Required for statically provisioned volumes
# obtained via: ceph auth get-key client.admin (no base64 encoding needed)
userID: admin
userKey: AQDmp6lihkf5FxAA809mwZ32rx6tKiihamkh0g=
# Required for dynamically provisioned volumes
# obtained via: ceph auth get-key client.admin (no base64 encoding needed)
adminID: admin
adminKey: AQDmp6lihkf5FxAA809mwZ32rx6tKiihamkh0g=
EOF
$ kubectl apply -f 0.csi-cephfs-secret.yml
secret/csi-cephfs-secret created
```
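Optionally confirm that all four keys landed in the Secret (only byte sizes are shown, not the values):
```shell
$ kubectl -n kube-storage describe secret csi-cephfs-secret
```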
2. Create the RBAC permissions
```shell
$ cat << EOF | sudo tee 1.csi-provisioner-rbac.yml >> /dev/null
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cephfs-csi-provisioner
namespace: kube-storage
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cephfs-external-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments/status"]
verbs: ["patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents/status"]
verbs: ["update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cephfs-csi-provisioner-role
subjects:
- kind: ServiceAccount
name: cephfs-csi-provisioner
namespace: kube-storage
roleRef:
kind: ClusterRole
name: cephfs-external-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# replace with non-default namespace name
namespace: kube-storage
name: cephfs-external-provisioner-cfg
rules:
# remove this once we stop supporting v1.0.0
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cephfs-csi-provisioner-role-cfg
# replace with non-default namespace name
namespace: kube-storage
subjects:
- kind: ServiceAccount
name: cephfs-csi-provisioner
# replace with non-default namespace name
namespace: kube-storage
roleRef:
kind: Role
name: cephfs-external-provisioner-cfg
apiGroup: rbac.authorization.k8s.io
EOF
$ cat << EOF | sudo tee 2.csi-nodeplugin-rbac.yml >> /dev/null
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cephfs-csi-nodeplugin
namespace: kube-storage
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cephfs-csi-nodeplugin
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cephfs-csi-nodeplugin
subjects:
- kind: ServiceAccount
name: cephfs-csi-nodeplugin
namespace: kube-storage
roleRef:
kind: ClusterRole
name: cephfs-csi-nodeplugin
apiGroup: rbac.authorization.k8s.io
EOF
$ kubectl apply -f 1.csi-provisioner-rbac.yml
serviceaccount/cephfs-csi-provisioner created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
$ kubectl apply -f 2.csi-nodeplugin-rbac.yml
serviceaccount/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
```
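A quick spot check of the bindings is to impersonate the provisioner ServiceAccount; per the ClusterRole above it should be allowed to create PersistentVolumes:
```shell
# expected answer: yes
$ kubectl auth can-i create persistentvolumes \
    --as=system:serviceaccount:kube-storage:cephfs-csi-provisioner
```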
3. Create the Ceph configuration files
```shell
$ cat << EOF | sudo tee 3.csi-config-map.yml >> /dev/null
---
apiVersion: v1
kind: ConfigMap
data:
config.json: |-
[
{
"clusterID": "b87d2535-406b-442d-8de2-49d86f7dc599",
"monitors": [
"192.168.31.69:6789",
"192.168.31.102:6789",
"192.168.31.165:6789"
]
}
]
metadata:
name: ceph-csi-config
namespace: kube-storage
EOF
$ kubectl apply -f 3.csi-config-map.yml
configmap/ceph-csi-config created
$ kubectl -n kube-storage create configmap ceph-config --from-file=/etc/ceph/ceph.conf --from-file=keyring=/etc/ceph/ceph.client.kubernetes.keyring
configmap/ceph-config created
```
> Fill in `clusterID` and `monitors` using the output of the Ceph-side steps above (`ceph mon dump`).
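Both ConfigMaps should now exist in the `kube-storage` namespace:
```shell
$ kubectl -n kube-storage get configmap ceph-csi-config ceph-config
```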
4. Create the ceph-csi provisioner
```shell
$ cat << EOF | sudo tee 4.csi-cephfsplugin-provisioner.yml >> /dev/null
---
kind: Service
apiVersion: v1
metadata:
name: csi-cephfsplugin-provisioner
namespace: kube-storage
labels:
app: csi-metrics
spec:
selector:
app: csi-cephfsplugin-provisioner
ports:
- name: http-metrics
port: 8080
protocol: TCP
targetPort: 8681
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: csi-cephfsplugin-provisioner
namespace: kube-storage
spec:
selector:
matchLabels:
app: csi-cephfsplugin-provisioner
replicas: 3
template:
metadata:
labels:
app: csi-cephfsplugin-provisioner
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- csi-cephfsplugin-provisioner
topologyKey: "kubernetes.io/hostname"
serviceAccountName: cephfs-csi-provisioner
priorityClassName: system-cluster-critical
containers:
- name: csi-provisioner
image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
args:
- "--csi-address=\$(ADDRESS)"
- "--v=5"
- "--timeout=150s"
- "--leader-election=true"
- "--retry-interval-start=500ms"
- "--feature-gates=Topology=false"
- "--extra-create-metadata=true"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-resizer
image: k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
args:
- "--csi-address=\$(ADDRESS)"
- "--v=5"
- "--timeout=150s"
- "--leader-election"
- "--retry-interval-start=500ms"
- "--handle-volume-inuse-error=false"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-snapshotter
image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0
args:
- "--csi-address=\$(ADDRESS)"
- "--v=5"
- "--timeout=150s"
- "--leader-election=true"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-cephfsplugin-attacher
image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
args:
- "--v=5"
- "--csi-address=\$(ADDRESS)"
- "--leader-election=true"
- "--retry-interval-start=500ms"
env:
- name: ADDRESS
value: /csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-cephfsplugin
# for stable functionality replace canary with latest release version
image: quay.io/cephcsi/cephcsi:v3.5.1
args:
- "--nodeid=\$(NODE_ID)"
- "--type=cephfs"
- "--controllerserver=true"
- "--endpoint=\$(CSI_ENDPOINT)"
- "--v=5"
- "--drivername=cephfs.csi.ceph.com"
- "--pidlimit=-1"
- "--enableprofiling=false"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: host-sys
mountPath: /sys
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: host-dev
mountPath: /dev
- name: ceph-config
mountPath: /etc/ceph/
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
- name: liveness-prometheus
image: quay.io/cephcsi/cephcsi:v3.5.1
args:
- "--type=liveness"
- "--endpoint=\$(CSI_ENDPOINT)"
- "--metricsport=8681"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi-provisioner.sock
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
imagePullPolicy: "IfNotPresent"
volumes:
- name: socket-dir
emptyDir: {
medium: "Memory"
}
- name: host-sys
hostPath:
path: /sys
- name: lib-modules
hostPath:
path: /lib/modules
- name: host-dev
hostPath:
path: /dev
- name: ceph-config
configMap:
name: ceph-config
- name: ceph-csi-config
configMap:
name: ceph-csi-config
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
EOF
$ kubectl apply -f 4.csi-cephfsplugin-provisioner.yml
service/csi-cephfsplugin-provisioner created
deployment.apps/csi-cephfsplugin-provisioner created
$ kubectl -n kube-storage get pod -l app=csi-cephfsplugin-provisioner
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-provisioner-b4fff46dc-dzcm6 6/6 Running 0 2m58s
csi-cephfsplugin-provisioner-b4fff46dc-qp67j 6/6 Running 0 2m58s
csi-cephfsplugin-provisioner-b4fff46dc-r7cs9 6/6 Running 0 2m58s
```
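If any pod does not become Ready, the sidecar logs usually point at the cause; the container names below match the Deployment above:
```shell
$ kubectl -n kube-storage logs deploy/csi-cephfsplugin-provisioner -c csi-provisioner --tail=20
$ kubectl -n kube-storage logs deploy/csi-cephfsplugin-provisioner -c csi-cephfsplugin --tail=20
```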
5. Create the ceph-csi node plugin
```shell
$ cat << EOF | sudo tee 5.csi-cephfsplugin.yml >> /dev/null
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-cephfsplugin
namespace: kube-storage
spec:
selector:
matchLabels:
app: csi-cephfsplugin
template:
metadata:
labels:
app: csi-cephfsplugin
spec:
serviceAccountName: cephfs-csi-nodeplugin
priorityClassName: system-node-critical
hostNetwork: true
# to use e.g. Rook orchestrated cluster, and mons' FQDN is
# resolved through k8s service, set dns policy to cluster first
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: driver-registrar
# This is necessary only for systems with SELinux, where
# non-privileged sidecar containers cannot access unix domain socket
# created by privileged CSI driver container.
securityContext:
privileged: true
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.4.0
args:
- "--v=5"
- "--csi-address=/csi/csi.sock"
- "--kubelet-registration-path=/var/lib/kubelet/plugins/cephfs.csi.ceph.com/csi.sock"
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
- name: csi-cephfsplugin
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
# for stable functionality replace canary with latest release version
image: quay.io/cephcsi/cephcsi:v3.5.1
args:
- "--nodeid=\$(NODE_ID)"
- "--type=cephfs"
- "--nodeserver=true"
- "--endpoint=\$(CSI_ENDPOINT)"
- "--v=5"
- "--drivername=cephfs.csi.ceph.com"
- "--enableprofiling=false"
# If topology based provisioning is desired, configure required
# node labels representing the nodes topology domain
# and pass the label names below, for CSI to consume and advertise
# its equivalent topology domain
# - "--domainlabels=failure-domain/region,failure-domain/zone"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: mountpoint-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: Bidirectional
- name: plugin-dir
mountPath: /var/lib/kubelet/plugins
mountPropagation: "Bidirectional"
- name: host-sys
mountPath: /sys
- name: etc-selinux
mountPath: /etc/selinux
readOnly: true
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: host-dev
mountPath: /dev
- name: host-mount
mountPath: /run/mount
- name: ceph-config
mountPath: /etc/ceph/
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
- name: liveness-prometheus
securityContext:
privileged: true
image: quay.io/cephcsi/cephcsi:v3.5.1
args:
- "--type=liveness"
- "--endpoint=\$(CSI_ENDPOINT)"
- "--metricsport=8681"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
imagePullPolicy: "IfNotPresent"
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/cephfs.csi.ceph.com/
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry/
type: Directory
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet/pods
type: DirectoryOrCreate
- name: plugin-dir
hostPath:
path: /var/lib/kubelet/plugins
type: Directory
- name: host-sys
hostPath:
path: /sys
- name: etc-selinux
hostPath:
path: /etc/selinux
- name: lib-modules
hostPath:
path: /lib/modules
- name: host-dev
hostPath:
path: /dev
- name: host-mount
hostPath:
path: /run/mount
- name: ceph-config
configMap:
name: ceph-config
- name: ceph-csi-config
configMap:
name: ceph-csi-config
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
name: csi-metrics-cephfsplugin
namespace: kube-storage
labels:
app: csi-metrics
spec:
ports:
- name: http-metrics
port: 8080
protocol: TCP
targetPort: 8681
selector:
app: csi-cephfsplugin
EOF
$ kubectl apply -f 5.csi-cephfsplugin.yml
daemonset.apps/csi-cephfsplugin created
service/csi-metrics-cephfsplugin created
$ kubectl -n kube-storage get pod -l app=csi-cephfsplugin
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-9z8jl 3/3 Running 0 57s
csi-cephfsplugin-jtgwt 3/3 Running 0 57s
csi-cephfsplugin-twzpw 3/3 Running 0 57s
csi-cephfsplugin-xxm6x 3/3 Running 0 57s
csi-cephfsplugin-zjjh5 3/3 Running 0 57s
```
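Once the DaemonSet pods are running, the driver should be registered with kubelet on every node; a hedged spot check via the CSINode objects:
```shell
# each node should list cephfs.csi.ceph.com among its drivers
$ kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.drivers[*].name}{"\n"}{end}'
```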
If the kubelet data directory has been customized, adjust the host paths in the manifest before applying it.
For example, if the kubelet data directory is `/data/k8s/data/kubelet`, run `sed -ri 's#/var/lib/kubelet#/data/k8s/data/kubelet#g' 5.csi-cephfsplugin.yml` to update the manifest; a quick way to check which directory kubelet is using is shown below.
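A minimal check for the kubelet data directory, assuming kubelet is started with the `--root-dir` flag (no output means the default `/var/lib/kubelet` is in use, or the path is set in the unit/config file instead):
```shell
$ pgrep -a kubelet | grep -o -- '--root-dir=[^ ]*'
```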
6. Create the StorageClass for dynamic provisioning
```shell
$ cat << EOF | sudo tee 6.storageclass.yml >> /dev/null
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
clusterID: b87d2535-406b-442d-8de2-49d86f7dc599
fsName: cephfs
csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
csi.storage.k8s.io/provisioner-secret-namespace: kube-storage
csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
csi.storage.k8s.io/controller-expand-secret-namespace: kube-storage
csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
csi.storage.k8s.io/node-stage-secret-namespace: kube-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- debug
EOF
$ kubectl apply -f 6.storageclass.yml
storageclass.storage.k8s.io/csi-cephfs-sc created
```
> Note: change the `clusterID` field to your cluster's fsid.
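Confirm the StorageClass is registered and that volume expansion is enabled:
```shell
$ kubectl get sc csi-cephfs-sc
```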
## Verification
Create a 1Gi PVC:
```shell
$ cat << EOF | sudo tee 7.cephfs-pvc.yaml >> /dev/null
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: csi-cephfs-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: csi-cephfs-sc
EOF
$ kubectl apply -f 7.cephfs-pvc.yaml
persistentvolumeclaim/csi-cephfs-pvc created
$ kubectl get pvc csi-cephfs-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-cephfs-pvc Bound pvc-a603e082-2bef-4696-9a22-222be55d0d46 1Gi RWX csi-cephfs-sc 46s
```
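To confirm the volume actually mounts, a minimal test Pod can be run against the PVC; the Pod name and image here are illustrative assumptions, not part of the original steps:
```shell
$ cat << EOF | kubectl apply -f -
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-cephfs-test-pod
spec:
  containers:
  - name: app
    image: busybox:1.35
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: csi-cephfs-pvc
EOF
```
Once the Pod is Running, `/data` is backed by CephFS; because the access mode is RWX, the same PVC can be mounted by Pods on multiple nodes.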
## Attachments
Link: https://pan.baidu.com/s/1wXSMC8-yoJfCKjcTLR-AbQ
Extraction code: gcuj
## References
ceph-csi: https://github.com/ceph/ceph-csi/tree/v3.5.1/deploy/cephfs/kubernetes