# Instructor: Zhang Changzhi (張長志)
Instructor for blockchain and big-data projects and a Java developer, with over ten years of software development and corporate training experience; has delivered in-house training for many large enterprises.
Areas of expertise:
- Python
- Java: SSM, Spring Boot, Spring Cloud, and the wider Java stack
- Big data: Hadoop, HDFS, MapReduce, HBase, Kafka, Spark, CDH 5.3.x clusters

Over ten years of software development and corporate training experience, with rich enterprise application development experience and a solid theoretical foundation and practical ability in software architecture design.
Has provided corporate training for well-known enterprises such as Sinopec, China Unicom, and China Mobile.
Project history: a recommendation system built on big-data technology, e-commerce big-data analysis and statistical inference, an H5 cross-platform app, a telecom system, and a Storm- and ZooKeeper-like framework implemented in Go.
# 1. A quick visual overview of k8s

# 2. System architecture

# 3. Components and their functions
## 3.1 Master (control-plane node)
- kube-apiserver
The Kubernetes API server is the cluster's single entry point and the coordinator of all the other components. It exposes its services as an HTTP API; every create/read/update/delete and every watch on resource objects goes through the apiserver, which then persists the state to etcd (see the sketch after this list).
- kube-controller-manager
Handles the cluster's routine background work. Each resource type has its own controller, and the controller-manager is in charge of managing those controllers.
- kube-scheduler
Picks a node for each newly created pod according to the scheduling algorithm.
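Since kube-apiserver is the single entry point, you can talk to it directly over HTTP. A minimal sketch, assuming the insecure port 8080 that this guide configures later:
```
# list the API versions the server supports
curl http://k8s-master:8080/api
# list all nodes registered with the apiserver
curl http://k8s-master:8080/api/v1/nodes
```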
## 3.2 Worker node
+ kubelet
The kubelet is the master's agent on each worker node; it manages the lifecycle of the containers running on that machine: creating containers, mounting volumes into pods, downloading secrets, and reporting container and node status. The kubelet turns each pod into a set of containers (a quick health check follows this list).
+ kube-proxy
Implements the pod network proxy on the worker node and maintains the network rules and layer-4 load balancing.
+ docker engine
Runs the containers.
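A quick way to check that the node agents are alive (a sketch; 10248 is the kubelet's default healthz port):
```
systemctl status kubelet kube-proxy   # both should be active (running)
curl http://localhost:10248/healthz   # the kubelet answers "ok" when healthy
```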
## 3.3 Third-party services
+ etcd
A distributed key/value store that holds the cluster's state, such as pod and service objects (a sketch of inspecting it follows).
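You can inspect what Kubernetes writes to etcd directly. A sketch using the v2 etcdctl commands this guide uses elsewhere, assuming the apiserver keeps its data under the default /registry prefix:
```
etcdctl ls /registry        # top-level Kubernetes keys (pods, services, ...)
etcdctl ls /registry/pods   # one entry per namespace
```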
# Uninstalling k8s
```
# remove all nodes from the cluster first, while the apiserver is still running
kubectl delete node --all

# remove kubeconfig and the Kubernetes configuration
rm -rf ~/.kube/
rm -rf /etc/kubernetes/

# remove the kubelet systemd units and the Kubernetes binaries
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*

# remove CNI plugins and their configuration
rm -rf /etc/cni
rm -rf /opt/cni

# remove the etcd data directories
rm -rf /var/lib/etcd
rm -rf /var/etcd

# uninstall Docker and all of its package variants
yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-selinux \
    docker-engine-selinux \
    docker-engine

# remove Docker configuration and data
rm -rf /etc/systemd/system/docker.service.d
rm -rf /var/lib/docker
rm -rf /var/run/docker
```
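After the cleanup it is worth confirming that nothing was left behind (a quick sketch; note that the etcd package itself is not removed above, only its data):
```
which kubelet kubectl 2>/dev/null              # should print nothing
ls /var/lib/docker /var/lib/etcd 2>/dev/null   # should report "No such file or directory"
systemctl status docker                        # should report the unit is not found
```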
# Deploying a k8s cluster
## 1. Cluster environment
OS: CentOS 7.4
This walkthrough uses three machines for the k8s environment: one master and two nodes, as listed in the table below.
| Role | Hostname | IP |
| ---------------------- | ---------- | -------------- |
| master, etcd, registry | k8s-master | 192.168.28.251 |
| node 1 | k8s-node-1 | 192.168.28.252 |
| node 2 | k8s-node-2 | 192.168.28.253 |
## 2. Notes
Kubernetes works in a server-client model: the Kubernetes master provides centralized management of the minions (nodes).
The cluster is built from the following components:
- etcd: a highly available key/value store and service-discovery system
- flannel: cross-host container networking
- kube-apiserver: exposes the Kubernetes cluster API
- kube-controller-manager: keeps the cluster's services in their desired state
- kube-scheduler: schedules containers onto nodes
- kubelet: starts containers on its node according to the rules in its configuration file
- kube-proxy: provides the network proxy service
## 3. Set the hostnames on the three machines
First set up passwordless SSH login from the master to both nodes (a minimal sketch, assuming root access):
```
ssh-keygen -t rsa              # on the master, accept the defaults
ssh-copy-id root@k8s-node-1    # copy the public key to each node
ssh-copy-id root@k8s-node-2
```
On the master, run:
```
hostnamectl --static set-hostname k8s-master
```
On the slaves, run (each node gets its own name):
```
hostnamectl --static set-hostname k8s-node-1   # on node 1
hostnamectl --static set-hostname k8s-node-2   # on node 2
```
## 4. Edit the hosts file on the master and slaves
Add the following to `/etc/hosts` on both the master and the slaves:
```
192.168.28.251 etcd
192.168.28.251 registry
192.168.28.251 k8s-master
192.168.28.252 k8s-node-1
192.168.28.253 k8s-node-2
```
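A quick check that every name resolves and responds from each machine (a sketch):
```
for h in etcd registry k8s-master k8s-node-1 k8s-node-2; do
  ping -c 1 "$h" >/dev/null 2>&1 && echo "$h ok" || echo "$h FAILED"
done
```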
## 5. Disable the firewall and SELinux
```
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
```
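`setenforce 0` only lasts until the next reboot. To keep SELinux permissive permanently, also edit `/etc/selinux/config` (a sketch):
```
# persist the SELinux mode across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```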
## 6. Install the epel-release repository
```
yum -y install epel-release
```
# Deploying the master
## 1. Install etcd with yum
The etcd service is the primary datastore of the Kubernetes cluster, so install and start it before any of the Kubernetes services.
```
yum -y install etcd
```
## 2. Edit the /etc/etcd/etcd.conf file
```
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#ETCD_ENABLE_V2="true"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[auth]
#ETCD_AUTH_TOKEN="simple"
```
- The key changes:
```
ETCD_NAME=master
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
```
## 3. Start the etcd service
```
systemctl start etcd
```
## 4. Verify
```
# write and read back a test key
etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0
# then check the cluster health on both client ports:
etcdctl -C http://etcd:2379 cluster-health
etcdctl -C http://etcd:4001 cluster-health
```
## 5. Install Docker
```
yum install docker -y
```
### 5.1 Allow pulling images from the registry
```
vi /etc/sysconfig/docker
OPTIONS='--insecure-registry registry:5000'
```
### 5.2 Enable at boot and start the service
```
systemctl enable docker
systemctl start docker
```
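To confirm the daemon is up and the insecure-registry option took effect (a sketch; the exact `docker info` layout varies by version):
```
systemctl is-active docker   # should print "active"
docker info                  # registry:5000 should appear under "Insecure Registries"
```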
## 6. Install Kubernetes
```
yum install kubernetes -y
```
The Kubernetes master needs to run the following components:
```
kubernetes API Server
kubernetes Controller Manager
kubernetes scheduler
```
Accordingly, change the following configuration files:
```
vi /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
```
The key lines to modify are:
```
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
```
Edit `/etc/kubernetes/config`:
```
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
```
## 6.1 Start Kubernetes
Start the master services:
```
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
```
- Enable the k8s components at boot:
```
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service
```
# Deploying the slave nodes
The slave nodes need the following components installed:
- docker
- kubernetes
- flannel
## 1. Install Docker
```
yum install -y docker
```
```
vi /etc/sysconfig/docker
OPTIONS='--insecure-registry registry:5000'
```
Enable Docker at boot and start it:
```
systemctl enable docker
systemctl start docker
```
## 2. Install, configure, and start Kubernetes (on every slave node)
```
yum install -y kubernetes
```
Each Kubernetes slave runs the following components:
```
Kubelet
Kubernetes proxy
```
Accordingly, change the following configuration files:
Edit `/etc/kubernetes/config`:
```
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
```
The key line to modify is:
```
KUBE_MASTER="--master=http://k8s-master:8080"
```
- Edit `/etc/kubernetes/kubelet`:
```
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
```
```
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"   # use each node's own hostname here
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
```
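Since `--hostname-override` must differ on every node, one way to avoid hand-editing each slave is to substitute the machine's own hostname (a sketch, assuming the line already exists in the file):
```
sed -i "s|--hostname-override=.*\"|--hostname-override=$(hostname)\"|" /etc/kubernetes/kubelet
```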
## 3. Start the services
```
systemctl start kubelet.service
systemctl start kube-proxy.service
```
## 4. Enable the services at boot
```
systemctl enable kubelet.service
systemctl enable kube-proxy.service
```
## 5. Check the status
On the master, list the cluster's nodes and their status:
```
kubectl -s http://k8s-master:8080 get nodes
```

Or, on the master itself, simply:
```
kubectl get nodes
```

# Creating the overlay network with flannel
**Installing flannel**
- Run the following on the master and on every node:
```
yum install flannel -y
```
- Configure flannel via `/etc/sysconfig/flanneld`:
```
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
```
- Set flannel's network key in etcd (run on the master, where etcd is):
```
etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
# etcdctl rm <key>              deletes a key
# etcdctl update <key> <value>  updates a key (useful to refresh the network config while testing)
```
- Start flannel and enable it at boot:
```
systemctl start flanneld.service
systemctl enable flanneld.service
```
- On each minion node, flannel fetches the network configuration from etcd at startup, allocates a subnet for the local node (which it also stores back in etcd), and writes the `/run/flannel/subnet.env` file:
```
FLANNEL_NETWORK=10.0.0.0/16   # the global flannel network
FLANNEL_SUBNET=10.0.15.1/24   # this node's flannel subnet
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
```
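To confirm flannel is healthy on a node, check the file above, the tunnel interface, and the leases recorded in etcd (a sketch; the interface is flannel0 with the default UDP backend, but the name depends on the backend):
```
cat /run/flannel/subnet.env
ip addr show flannel0
etcdctl ls /atomic.io/network/subnets   # one lease per node (run where etcd is reachable)
```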
## Start everything in order
Once flannel is running, docker and the Kubernetes services must be restarted in turn.
On the master, run:
```
systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kube-apiserver
systemctl restart kube-scheduler
systemctl restart kube-controller-manager
```
On each node, run:
```
systemctl start flanneld && systemctl enable flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy
```
# Verification
- List the endpoints: `kubectl get endpoints`
```
kubectl get endpoints
```

- View the cluster info: `kubectl cluster-info`

- Get the status of the nodes in the cluster: `kubectl get nodes`

- Check the component status:
```
kubectl get componentstatuses   # short form: kubectl get cs
```

# Example: deploying nginx
Optionally, point Docker at a registry mirror to speed up image pulls (for example, add `--registry-mirror=https://registry.docker-cn.com` to the Docker daemon's OPTIONS).
```
kubectl run nginx --image=nginx --replicas=3
kubectl get pod
kubectl get pod -o wide   # shows which node each pod is running on
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
kubectl get svc nginx
```
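Because the service is of type NodePort, it is also reachable on a high port on every node. A sketch for finding and testing that port (assuming this kubectl version supports jsonpath output):
```
NODEPORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://k8s-node-1:$NODEPORT   # should return the nginx welcome page
```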
If the service is not reachable, the workaround below (from <https://blog.csdn.net/weixin_34346099/article/details/87525499>) can help:
```
setenforce 0                 # make SELinux permissive
iptables --flush             # flush the filter table
iptables -t nat --flush      # flush the nat table
service docker restart       # let docker recreate its iptables chains
iptables -P FORWARD ACCEPT   # allow forwarded pod/service traffic
```
Test it from one of our worker nodes with `curl 10.254.90.252:88` (use the CLUSTER-IP reported by `kubectl get svc nginx`):
```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Note: if `kubectl get pod` shows the pods failing to start, `kubectl describe pod` may report the following error:
```
/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory
```
To fix the error (see <https://www.cnblogs.com/lexiaofei/p/k8s.html> for reference), install the rhsm packages:
```
yum install *rhsm* -y
```
```
# if the certificate is still missing, extract redhat-uep.pem from the rpm directly:
yum install -y wget
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest   # verify that the pause image now pulls
```
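Once the certificate is in place, pods that were stuck pulling the pod-infrastructure image should recover; deleting them forces an immediate retry (a sketch, assuming the pods carry the run=nginx label that `kubectl run nginx` applies):
```
kubectl delete pod -l run=nginx   # the deployment recreates the pods
kubectl get pod -o wide           # they should now reach Running
```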