### kube-apiserver High Availability
Install kube-apiserver, kube-controller-manager, and kube-scheduler on master01 and master02 as described above. At this point, however, we are still specifying ports 6443 and 8080 by hand when talking to the apiserver, because the domain k8s-api.virtual.local, which points to master01, cannot yet be reached directly over HTTP and HTTPS. Here we use HAProxy to proxy those requests.
> In other words, we need to forward requests arriving on the default HTTP port 80 to the apiserver's port 8080, and requests arriving on the default HTTPS port 443 to the apiserver's port 6443, so we use HAProxy to do this request forwarding.
#### Install HAProxy
```shell
$ yum install -y haproxy
```
#### Configure HAProxy
```shell
# HTTPS: forward port 443 to the apiservers' secure port 6443
frontend k8s-api
    bind 192.168.10.55:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-api-1 192.168.10.55:6443 check
    server k8s-api-2 192.168.10.56:6443 check

# HTTP: forward port 80 to the apiservers' insecure port 8080
frontend k8s-http-api
    bind 192.168.10.55:80
    mode tcp
    option tcplog
    default_backend k8s-http-api

backend k8s-http-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-http-api-1 192.168.10.55:8080 check
    server k8s-http-api-2 192.168.10.56:8080 check
```
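Before starting the service, you can have HAProxy validate the configuration file itself (the path below assumes the default location used by the CentOS package):
```shell
$ haproxy -c -f /etc/haproxy/haproxy.cfg
```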
#### Start HAProxy
```shell
$ sudo systemctl start haproxy
$ sudo systemctl enable haproxy
$ sudo systemctl status haproxy
```
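To confirm that HAProxy is actually listening on the forwarded ports, one option is:
```shell
$ sudo ss -tlnp | grep haproxy
```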
> We can then monitor HAProxy's running state through the stats page on port 9000 (192.168.10.55:9000/stats):
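Note that the configuration above does not define anything on port 9000 yet; a minimal stats listener you could append to haproxy.cfg might look like the following sketch (the bind address and URI are assumptions, adjust as needed):
```shell
listen stats
    bind *:9000
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
```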

#### Install Keepalived
> Keepalived is a high-availability solution that relies on a VIP (virtual IP) and heartbeat detection. It works with a pair of servers assigned the roles Master and Backup; by default the Master binds the VIP to its own NIC and serves traffic. Master and Backup send each other heartbeat packets at a fixed interval (usually about 2 seconds) to detect each other's state. If the Backup finds that the Master is down, it sends ARP packets to the gateway and binds the VIP to its own NIC, taking over the service and achieving automatic failover; when the Master recovers, it takes the service back. This is very similar to the Virtual Router Redundancy Protocol (VRRP) used by routers.
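The package installation itself is not shown above; on CentOS it comes straight from the base repositories:
```shell
$ yum install -y keepalived
```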
**Enable route forwarding; the virtual IP we will use is 192.168.10.69**
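The note above asks for route forwarding to be enabled but does not show the command; a minimal sketch using sysctl, to be run on both masters (the drop-in file name is an arbitrary choice):
```shell
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s-ha.conf
net.ipv4.ip_forward = 1
EOF
$ sudo sysctl --system
```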
```shell
$ vi /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
    }
    router_id kube_api
}

# Health-check script: run every 3s; a successful check adds 5 to this node's priority
vrrp_script check_k8s {
    script "/etc/keepalived/chk_k8s_master.sh"
    interval 3
    weight 5
}

vrrp_instance APISERVER {
    # Unicast VRRP between the two masters (this node is 192.168.10.55, its peer 192.168.10.56)
    unicast_src_ip 192.168.10.55
    unicast_peer {
        192.168.10.56
    }
    state BACKUP
    interface enp6s0
    virtual_router_id 41
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    # The VIP that fronts the apiservers
    virtual_ipaddress {
        192.168.10.69 dev enp6s0 label enp6s0:vip
    }
    track_script {
        check_k8s
    }
}
```
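The track_script block references /etc/keepalived/chk_k8s_master.sh, which is not listed in this section; a minimal example is sketched below, where the check logic (testing whether anything is listening on the local apiserver port 6443) is an assumption:
```shell
$ cat <<'EOF' | sudo tee /etc/keepalived/chk_k8s_master.sh
#!/bin/bash
# Exit 0 when the local kube-apiserver is listening on 6443, so keepalived
# adds the script's weight (5) to this node's priority; exit 1 otherwise.
ss -tln | grep -q ':6443'
EOF
$ sudo chmod +x /etc/keepalived/chk_k8s_master.sh
```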
**Start Keepalived**
```shell
$ systemctl start keepalived
$ systemctl enable keepalived
# view the logs
$ journalctl -f -u keepalived
```
**Verify that the virtual IP is configured correctly** (on whichever node currently holds the VIP, `ip addr` should additionally show 192.168.10.69 labelled enp6s0:vip):
```shell
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:60:6e:46:7a:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.55/24 brd 192.168.10.255 scope global enp6s0
       valid_lft forever preferred_lft forever
    inet6 fe80::58c6:152e:edc0:4c4c/64 scope link
       valid_lft forever preferred_lft forever
3: enp7s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 08:60:6e:46:7a:c1 brd ff:ff:ff:ff:ff:ff
```