<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                ThinkChat2.0新版上線,更智能更精彩,支持會話、畫圖、視頻、閱讀、搜索等,送10W Token,即刻開啟你的AI之旅 廣告
# Master Node High Availability

Author: [mendickxiao](https://github.com/mendickxiao)

After the chapter on deploying a Kubernetes cluster, we can already stand up a cluster for development and testing. Before going to production, however, we have to address high availability of the master node: the services running on the master (`kube-apiserver`, `kube-scheduler` and `kube-controller-manager`) are all single points, and they all sit on the same node. If that node goes down, applications that are already running keep serving, but the cluster can no longer be changed. This article walks you through building a highly available master.

gzmzj's excellent ansible playbook for creating Kubernetes clusters already covers configuring multiple masters, but in practice I still hit quite a few pitfalls that had to be filled in before it worked. The guide is here: [Cluster planning and basic parameter settings](https://github.com/mendickxiao/kubeasz/blob/master/docs/00-%E9%9B%86%E7%BE%A4%E8%A7%84%E5%88%92%E5%92%8C%E5%9F%BA%E7%A1%80%E5%8F%82%E6%95%B0%E8%AE%BE%E5%AE%9A.md).

As described there, HA is implemented with keepalived + haproxy: keepalived provides a VIP that fronts all of the master nodes, and haproxy does the port forwarding. Since the VIP still lives on one of the master machines, and the API server already listens on port 6443 by default, we need to attach a different port to the VIP, typically 8443.

![Master HA architecture diagram](https://box.kancloud.cn/17080750d7e9db744458223b8afa318b_717x508.JPG)

Following that guide, I found that keepalived and haproxy have to be installed manually on each master:

```bash
yum install keepalived
yum install haproxy
```

HAProxy's default `balance` setting has to be changed from `source` to `roundrobin`. The configuration file `haproxy.cfg` lives at `/etc/haproxy/haproxy.cfg` by default. You also need to create the `/run/haproxy` directory by hand, otherwise haproxy will fail to start.

**Note**

- `bind` is the VIP-facing port, here 8443.
- `balance` selects the load-balancing algorithm; we use `roundrobin`. The default is `source`, which did not work in my tests.
- Each `server` line names an actual master node address and the port it really serves on, here 6443. Add one line per master node.

```ini
# haproxy.cfg sample
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log global
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance roundrobin
    server s1 <Master 1 IP>:6443 check inter 10000 fall 2 rise 2 weight 1
    server s2 <Master 2 IP>:6443 check inter 10000 fall 2 rise 2 weight 1
```

Next, edit keepalived's configuration and set the correct VIP. The configuration file `keepalived.conf` lives at `/etc/keepalived/keepalived.conf` by default.

**Note**

- `priority` decides which master is primary and which are standby: the node with the highest priority value wins the VRRP election and holds the VIP.
- `virtual_router_id` is the VRRP router ID for this VIP; the VIP effectively provides a virtual router, and the ID must be unique within the subnet.
- `virtual_ipaddress` is the VIP itself; it must be a free, not-yet-assigned address within the subnet.

```ini
# keepalived.conf sample
global_defs {
    router_id lb-backup
}

vrrp_instance VI-kube-master {
    state BACKUP
    priority 110
    dont_track_primary
    interface eth0
    virtual_router_id 51
    advert_int 3
    virtual_ipaddress {
        10.86.13.36
    }
}
```

Once everything is configured, start keepalived and haproxy on the primary master first:

```bash
systemctl enable keepalived
systemctl start keepalived
systemctl enable haproxy
systemctl start haproxy
```

Then run `ip a s` to check whether the VIP has been assigned. If the VIP shows up on the eth0 interface, keepalived started successfully:

```bash
[root@kube32 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:a9:d5:be brd ff:ff:ff:ff:ff:ff
    inet 10.86.13.32/23 brd 10.86.13.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.86.13.36/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fea9:d5be/64 scope link
       valid_lft forever preferred_lft forever
```
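Beyond checking that the address is up, it is worth confirming that requests actually reach an API server through the VIP. Below is a minimal reachability sketch (not part of the original guide), reusing the VIP `10.86.13.36` and frontend port `8443` from the samples above; any HTTP answer, even `401 Unauthorized`, shows that the keepalived + haproxy path is forwarding to port 6443:

```bash
#!/bin/bash
# Smoke test: probe the API server health endpoint through the VIP.
# -k skips certificate verification, acceptable for a pure reachability check.
VIP=10.86.13.36   # virtual_ipaddress from keepalived.conf
PORT=8443         # bind port from haproxy.cfg

if curl -k --max-time 5 "https://${VIP}:${PORT}/healthz"; then
    echo "VIP -> haproxy -> kube-apiserver path is up"
else
    echo "no answer through the VIP; check keepalived and haproxy" >&2
fi
```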
To be extra safe, you can also inspect keepalived's state with `systemctl status keepalived -l`:

```bash
[root@kube32 ~]# systemctl status keepalived -l
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-02-01 10:24:51 CST; 1 months 16 days ago
 Main PID: 13448 (keepalived)
   Memory: 6.0M
   CGroup: /system.slice/keepalived.service
           ├─13448 /usr/sbin/keepalived -D
           ├─13449 /usr/sbin/keepalived -D
           └─13450 /usr/sbin/keepalived -D

Mar 20 04:51:15 kube32 Keepalived_vrrp[13450]: VRRP_Instance(VI-kube-master) Dropping received VRRP packet...
Mar 20 04:51:18 kube32 Keepalived_vrrp[13450]: (VI-kube-master): ip address associated with VRID 51 not present in MASTER advert : 10.86.13.36
Mar 20 04:51:18 kube32 Keepalived_vrrp[13450]: bogus VRRP packet received on eth0 !!!
```

Then check haproxy's state with `systemctl status haproxy -l`:

```bash
[root@kube32 ~]# systemctl status haproxy -l
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-02-01 10:33:22 CST; 1 months 16 days ago
 Main PID: 15116 (haproxy-systemd)
   Memory: 3.2M
   CGroup: /system.slice/haproxy.service
           ├─15116 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─15117 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─15118 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
```

At this point `kubectl version` should return both client and server information:

```bash
[root@kube32 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-03T22:31:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-03T22:18:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
```

If it does, keepalived and haproxy are both working, and you can now start keepalived and haproxy on the remaining master nodes one by one.

If you then run `ip a s` on one of the other masters (not the primary), you will not see the VIP there. This is normal: the VIP only ever lives on the primary master, and it fails over to another master only when the primary goes down.

```bash
[root@kube31 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:a9:07:23 brd ff:ff:ff:ff:ff:ff
    inet 10.86.13.31/23 brd 10.86.13.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fea9:723/64 scope link
       valid_lft forever preferred_lft forever
```

In my own runs, using the playbook's script to bring up several master nodes at once left the primary master unable to acquire the VIP at all, with rather strange errors. After some digging I found that acquiring the VIP takes time, and starting multiple masters simultaneously causes a conflict; I cannot say whether that counts as a keepalived bug. The safest approach is to start one primary master first, wait until the VIP has settled, and only then start the other masters.

Kubernetes achieves multi-master high availability here with keepalived + haproxy, but other approaches work just as well, such as an external load balancer. The masters in Kubernetes actually have no primary/standby distinction among themselves: every one of them can serve consistently. Keepalived + haproxy simply provides an easy way to load-balance across them.
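Whichever load balancer you choose, clients should address the VIP frontend rather than any single master. Below is a minimal sketch of pointing a kubeconfig at the VIP, reusing the `10.86.13.36:8443` frontend from above; the CA path and the cluster name are placeholder assumptions from a typical kubeasz layout, not values taken from this guide:

```bash
# Point kubectl at the load-balanced endpoint instead of a single master.
# The cluster name "kubernetes" and /etc/kubernetes/ssl/ca.pem are assumptions.
kubectl config set-cluster kubernetes \
  --server=https://10.86.13.36:8443 \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true
```

Note that TLS verification against the VIP only succeeds if the API server certificates carry the VIP in their subject alternative names, which is something to account for when the certificates are generated.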
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看