In the cloud environment, each HAProxy instance binds its frontends to the virtual IP address, and the haproxy configuration file lists the real instance IP addresses as the backends used for load balancing. With the load balancer in place, traffic can fail over quickly to another node when a single point of failure occurs.

1) Install HAProxy

Install HAProxy on each of the three nodes:

```
yum -y install haproxy
systemctl enable haproxy.service
```

2) Configure HAProxy logging through rsyslog (on all three nodes)

```
cd /etc/rsyslog.d/
vim haproxy.conf
```

Add:

```
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%rawmsg%\n"
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
local0.* ~
```

```
systemctl restart rsyslog.service
systemctl enable rsyslog.service
systemctl status rsyslog.service
```

3) Configure haproxy.cfg on all three nodes

```
cd /etc/haproxy/
mv haproxy.cfg haproxy.cfg.orig
vim haproxy.cfg
```

Add the following (**remember to change the IP addresses for your environment**):

```
global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice
  maxconn 16000
  chroot /usr/share/haproxy
  user haproxy
  group haproxy
  daemon

defaults
  log global
  mode http
  option tcplog
  option dontlognull
  retries 3
  option redispatch
  maxconn 10000
  contimeout 5000
  clitimeout 50000
  srvtimeout 50000

frontend stats-front
  bind *:8088
  mode http
  default_backend stats-back

backend stats-back
  mode http
  balance source
  stats uri /haproxy/stats
  stats auth admin:yjscloud

listen RabbitMQ-Server-Cluster
  bind 192.168.0.168:56720
  mode tcp
  balance roundrobin
  option tcpka
  server controller1 controller1:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen RabbitMQ-Web
  bind 192.168.0.168:15673
  mode tcp
  balance roundrobin
  option tcpka
  server controller1 controller1:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen Galera-Cluster
  bind 192.168.0.168:3306
  balance leastconn
  mode tcp
  option tcplog
  option httpchk
  server controller1 controller1:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
  server controller2 controller2:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
  server controller3 controller3:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3

listen keystone_admin_cluster
  bind 192.168.0.168:35357
  balance source
  option httpchk
  option httplog
  option httpclose
  server controller1 controller1:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller2 controller2:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller3 controller3:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen keystone_public_internal_cluster
  bind 192.168.0.168:5000
  balance source
  option httpchk
  option httplog
  option httpclose
  server controller1 controller1:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller2 controller2:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller3 controller3:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen Memcache_Servers
  bind 192.168.0.168:22122
  balance roundrobin
  mode tcp
  option tcpka
  server controller1 controller1:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller2 controller2:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
  server controller3 controller3:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen dashboard_cluster
  bind 192.168.0.168:80
  balance source
  option httpchk
  option httplog
  option httpclose
  server controller1 controller1:8080 check inter 2000 fall 3
  server controller2 controller2:8080 check inter 2000 fall 3
  server controller3 controller3:8080 check inter 2000 fall 3

listen glance_api_cluster
  bind 192.168.0.168:9292
  balance source
  option httpchk
  option httplog
  option httpclose
  server controller1 controller1:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen glance_registry_cluster
  bind 192.168.0.168:9090
  balance roundrobin
  mode tcp
  option tcpka
  server controller1 controller1:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova_compute_api_cluster
  bind 192.168.0.168:8774
  balance source
  option httpchk
  option httplog
  option httpclose
  server controller1 controller1:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova-metadata-api_cluster
  bind 192.168.0.168:8775
  balance source
  option httpchk
  option httplog
  option httpclose
  server controller1 controller1:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova_vncproxy_cluster
  bind 192.168.0.168:6080
  balance source
  option tcpka
  option tcplog
  server controller1 controller1:6080 check inter 2000 rise 2 fall 5
  server controller2 controller2:6080 check inter 2000 rise 2 fall 5
  server controller3 controller3:6080 check inter 2000 rise 2 fall 5

listen neutron_api_cluster
  bind 192.168.0.168:9696
  balance source
  option httpchk
  option httplog
  option httpclose
  server controller1 controller1:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen cinder_api_cluster
  bind 192.168.0.168:8776
  balance source
  option httpchk
  option httplog
  option httpclose
  server controller1 controller1:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller2 controller2:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
  server controller3 controller3:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
```

Copy the configuration file to the other two nodes and restart the haproxy service:

```
scp -p /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg
scp -p /etc/haproxy/haproxy.cfg controller3:/etc/haproxy/haproxy.cfg
systemctl restart haproxy.service
systemctl status haproxy.service
```

Parameter notes:

- inter <delay>: interval between health checks, in milliseconds (default 2000); fastinter and downinter can further tune the delay according to the server's state.
- rise <count>: number of consecutive successful checks a server marked down must pass before it is considered up again.
- fall <count>: number of consecutive failed checks before a server is considered unavailable.

4) Configure HAProxy to monitor the Galera database cluster

If the database cannot start properly, restart MySQL on all three nodes. Start mariadb on the other two nodes:

```
systemctl start mariadb.service
systemctl status mariadb.service
```

Finally, once the other two nodes have started successfully, go back to the first node and run:

```
pkill -9 mysql
pkill -9 mysql
systemctl start mariadb.service
systemctl status mariadb.service
```
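Before moving on to the clustercheck setup, it can help to confirm that the Galera cluster really re-formed after the restarts. A minimal sketch, assuming a MySQL root (or equivalently privileged) account is available on any controller:

```
# Run on any controller node; prompts for the MySQL root password.
# A healthy three-node cluster reports wsrep_cluster_size = 3 and
# wsrep_local_state_comment = Synced.
mysql -uroot -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN
    ('wsrep_cluster_size', 'wsrep_local_state_comment');"
```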
On controller1, enter MySQL and create the clustercheck user:

```
grant process on *.* to 'clustercheckuser'@'localhost' identified by 'clustercheckpassword!';
flush privileges;
```

On each of the three nodes, create the clustercheck file containing the clustercheckuser username and password:

```
vim /etc/sysconfig/clustercheck
```

Add:

```
MYSQL_USERNAME=clustercheckuser
MYSQL_PASSWORD=clustercheckpassword!
MYSQL_HOST=localhost
MYSQL_PORT=3306
```

Confirm that the `/usr/bin/clustercheck` script exists. If it does not, download one and place it under `/usr/bin`, remembering to make it executable with `chmod +x /usr/bin/clustercheck`. This script is what lets HAProxy monitor the state of the Galera cluster.

```
scp -p /usr/bin/clustercheck controller2:/usr/bin/clustercheck
scp -p /usr/bin/clustercheck controller3:/usr/bin/clustercheck
```

On controller1, run the check to verify it works:

```
clustercheck
```

![8-1-18](http://pded8ke3e.bkt.clouddn.com/8-1-18.jpg)

Use xinetd to expose the Galera check to HAProxy (install xinetd on all three nodes):

```
yum -y install xinetd
vim /etc/xinetd.d/mysqlchk
```

Add the following:

```
# default: on
#
# description: mysqlchk
service mysqlchk
{
    # this is a config for xinetd, place it in /etc/xinetd.d/
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/bin/clustercheck
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    # recommended to put the IPs that need
    # to connect exclusively (security purposes)
    per_source      = UNLIMITED
}
```

Copy the configuration file to the other nodes:

```
scp -p /etc/xinetd.d/mysqlchk controller2:/etc/xinetd.d/mysqlchk
scp -p /etc/xinetd.d/mysqlchk controller3:/etc/xinetd.d/mysqlchk
```

```
vim /etc/services
```

Append on the last line: `mysqlchk        9200/tcp    # mysqlchk`

```
scp -p /etc/services controller2:/etc/services
scp -p /etc/services controller3:/etc/services
```

Restart the xinetd service:

```
systemctl enable xinetd.service
systemctl restart xinetd.service
systemctl status xinetd.service
```

5) Adjust kernel network parameters on all three nodes

```
echo 'net.ipv4.ip_nonlocal_bind = 1'>>/etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1'>>/etc/sysctl.conf
sysctl -p
```

The first parameter lets HAProxy bind to an address that does not belong to a local network interface (the virtual IP). The second parameter controls whether the kernel forwards packets; it is off by default, so we enable it here. Note: without these two parameters, the haproxy service on the second and third controller nodes will not start.

6) Start the haproxy service on all three nodes

```
systemctl restart haproxy.service
systemctl status haproxy.service
```

7) Access the HAProxy web frontend

```
http://192.168.0.168:8088/haproxy/stats
admin/yjscloud
```

The Galera cluster service is now being monitored successfully.

![8-1-19](http://pded8ke3e.bkt.clouddn.com/8-1-19.jpg)
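If you prefer to verify from the shell rather than the browser, a small sketch along the following lines (assuming the controller1/2/3 hostnames and the admin:yjscloud stats credentials used throughout this guide) exercises both the stats endpoint and the clustercheck responders:

```
# Fetch the stats page in CSV form through the VIP; the credentials come from
# the "stats auth admin:yjscloud" line in haproxy.cfg above.
curl -s -u admin:yjscloud "http://192.168.0.168:8088/haproxy/stats;csv" | head

# Hit the xinetd-exposed clustercheck responder (port 9200) on each controller;
# a synced Galera node should answer with an HTTP 200 status line.
for host in controller1 controller2 controller3; do
    echo "== ${host} =="
    curl -s -i "http://${host}:9200/" | head -n 1
done
```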