Each HAProxy instance binds its frontend to the virtual IP address, and its configuration file lists the real instance IP addresses as the load-balanced backends. With the load balancer in place, the cluster can quickly fail over to another node when a single node fails.
1) Install HAProxy
Install HAProxy on all three nodes:
```
yum -y install haproxy
systemctl enable haproxy.service
```
2) Configure HAProxy logging through rsyslog (on all three nodes):
```
cd /etc/rsyslog.d/
vim haproxy.conf
```
Add:
```
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%rawmsg%\n"
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
local0.* ~
```
```
systemctl restart rsyslog.service
systemctl enable rsyslog.service
systemctl status rsyslog.service
```
3) Configure haproxy.cfg on all three nodes
```
cd /etc/haproxy/
mv haproxy.cfg haproxy.cfg.orig
vim haproxy.cfg
```
Add the following content (**remember to change the IPs**):
```
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 16000
    chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 10000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend stats-front
    bind *:8088
    mode http
    default_backend stats-back

backend stats-back
    mode http
    balance source
    stats uri /haproxy/stats
    stats auth admin:yjscloud

listen RabbitMQ-Server-Cluster
    bind 192.168.0.168:56720
    mode tcp
    balance roundrobin
    option tcpka
    server controller1 controller1:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller2 controller2:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller3 controller3:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen RabbitMQ-Web
    bind 192.168.0.168:15673
    mode tcp
    balance roundrobin
    option tcpka
    server controller1 controller1:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller2 controller2:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller3 controller3:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen Galera-Cluster
    bind 192.168.0.168:3306
    balance leastconn
    mode tcp
    option tcplog
    option httpchk
    server controller1 controller1:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
    server controller2 controller2:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3
    server controller3 controller3:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3

listen keystone_admin_cluster
    bind 192.168.0.168:35357
    balance source
    option httpchk
    option httplog
    option httpclose
    server controller1 controller1:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
    server controller2 controller2:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
    server controller3 controller3:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen keystone_public_internal_cluster
    bind 192.168.0.168:5000
    balance source
    option httpchk
    option httplog
    option httpclose
    server controller1 controller1:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
    server controller2 controller2:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
    server controller3 controller3:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen Memcache_Servers
    bind 192.168.0.168:22122
    balance roundrobin
    mode tcp
    option tcpka
    server controller1 controller1:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
    server controller2 controller2:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
    server controller3 controller3:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen dashboard_cluster
    bind 192.168.0.168:80
    balance source
    option httpchk
    option httplog
    option httpclose
    server controller1 controller1:8080 check inter 2000 fall 3
    server controller2 controller2:8080 check inter 2000 fall 3
    server controller3 controller3:8080 check inter 2000 fall 3

listen glance_api_cluster
    bind 192.168.0.168:9292
    balance source
    option httpchk
    option httplog
    option httpclose
    server controller1 controller1:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller2 controller2:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller3 controller3:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen glance_registry_cluster
    bind 192.168.0.168:9090
    balance roundrobin
    mode tcp
    option tcpka
    server controller1 controller1:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller2 controller2:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller3 controller3:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova_compute_api_cluster
    bind 192.168.0.168:8774
    balance source
    option httpchk
    option httplog
    option httpclose
    server controller1 controller1:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller2 controller2:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller3 controller3:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova-metadata-api_cluster
    bind 192.168.0.168:8775
    balance source
    option httpchk
    option httplog
    option httpclose
    server controller1 controller1:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller2 controller2:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller3 controller3:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova_vncproxy_cluster
    bind 192.168.0.168:6080
    balance source
    option tcpka
    option tcplog
    server controller1 controller1:6080 check inter 2000 rise 2 fall 5
    server controller2 controller2:6080 check inter 2000 rise 2 fall 5
    server controller3 controller3:6080 check inter 2000 rise 2 fall 5

listen neutron_api_cluster
    bind 192.168.0.168:9696
    balance source
    option httpchk
    option httplog
    option httpclose
    server controller1 controller1:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller2 controller2:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller3 controller3:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen cinder_api_cluster
    bind 192.168.0.168:8776
    balance source
    option httpchk
    option httplog
    option httpclose
    server controller1 controller1:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller2 controller2:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
    server controller3 controller3:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3
```
Copy the configuration file to the other two nodes and restart the haproxy service:
```
scp -p /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg
scp -p /etc/haproxy/haproxy.cfg controller3:/etc/haproxy/haproxy.cfg
systemctl restart haproxy.service
systemctl status haproxy.service
```
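Before restarting it can help to validate the syntax of the copied file with HAProxy's built-in check mode (`haproxy -c -f`). A small sketch; the fallback branch is only there so the helper degrades gracefully on a machine without haproxy installed:

```shell
# Validate an HAProxy configuration file before restarting the service.
# Uses haproxy's check mode (-c), which parses the file without starting.
validate_cfg() {
    cfg=$1
    if command -v haproxy >/dev/null 2>&1; then
        haproxy -c -f "$cfg"
    else
        echo "haproxy not installed; skipped syntax check of $cfg"
        return 0
    fi
}

# usage: validate_cfg /etc/haproxy/haproxy.cfg
```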
Parameter reference:
- `inter <delay>`: interval between health checks, in milliseconds (default 2000); `fastinter` and `downinter` can be used to shorten this delay depending on the server's current state
- `rise <count>`: number of consecutive successful checks required before a server that went down is considered up again.
- `fall <count>`: number of consecutive failed checks required before a server is considered down.
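To make the interplay of `rise` and `fall` concrete, here is an illustrative toy state machine (not HAProxy source code) showing how the consecutive-check counters move a server between UP and DOWN:

```shell
# Illustrative only: arguments are check results in order,
# 1 = check passed, 0 = check failed. Mirrors "rise 3 fall 3".
RISE=3
FALL=3

simulate_checks() {
    state=UP
    streak=0
    for ok in "$@"; do
        if [ "$state" = "UP" ]; then
            # count consecutive failures; any success resets the streak
            if [ "$ok" -eq 0 ]; then streak=$((streak + 1)); else streak=0; fi
            if [ "$streak" -ge "$FALL" ]; then state=DOWN; streak=0; fi
        else
            # count consecutive successes; any failure resets the streak
            if [ "$ok" -eq 1 ]; then streak=$((streak + 1)); else streak=0; fi
            if [ "$streak" -ge "$RISE" ]; then state=UP; streak=0; fi
        fi
    done
    echo "$state"
}

# e.g. three consecutive failures (fall 3) mark the server DOWN:
#   simulate_checks 0 0 0        -> DOWN
# and it takes rise 3 successes to bring it back:
#   simulate_checks 0 0 0 1 1 1  -> UP
```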
4) Configure HAProxy to monitor the Galera database cluster
If the database cannot start normally, restart MySQL on all three nodes.
Start mariadb on the other two nodes first:
```
systemctl start mariadb.service
systemctl status mariadb.service
```
Finally, once the other two nodes are up, go back to the first node and run:
```
pkill -9 mysql
pkill -9 mysql
systemctl start mariadb.service
systemctl status mariadb.service
```
On controller1, enter mysql and create the clustercheck user:
```
grant process on *.* to 'clustercheckuser'@'localhost' identified by 'clustercheckpassword!';
flush privileges;
```
On each of the three nodes, create the clustercheck config file holding the clustercheckuser credentials:
```
vim /etc/sysconfig/clustercheck
```
Add:
```
MYSQL_USERNAME=clustercheckuser
MYSQL_PASSWORD=clustercheckpassword!
MYSQL_HOST=localhost
MYSQL_PORT=3306
```
Confirm that the `/usr/bin/clustercheck` script exists; if it does not, download one and place it under `/usr/bin`, remembering to make it executable with `chmod +x /usr/bin/clustercheck`.
This script is what allows HAProxy to monitor the state of the Galera cluster. Copy it to the other nodes:
```
scp -p /usr/bin/clustercheck controller2:/usr/bin/clustercheck
scp -p /usr/bin/clustercheck controller3:/usr/bin/clustercheck
```
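For reference, the core logic of a clustercheck-style script can be sketched as below. This is a simplified illustration, not the real clustercheck script (which handles more cases); `MYSQL_CMD` is a hypothetical override hook added here for testing:

```shell
# Simplified sketch of a clustercheck-style Galera health probe.
# It queries wsrep_local_state (4 == Synced) and prints the HTTP
# response that HAProxy's "check port 9200" probe interprets.
MYSQL_CMD="${MYSQL_CMD:-mysql}"   # hypothetical override hook for testing

galera_is_synced() {
    state=$("$MYSQL_CMD" -nNE -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>/dev/null | tail -n 1)
    [ "$state" = "4" ]
}

clustercheck_response() {
    if galera_is_synced; then
        printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nGalera node is synced.\r\n'
    else
        printf 'HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\n\r\nGalera node is not synced.\r\n'
    fi
}
```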
On controller1, run the check to confirm it works:
```
clustercheck
```

Combine with xinetd to serve the Galera check (install xinetd on all three nodes):
```
yum -y install xinetd
vim /etc/xinetd.d/mysqlchk
```
Add the following content:
```
# default: on
# # description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
port = 9200
wait = no
user = nobody
server = /usr/bin/clustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
```
Copy the configuration file to the other nodes:
```
scp -p /etc/xinetd.d/mysqlchk controller2:/etc/xinetd.d/mysqlchk
scp -p /etc/xinetd.d/mysqlchk controller3:/etc/xinetd.d/mysqlchk
```
```
vim /etc/services
```
Append as the last line: `mysqlchk        9200/tcp    # mysqlchk`
```
scp -p /etc/services controller2:/etc/services
scp -p /etc/services controller3:/etc/services
```
Restart the xinetd service:
```
systemctl enable xinetd.service
systemctl restart xinetd.service
systemctl status xinetd.service
```
5) Adjust kernel network parameters on all three nodes
```
echo 'net.ipv4.ip_nonlocal_bind = 1'>>/etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1'>>/etc/sysctl.conf
sysctl -p
```
The first parameter allows HAProxy to bind to an address that does not belong to a local interface (the virtual IP may currently live on another node).
The second parameter enables kernel packet forwarding, which is disabled by default; here we turn it on.
Note: without these two settings, the haproxy service will fail to start on the second and third controller nodes.
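A small helper sketch to confirm the two parameters took effect (it only reports, never modifies anything):

```shell
# Report whether a kernel parameter has the expected value.
check_sysctl() {
    key=$1
    want=$2
    have=$(sysctl -n "$key" 2>/dev/null) || have=unknown
    if [ "$have" = "$want" ]; then
        echo "$key = $have (OK)"
    else
        echo "$key = $have (want $want)"
    fi
}

# usage:
#   check_sysctl net.ipv4.ip_nonlocal_bind 1
#   check_sysctl net.ipv4.ip_forward 1
```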
6) Start the haproxy service on all three nodes
```
systemctl restart haproxy.service
systemctl status haproxy.service
```
7) Access the HAProxy web dashboard
```
http://192.168.0.168:8088/haproxy/stats admin/yjscloud
```
The Galera cluster service is now monitored successfully.
