RabbitMQ + HAProxy + Keepalived High-Availability Load Balancing with Mirrored Cluster Mode, Part 3: Integrating the High-Performance HA Component Keepalived
| IP | Hostname | Role | Port | Management URL | Username | Password |
| --- | --- | --- | --- | --- | --- | --- |
| 192.168.0.115 | mq-01 | rabbitmq master | 5672 | http://192.168.0.115:15672 | guest | guest |
| 192.168.0.117 | mq-02 | rabbitmq slave | 5672 | http://192.168.0.117:15672 | guest | guest |
| 192.168.0.118 | mq-03 | rabbitmq slave | 5672 | http://192.168.0.118:15672 | guest | guest |
| 192.168.0.119 | hk-01 | haproxy+keepalived | 8100 | http://192.168.0.119:8100/rabbitmq-stats | admin | 123456 |
| 192.168.0.120 | hk-02 | haproxy+keepalived | 8100 | http://192.168.0.120:8100/rabbitmq-stats | admin | 123456 |
| Command | Description |
| --- | --- |
| sudo service keepalived start | Start the keepalived service |
| sudo service keepalived stop | Stop the keepalived service |
| sudo service keepalived restart | Restart the keepalived service |
| sudo service keepalived status | Check the running status of the keepalived service |
| sudo chkconfig keepalived on | Enable the keepalived service to start on boot |
| Command | Description |
| --- | --- |
| sudo systemctl start keepalived.service | Start the keepalived service |
| sudo systemctl stop keepalived.service | Stop the keepalived service |
| sudo systemctl restart keepalived.service | Restart the keepalived service |
| sudo systemctl status keepalived.service | Check the running status of the keepalived service |
| sudo systemctl enable keepalived.service | Enable the keepalived service to start on boot |
| sudo systemctl disable keepalived.service | Disable start-on-boot for the keepalived service |
Continued from the previous post: RabbitMQ + HAProxy + Keepalived High-Availability Load Balancing with Mirrored Cluster Mode, Part 2: Integrating the Load-Balancing Component HAProxy
Table of Contents
- 1. Introduction to Keepalived
- 2. Keepalived Installation in Practice
- 2.1. Install the required packages
- 2.2. Download the keepalived package
- 2.3. Copy the keepalived package to the other node
- 2.4. Extract, compile, and install keepalived
- 3. Installing keepalived as a Linux system service
- 3.1. Create the directory and copy the keepalived configuration file
- 3.2. Copy the keepalived script files
- 3.3. Enable keepalived to start on boot
- 4. Configuring and modifying the Keepalived configuration file
- 4.1. Create and edit the keepalived.conf file
- 4.2. Configuration for server 119
- 4.3. Copy and modify the keepalived.conf configuration file
- 4.4. Writing the health-check script
- 4.5. Script explanation
- 4.6. Make the check script executable
- 5. Starting the keepalived service
- 5.1. Check the haproxy running status
- 5.2. Start keepalived
- 5.3. Check the keepalived running status
- 6. Testing and verifying keepalived single-point failover
- 6.1. Normal-operation test
- 6.2. Master-node failure test
- 6.3. Master-node failure and restart test
1. Introduction to Keepalived
Keepalived is a high-performance high-availability (hot-standby) solution for servers. Its main purpose is to prevent single points of failure: paired with a reverse-proxy load balancer such as Nginx or HAProxy, it provides high availability for web services. Keepalived is built on VRRP and uses that protocol to implement high availability (HA). VRRP (Virtual Router Redundancy Protocol) is a protocol for router redundancy: it groups two or more router devices into a single virtual device and exposes one or more virtual router IPs to the outside.

2. Keepalived Installation in Practice
PS: download address:
http://www.keepalived.org/download.html
2.1. Install the required packages

```shell
yum install -y openssl openssl-devel
```

2.2. Download the keepalived package
```shell
wget https://www.keepalived.org/software/keepalived-2.0.20.tar.gz
```

2.3. Copy the keepalived package to the other node

To save time, copy the package over to server 120 as well:

```shell
scp keepalived-2.0.20.tar.gz root@192.168.0.120:/app/software
```

2.4. Extract, compile, and install keepalived
```shell
# Extract keepalived
tar -zxf keepalived-2.0.20.tar.gz -C /app/

# Compile and install keepalived
cd keepalived-2.0.20/ && ./configure --prefix=/app/keepalived
make && make install
```

3. Installing keepalived as a Linux system service

Because we did not install keepalived to its default path (default: /usr/local), a few adjustments are needed after installation to register it as a Linux system service.
3.1. Create the directory and copy the keepalived configuration file

```shell
# Create the directory
mkdir /etc/keepalived

# Copy the keepalived configuration file
cp /app/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
```

3.2. Copy the keepalived script files

```shell
cp /app/keepalived-2.0.20/keepalived/etc/init.d/keepalived /etc/init.d/
cp /app/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

# Link the binary from our custom install path into /usr/sbin
ln -s /app/keepalived/sbin/keepalived /usr/sbin/

# The system creates a default symlink, so delete it first
rm -f /sbin/keepalived

# Then link /sbin to the binary in our custom install path
ln -s /app/keepalived/sbin/keepalived /sbin/
```

3.3. Enable keepalived to start on boot
You can enable start-on-boot with systemctl enable keepalived.service. With that, installation is complete.
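Before moving on, it can be worth confirming that the symlinks created in section 3.2 resolve to the installed binary. A small sketch; `check_link` is a hypothetical helper, not part of keepalived:

```shell
# Hypothetical helper: print the resolved target of a symlink, or a marker
# string when the path is not a symlink (e.g. keepalived not installed yet).
check_link() {
  if [ -L "$1" ]; then
    readlink -f "$1"
  else
    echo "not-a-symlink"
  fi
}

# Both should print /app/keepalived/sbin/keepalived on a correctly set-up node.
check_link /usr/sbin/keepalived
check_link /sbin/keepalived
```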
```shell
systemctl enable keepalived.service
```

4. Configuring and modifying the Keepalived configuration file
PS: edit the keepalived.conf configuration file.

4.1. Create and edit the keepalived.conf file

```shell
vim /etc/keepalived/keepalived.conf
```

4.2. Configuration for server 119
```
! Configuration File for keepalived

global_defs {
    router_id hk-01              ## string identifying this node, usually the hostname
}

# Check the haproxy process state every 2 seconds
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"  ## path of the check script
    interval 2                   ## check interval in seconds
    weight -20                   ## subtract 20 from the priority if the check fails
}

vrrp_instance VI_1 {
    state MASTER                 ## MASTER on the master node, BACKUP on the backup node
    interface ens33              ## network interface the virtual IP binds to; same interface as this host's IP
    virtual_router_id 119        ## virtual router ID (must be identical on master and backup)
    mcast_src_ip 192.168.0.119   ## this host's IP address
    priority 100                 ## priority (0-254)
    nopreempt
    advert_int 1                 ## multicast advertisement interval; must match on both nodes (default 1s)
    authentication {             ## authentication; must match on both nodes
        auth_type PASS
        auth_pass ncl@1234
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.0.112            ## virtual IP; multiple addresses may be listed
    }
}
```

4.3. Copy and modify the keepalived.conf configuration file
Copy this configuration file to server 120:

```shell
scp keepalived.conf root@192.168.0.120:/etc/keepalived/
```

Change 1: set router_id to the hostname of server 120.

Change 2: set mcast_src_ip to the IP address of server 120.

Change 3: set priority to 90 (master node 100, backup node 90).

Change 4: change state from MASTER to BACKUP.
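After editing, keepalived can validate the file itself. A hedged sketch: keepalived 2.x (including the 2.0.20 build used here) supports a --config-test flag, and the guard keeps the snippet harmless on machines where keepalived is not on the PATH; `conf_test` is a hypothetical helper:

```shell
# Validate keepalived.conf syntax when the binary is available; otherwise
# report that the check was skipped (assumes keepalived 2.x, which has
# the --config-test option).
conf_test() {
  if command -v keepalived >/dev/null 2>&1; then
    keepalived --config-test -f "$1" && echo "config OK"
  else
    echo "skipped: keepalived not installed"
  fi
}

conf_test /etc/keepalived/keepalived.conf
```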
Configuration for server 120:
```
! Configuration File for keepalived

global_defs {
    router_id hk-02              ## string identifying this node, usually the hostname
}

# Check the haproxy process state every 2 seconds
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"  ## path of the check script
    interval 2                   ## check interval in seconds
    weight -20                   ## subtract 20 from the priority if the check fails
}

vrrp_instance VI_1 {
    state BACKUP                 ## MASTER on the master node, BACKUP on the backup node
    interface ens33              ## network interface the virtual IP binds to; same interface as this host's IP
    virtual_router_id 119        ## virtual router ID (must be identical on master and backup)
    mcast_src_ip 192.168.0.120   ## this host's IP address
    priority 90                  ## priority (0-254)
    nopreempt
    advert_int 1                 ## multicast advertisement interval; must match on both nodes (default 1s)
    authentication {             ## authentication; must match on both nodes
        auth_type PASS
        auth_pass ncl@1234
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.0.112            ## virtual IP; multiple addresses may be listed
    }
}
```

4.4. Writing the health-check script

PS: create the file at /etc/keepalived/haproxy_check.sh (the content is identical on nodes 119 and 120).
```shell
vim /etc/keepalived/haproxy_check.sh
```

```shell
#!/bin/bash
COUNT=`ps -C haproxy --no-header | wc -l`
if [ $COUNT -eq 0 ]; then
    /app/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg
    sleep 2
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]; then
        killall keepalived
    fi
fi
```

4.5. Script explanation

The script counts running haproxy processes. If none is found, it tries to start haproxy from /app/haproxy/sbin/haproxy, waits 2 seconds, and counts again; if haproxy still is not running, it kills keepalived so the virtual IP fails over to the backup node. keepalived runs this script every 2 seconds via the chk_haproxy vrrp_script block.
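To see what the script's first condition evaluates to without restarting anything, the process count can be inspected on its own. A harmless dry-run sketch (`count_haproxy` is a hypothetical helper; ps -C matches processes by exact command name):

```shell
# Dry run of the check script's condition: count haproxy processes the same
# way the script does, but only report what the script would do.
count_haproxy() {
  ps -C haproxy --no-headers | wc -l
}

if [ "$(count_haproxy)" -eq 0 ]; then
  echo "haproxy is down: the real script would try to restart it"
else
  echo "haproxy is up: the real script would do nothing"
fi
```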
4.6. Make the check script executable

PS: grant execute permission to the haproxy_check.sh script.

```shell
chmod +x /etc/keepalived/haproxy_check.sh
```

5. Starting the keepalived service
PS: before starting keepalived, check that haproxy is running.

5.1. Check the haproxy running status

Check from the command line:

```shell
ps -ef | grep haproxy
```

Check from the browser:

The output above shows that haproxy is already running normally.
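The browser check can also be scripted. A hedged sketch using curl against the stats page set up in part 2 of this series (the URL and the admin/123456 credentials are the ones from the table at the top; `probe_stats` is a hypothetical helper, and the endpoint may not be reachable from every machine):

```shell
# Probe the haproxy stats page and report reachability instead of failing.
probe_stats() {
  if curl -fsS --connect-timeout 3 -u "admin:123456" "$1" -o /dev/null 2>/dev/null; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}

probe_stats http://192.168.0.119:8100/rabbitmq-stats
```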
5.2. Start keepalived

PS: once haproxy is running on both nodes, we can start the keepalived service.

```shell
# Start keepalived on both machines
service keepalived start
```

5.3. Check the keepalived running status

```shell
ps -ef | grep keepalived
```
6. Testing and verifying keepalived single-point failover

6.1. Normal-operation test

Predicted result:

With keepalived running normally, the virtual IP sits on the master node (server 119). Check the virtual IP on server 119:

```shell
ip a
```

Server 120 does not hold the virtual IP (unless the keepalived service on master node 119 is stopped).

Checking the virtual IP on server 119:
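Rather than scanning the full ip a output by eye, the VIP's presence can be tested directly. A sketch (`has_vip` is a hypothetical helper; 192.168.0.112 is the virtual IP configured above):

```shell
# Report whether a given IPv4 address is bound to any local interface.
has_vip() {
  if ip -4 addr show 2>/dev/null | grep -q "inet $1/"; then
    echo "VIP present"
  else
    echo "VIP absent"
  fi
}

has_vip 192.168.0.112
```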
6.2. Master-node failure test

Predicted results:

1. With keepalived running normally, the virtual IP sits on the master node (server 119).
2. When the master node fails, the virtual IP floats over to the BACKUP node.

Simulate the virtual IP floating to server 120. Stop the keepalived service on master node 119:

```shell
service keepalived stop
```

Check again whether server 119 still holds the virtual IP:

```shell
ip a
```

Then check server 120 to see whether the virtual IP floated over successfully.

The simulated test results above match our prediction.
6.3. Master-node failure and restart test

1. With keepalived running normally, the virtual IP sits on the master node (server 119).
2. When the master node fails, the virtual IP floats over to the BACKUP node.
3. Once the master node is repaired, the virtual IP returns to it, because the master's priority is set higher than the backup's.

A second round of simulation verifies that the priorities are set correctly:

Server 119 priority: 100

Server 120 priority: 90
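The failback in this test follows from VRRP effective-priority arithmetic, which can be worked through with the numbers from the two configs above (priority 100 and 90, chk_haproxy weight -20):

```shell
# Worked example of keepalived's effective-priority arithmetic using this
# article's values: while a tracked script fails, its weight is added to
# the node's priority.
MASTER_PRIO=100
BACKUP_PRIO=90
WEIGHT=-20

DEGRADED=$((MASTER_PRIO + WEIGHT))   # master's effective priority while its haproxy check fails
echo "degraded master priority: $DEGRADED"

if [ "$DEGRADED" -lt "$BACKUP_PRIO" ]; then
  echo "backup (priority 90) takes over the VIP"
fi
if [ "$MASTER_PRIO" -gt "$BACKUP_PRIO" ]; then
  echo "a healthy master (priority 100) wins the election again"
fi
```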
Predicted result:

When keepalived on master node 119 is started again, the virtual IP returns to the master node.

Restart the keepalived service on master node 119 and run the test:
```shell
[root@hk-01 keepalived]# service keepalived start
Starting keepalived (via systemctl):                       [  OK  ]
[root@hk-01 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2a:fc:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.119/24 brd 192.168.0.255 scope global noprefixroute dynamic ens33
       valid_lft 70561sec preferred_lft 70561sec
    inet 192.168.0.112/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ac32:9647:2dd9:bed5/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@hk-01 keepalived]#
```

Check the result on server 120:
```shell
[root@hk-02 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f9:d8:e3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.120/24 brd 192.168.0.255 scope global noprefixroute dynamic ens33
       valid_lft 70574sec preferred_lft 70574sec
    inet6 fe80::ac32:9647:2dd9:bed5/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::c92d:38e3:9ea0:a936/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@hk-02 keepalived]#
```

Summary

The virtual IP 192.168.0.112 is back on ens33 of server 119 and no longer present on server 120: after the repaired master rejoined, the higher-priority node reclaimed the virtual IP, which completes the failover and failback verification.