
MySQL high availability: nginx+keepalived + lvs+keepalived + mysql+keepalived



Contents

Overall design analysis
  Advantages
  Drawbacks and limitations

MySQL + replication + keepalived: dual-master MySQL
  1. Configuration on the HTTP host
  2. Granting the web user on MySQL
  3. Database settings during the application install
  4. Backing up the database; master-master replication between Master 1 and Master 2
  5. Configuring keepalived for dual-master high availability
  6. Data state on both sides
  7. Testing the website
  8. Problems encountered during testing

MySQL + lvs + keepalived: highly available MySQL reads
  1. Configuring the slaves to replicate from the master
  2. Writing a script that checks slave status
  3. Installing and configuring lvs + keepalived
  4. Testing keepalived's health checks
  5. Configuration on the MySQL slaves
  6. Feasibility testing

Load balancing with nginx + keepalived
  Architecture analysis
  Host plan
  1. Configuring nginx for load balancing
  2. Configuring keepalived in a dual-master layout
  3. Verifying keepalived
  4. Testing overall cluster availability

Understanding through diagrams
  Keepalived workflow
  Data flow in the lvs + keepalived architecture

Reference

http://www.linuxidc.com/Linux/2012-09/70147.htm

Overall design analysis

1. The dual masters, Master 1 and Master 2, serve writes through a VIP provided by keepalived.
2. Slave 1 and Slave 2 replicate from the master, holding the same data, and serve reads.
3. lvs + keepalived makes the MySQL read service highly available.

Advantages

1. Read-write splitting relieves pressure on the masters (the assumed workload is far more reads than writes).
2. lvs + keepalived makes MySQL reads highly available.
3. keepalived lets both masters serve simultaneously, making writes highly available (heartbeat + DRBD is an alternative).
4. The session-persistence feature of lvs + keepalived offers a simple guard against split-brain.

Drawbacks and limitations

1. With both masters serving at once, the two sides must stay consistent, so Master 1 and Master 2 must replicate from each other in real time. Under heavy insert load each master both syncs with its peer and feeds the slaves, so master load stays high and replication lag becomes an issue.

2. Master-slave replication likewise burdens the masters, so monitoring should watch slave lag against the master. A simple script (shown later) checks slave status and removes lagging slaves automatically.

3. For availability, the lvs + keepalived layer should itself run as an active/standby pair, so the cost of this design becomes an issue.

4. If the slaves replicated via the VIP while lvs session persistence is enabled, each slave would stick to one machine; with a wrr-style scheduling algorithm, all back-end slaves could easily end up replicating from the same master, and the load on that master is easy to imagine. So take care that each slave is pointed at one fixed master.

5. When the masters fail over, master-slave replication runs into many problems.

Below, starting from a single baseline, we build up the MySQL high-availability setup step by step.

Architecture approach

1. Set up bidirectional replication between Master 1 and Master 2.
2. Configure the slave servers.
3. Set up keepalived high availability between Master 1 and Master 2.
4. Write a shell script that checks slave status.
5. Build an active/standby lvs + keepalived pair for highly available reads.

MySQL + replication + keepalived: dual-master MySQL

Host plan:

Host       IP           Role         VIP
Master 1   10.0.0.104   Primary DB   10.0.0.253
Master 2   10.0.0.102   Primary DB   10.0.0.253
Apache     10.0.0.120   Web server

Test application: Discuz. Download: http://download.comsenz.com/DiscuzX/2.5/Discuz_X2.5_TC_UTF8.zip

1. Configuration on the HTTP host

[root@APACHE www]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 APACHE
::1 localhost6.localdomain6 localhost6
10.0.0.253 mysql.master.vip     <- this is the VIP address
#10.0.0.252 mysql.slave.vip

[root@APACHE www]# ping mysql.master.vip
PING mysql.master.vip (10.0.0.253) 56(84) bytes of data.
64 bytes from mysql.master.vip (10.0.0.253): icmp_seq=1 ttl=64 time=0.322 ms

2. Grant the web user on MySQL (do this on both masters, since we do not replicate the grant tables)

mysql> grant all privileges on web.* to 'web'@'10.0.0.%' identified by '123';
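To confirm the grant took effect on each master, a quick check (session is illustrative, not from the original):

mysql> SHOW GRANTS FOR 'web'@'10.0.0.%';
mysql> SELECT user, host FROM mysql.user WHERE user = 'web';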

3. Database settings during the application install

4. Back up the database and set up master-master replication between Master 1 and Master 2

Notes before replicating:

① The binlog must be enabled on both ends.
② The server-id values must not be the same.
③ Use the same replication user on both ends where possible.

A minimal my.cnf sketch for ① and ② follows.
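A sketch of the my.cnf settings these notes call for (values are illustrative; Master 2 would use server-id = 2):

# /etc/my.cnf on Master 1
[mysqld]
log-bin   = mysql-bin    # ① enable the binary log
server-id = 1            # ② must differ between the two masters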

1) Backup SQL statements on Master 1

mysql> flush tables with read lock;

Query OK, 0 rows affected (0.01 sec)

mysql> show master status;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      512 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

mysql> unlock tables;

Query OK, 0 rows affected (0.00 sec)

mysql> show master status;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      512 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Back up the database:

[root@rsync bak]# mysqldump -uroot -p bbs -B |gzip >./bbs_B.sql.gz

Create the replication user:

grant replication slave on *.* to 'slave'@'10.0.0.%' identified by '123';

2) On Master 2, import the SQL and set it up as a slave of Master 1:

[root@apache ~]# mysql -uroot -p
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.104', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='123', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=512;
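The transcript omits it at this point, but the replication threads still need to be started and verified (they are shown running a few steps later), so presumably:

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G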

With that, Master 2 is replicating from Master 1.

3) Set up Master 1 as a slave of Master 2

mysql> flush tables with read lock;

Query OK, 0 rows affected (0.01 sec)

mysql> show master status;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      513 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

mysql> show master status;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      513 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

mysql> unlock tables;

Query OK, 0 rows affected (0.00 sec)

grant replication slave on *.* to 'slave'@'10.0.0.%' identified by '123';

Then repeat the corresponding step on Master 1:

mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.102', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='123', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=513;

Query OK, 0 rows affected (0.03 sec)

Note that the binlog positions on the two ends may differ. To avoid replicating more than needed, you can also ignore the grant tables, or replicate only this one database. (Even though I specified *.* above, the grant tables turned out not to be replicated.)
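If you do want to scope replication down as the note suggests, the usual place is my.cnf; a sketch (assuming the application database is named bbs):

# my.cnf replication filters (illustrative; pick one approach, not both)
[mysqld]
binlog-do-db = bbs              # master side: only log changes to bbs
#replicate-ignore-db = mysql    # slave side: skip the grant tables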

Check on Master 1:

[root@rsync ~]# mysql -uroot -p -e "show slave status\G" | grep Yes
Enter password:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Check on Master 2:

[root@apache ~]# mysql -uroot -p -e "show slave status\G" | grep Yes
Enter password:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

5. Configure keepalived for dual-master high availability

1) Install keepalived:

[root@rsync keepalived]# ln -s /usr/src/kernels/2.6.18-194.el5-i686/ /usr/src/linux
tar zxf keepalived-1.1.17.tar.gz
cd keepalived-1.1.17
./configure
make
make install

2) Put the files in their standard locations:

[root@rsync keepalived]# \cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/
[root@rsync keepalived]# \cp /usr/local/sbin/keepalived /usr/sbin/
[root@rsync keepalived]# mkdir /etc/keepalived/
[root@rsync keepalived]# \cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@rsync keepalived]# \cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/

3) Start it and check:

[root@rsync keepalived]# ps -ef |grep keep

root 9524    1 0 18:11 ?     00:00:00 keepalived -D
root 9525 9524 0 18:11 ?     00:00:00 keepalived -D
root 9526 9524 0 18:11 ?     00:00:00 keepalived -D
root 9735 6476 0 18:20 pts/0 00:00:00 grep keep

Do the same on the other master.

4) Configuration files for both ends

Master keepalived configuration:

keepalived.conf:

global_defs {
    notification_email {
        xinpengbj@163.com
    }
    notification_email_from 748152983@qq.com
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_xp_2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0                # NIC the VIP binds to
    virtual_router_id 92
    priority 100                  # must be higher than the backup's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 0229            # shared password
    }
    virtual_ipaddress {
        10.0.0.253                # the VIP address
    }
}

virtual_server 10.0.0.253 3306 {  # port monitored on the VIP
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.0.0.102 3306 {
        weight 1
        notify_down /server/script/kill_keepalived.sh   # run if MySQL goes down
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}

The kill script:

[root@apache ~]# cat /server/script/kill_keepalived.sh
#!/bin/sh
/etc/init.d/keepalived stop

Note that each side lists only one real_server: its own local MySQL.

One point worth mentioning: some people online configure both keepalived nodes as BACKUP so that neither preempts the VIP. It depends on your needs: if you do not want the original master to take the VIP back once it recovers, stop it from preempting; otherwise configure it as above (you can add nopreempt on the side with the higher priority). A sketch of the non-preempting variant follows.
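An illustrative sketch of that non-preempting variant (both nodes declare state BACKUP, priorities still differ, and nopreempt goes on the higher-priority side):

vrrp_instance VI_1 {
    state BACKUP            # both nodes start as BACKUP
    nopreempt               # recovered node does not grab the VIP back
    interface eth0
    virtual_router_id 92
    priority 100            # the other node stays lower, e.g. 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 0229
    }
    virtual_ipaddress {
        10.0.0.253
    }
}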

Backup keepalived configuration:

[root@rsync data]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        xinpengbj@163.com
    }
    notification_email_from 748152983@qq.com
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_xp_1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 92
    priority 50
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 0229
    }
    virtual_ipaddress {
        10.0.0.253
    }
}

virtual_server 10.0.0.253 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    protocol TCP

    real_server 10.0.0.104 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 8
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}

6. Data state on both sides

1) Master keepalived state

[root@apache ~]# ipvsadm -L -n --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  10.0.0.253:3306                     9      306        0    31963        0
  -> 10.0.0.102:3306                     9      306        0    31963        0

2) Backup keepalived state

[root@rsync data]# ipvsadm -L -n --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  10.0.0.253:3306                     0        0        0        0        0
  -> 10.0.0.104:3306                     0        0        0        0        0

7. Testing the website

First, confirm the site works under normal conditions.

Now stop the database on Master 1 and look at the master keepalived node:

[root@apache ~]# ip add | grep 253
    inet 10.0.0.253/32 scope global eth0
[root@apache ~]# /etc/init.d/mysqld stop
Shutting down MySQL... [ OK ]
[root@apache ~]# ip add | grep 253
[root@apache ~]#

The VIP is gone.

Check the backup keepalived node:

[root@rsync data]# ip add | grep 253
    inet 10.0.0.253/32 scope global eth0

The VIP has moved over.

Client-side verification

Now post something on the site from a browser.

You can see the two databases stay in sync.

Master recovery test

Now start the database on the master keepalived node, then start the keepalived service:

[root@apache ~]# /etc/init.d/mysqld start
Starting MySQL.. [ OK ]
[root@apache ~]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
[root@apache ~]# ip add | grep 253
    inet 10.0.0.253/32 scope global eth0

Then test the site again.

8. Problems encountered during testing

Problem 1

This appeared on both sides whenever the keepalived VIP floated:

Slave_IO_Running: Yes
Slave_SQL_Running: No
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '1' for key 'PRIMARY'' on query. Default database: 'bbs'. Query: 'INSERT INTO pre_common_onlinetime SET `uid`='1', `thismonth`='10', `total`='10', `lastupdate`='1352792067''

Description

An INSERT of a duplicate primary key stopped the slave SQL thread, while the slave I/O thread kept running normally.

When this shows up in the log, the master skipped the offending operations but they were still recorded in the binary log; the slave replays the identical statements from the binlog and fails with the duplicate-key error.

Once the cause is clear, the fix is straightforward. A workaround found online:

Edit my.cnf:

[mysqld]
slave_skip_errors = 1062
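A standard alternative that skips only the single offending event, rather than every 1062 error (run on the slave; shown as a sketch):

mysql> STOP SLAVE;
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G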

Problem 2

After the master keepalived node was restarted, the database on the backup keepalived node showed:

Slave_IO_Running: No
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

Error log:

121112 23:48:16 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000013' at position 2218626, relay log './rsync-relay-bin.000002' position: 2218365

121112 23:48:16 [Note] Slave I/O thread: connected to master 'slave@10.0.0.102:3306',replication started in log 'mysql-bin.000013' at position 2218626

121112 23:48:16 [ERROR] Error reading packet from server: Could not find first log file name in binary log index file ( server_errno=1236)

121112 23:48:16 [ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file', Error_code: 1236

Fix

Some people online say you must point the slave at the binlog file and position again. Steps:

Method 1 (not executed here; source: http://www.linuxidc.com/Linux/2012-02/54729.htm)

① mysqlbinlog --start-position=627655136 /data/mysql/binlog/mysql-bin.000288
② mysqlbinlog /data/mysql/binlog/mysql-bin.000288 > test.txt
③ Use less to find the last correctly executed position
④ mysql> change master to master_log_file='mysql-bin.000288', master_log_pos=627625631;   (reset the file and position)
⑤ mysql> start slave;

Method 2

What I actually ran was:

change master to MASTER_HOST='10.0.0.102',MASTER_USER='slave',MASTER_PASSWORD='123';

then restarted the slave, and replication worked again.

Problem 3

Everything reports normal, but data does not replicate:

121113 21:25:01 [Note] Error reading relay log event: slave SQL thread was killed
121113 21:25:01 [ERROR] Error reading packet from server: Lost connection to MySQL server during query (server_errno=2013)

MySQL + lvs + keepalived: highly available MySQL reads

Host plan:

Host                      IP           Role
MySQL slave 1             10.0.0.106   MySQL slave
MySQL slave 2             10.0.0.107   MySQL slave
LVS + keepalived master   10.0.0.130   Director
LVS + keepalived backup   10.0.0.103   Director
Apache                    10.0.0.120   Web server
VIP address: 10.0.0.250

1. Configure the slaves to replicate from the master

Since a backup already exists, I imported it straight into the slaves. Note that doing it this way may lose some data (possibly my own operational mistake, but some data did go missing); taking a fresh backup is recommended.

On Slave 1:

[root@nginx bak]# mysql -uroot -p < 104.sql
Logging to file '/tmp/mysql.history'
Enter password:

mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.104', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='123', MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=512;

Query OK, 0 rows affected (0.05 sec)

mysql> start slave ;

Query OK, 0 rows affected (0.00 sec)

On Slave 2:

[root@MYSQL bak]# mysql -uroot -p < 104.sql
Logging to file '/home/mysql.history'
Enter password:

mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.104', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='123', MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=512;

Query OK, 0 rows affected (0.12 sec)

mysql> start slave ;

Query OK, 0 rows affected (0.00 sec)

Then check the status on each side. Slave 1:

[root@nginx bak]# mysql -uroot -p -e "show slave status\G" | grep -i Yes
Enter password:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Slave 2:

[root@MYSQL bak]# mysql -uroot -p -e "show slave status\G" | grep -i Yes
Enter password:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

2. Write a script that checks slave status

The lvs + keepalived hosts run this script against each MySQL slave, so create a check user on every slave (keep the user name and password identical if possible):

grant all on *.* to 'check'@'10.0.0.%' identified by '123';

The script (remember to make it executable):

[root@APACHE script]# cat /server/script/check_slave.sh
#!/bin/sh
# by XinPeng
# QQ 748152983
export PATH=$PATH:/server/shell

user="check"
passwd="123"
host=${1}
query="show slave status\G"
mysqlCmd="/usr/local/mysql/bin"

# first make sure the slave is reachable at all
${mysqlCmd}/mysql -u${user} -p${passwd} -h${host} -e "\s" >/dev/null 2>&1
[ $? -eq 0 ] || exit 1

# grab Slave_IO_Running, Slave_SQL_Running and Seconds_Behind_Master
array=($(${mysqlCmd}/mysql -u${user} -p${passwd} -h${host} -e "${query}" \
        | egrep -i "Running|Seconds_Behind_Master" | awk '{print $2}'))

# fail if either replication thread has stopped
if [ "${array[0]}" == "No" -o "${array[1]}" == "No" ]
then
    exit 1
fi

# fail if the slave lags the master by more than 120 seconds
if [ "${array[2]}" -gt 120 ]
then
    exit 1
fi
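A quick manual check of the script from the director before wiring it into keepalived (illustrative session; exit status 0 means the slave passed):

[root@APACHE script]# chmod +x /server/script/check_slave.sh
[root@APACHE script]# /server/script/check_slave.sh 10.0.0.106
[root@APACHE script]# echo $?
0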

3. Install and configure lvs + keepalived

root@RSYNC ipvsadm-1.24 06:41:18 # ln -s /usr/src/kernels/2.6.18-194.el5-i686/ /usr/src/linux
root@RSYNC ipvsadm-1.24 06:40:12 # tar zxf ipvsadm-1.24.tar.gz
root@RSYNC ipvsadm-1.24 06:41:12 # cd ipvsadm-1.24
root@RSYNC ipvsadm-1.24 06:41:15 # make
root@RSYNC ipvsadm-1.24 06:41:16 # make install
root@RSYNC ipvsadm-1.24 06:41:29 # ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
root@RSYNC ipvsadm-1.24 06:42:03 # lsmod | grep ip_vs
ip_vs 78081 0

Put the keepalived files in their standard locations:

root@RSYNC keepalived-1.1.17 06:44:27 # cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/
root@RSYNC keepalived-1.1.17 06:45:57 # cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
root@RSYNC keepalived-1.1.17 06:47:45 # mkdir /etc/keepalived
root@RSYNC keepalived-1.1.17 06:47:51 # cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
root@RSYNC keepalived-1.1.17 06:47:59 # cp /usr/local/sbin/keepalived /usr/sbin/

The configuration file:

[root@APACHE script]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_db1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 121
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 5258
    }
    virtual_ipaddress {
        10.0.0.250
    }
}

virtual_server 10.0.0.250 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.0.0.106 3306 {
        weight 1
        MISC_CHECK {
            misc_path "/server/script/check_slave.sh 10.0.0.106"
            misc_dynamic
        }
    }

    real_server 10.0.0.107 3306 {
        weight 1
        MISC_CHECK {
            misc_path "/server/script/check_slave.sh 10.0.0.107"
            misc_dynamic
        }
    }
}

Pay close attention to the MISC_CHECK blocks (highlighted in the original): the braces must be separated from the surrounding words by whitespace, and none may be missing or extra; otherwise keepalived appears to start normally yet never removes or re-adds real servers. Also, do not forget to configure the lo address on the MySQL slaves, described below.

4. Test keepalived's health checks

Stop replication on the slave at 10.0.0.107:

mysql> system ifconfig eth0;
eth0    Link encap:Ethernet  HWaddr 00:0C:29:89:45:B9
        inet addr:10.0.0.107  Bcast:255.255.255.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe89:45b9/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:3047874 errors:0 dropped:0 overruns:0 frame:0
        TX packets:2706769 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:337147696 (321.5 MiB)  TX bytes:233725209 (222.8 MiB)
        Interrupt:59 Base address:0x2000

mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)

Check on the keepalived node:

[root@APACHE script]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  10.0.0.250:3306 rr persistent 50
  -> 10.0.0.106:3306 Route 1 0 0

Turn replication back on and check again:

[root@APACHE script]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  10.0.0.250:3306 rr persistent 50
  -> 10.0.0.107:3306 Route 1 0 0
  -> 10.0.0.106:3306 Route 1 0 0

At this point, however, connecting to the VIP still fails, even though it answers ping:

root@RSYNC shell 14:41:05 # ping 10.0.0.250
PING 10.0.0.250 (10.0.0.250) 56(84) bytes of data.
64 bytes from 10.0.0.250: icmp_seq=1 ttl=64 time=1.15 ms

The reason lies in how LVS DR mode works: when a real server unpacks a request whose destination IP is not configured locally, the packet is simply dropped.

Configuring high availability for keepalived itself: on the backup director, just change router_id, priority, and state MASTER so they differ from the primary keepalived node.

5. Configuration on the MySQL slaves

Bind the VIP on lo (the loopback interface) on both Slave 1 and Slave 2:

ifconfig lo:250 10.0.0.250 netmask 255.255.255.255 up

Suppress ARP responses:

echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
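On a real server these commands are usually wrapped in a small init-style script so they can be reapplied after a reboot; a minimal sketch, assuming the VIP 10.0.0.250 used here:

#!/bin/sh
# realserver (sketch): bind the VIP on lo and suppress ARP for it
VIP=10.0.0.250
case "$1" in
start)
    ifconfig lo:250 $VIP netmask 255.255.255.255 up
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    ifconfig lo:250 down
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac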

How I understand these parameters (personal understanding; possibly imprecise)

arp_ignore

echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore

Defines how the host answers ARP queries whose target is a local IP. A value of 1 means: only answer ARP requests whose target address is configured on the interface the request arrived on.

1. The front-end LVS forwards the user's packet to the real server.
2. The address the user requested is the VIP.
3. eth0 and eth1 see that the requested address is not their own, so they do not respond.
4. lo has the VIP bound, so it accepts the packet; after processing, the reply is re-encapsulated and sent back out through the external interface.

arp_announce

echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

Restricts which local source IP an interface announces when sending ARP requests. A value of 2 means: always pick the most appropriate local address for the announcement.

1. A user request arrives and LVS hands it to a web server according to the scheduling algorithm.
2. The request's destination is the VIP; with this parameter set to 2, the web server does not concern itself with the source address.
3. The web server looks across its local interfaces for the packet's destination address, finds the VIP on lo, processes the packet, re-encapsulates it, and returns it to the user.
4. If the address is not found locally, the host sends ARP packets on the internal network through its NIC to find an interface that can respond, and hands the packet to that address.
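To make these four kernel settings survive a reboot, they can also live in /etc/sysctl.conf (apply with sysctl -p):

# /etc/sysctl.conf additions on each real server
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2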

6. Feasibility testing

1) Lock the tables on both slaves:

mysql> use bbs;
Database changed
mysql> flush tables with read lock;
Query OK, 0 rows affected (0.00 sec)

Unfortunately, with the tables locked like this, the website may become inaccessible.

2) Back on the web server:

[root@APACHE script]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 APACHE
::1 localhost6.localdomain6 localhost6
10.0.0.250 mysql.master.vip     <- point the site at the slave cluster's VIP, 10.0.0.250
#10.0.0.250 mysql.slave.vip

Browse the site to test.

With no better option, I simply logged in this way to test.

Test: stop the slave threads on Slave 2:

mysql> system ifconfig eth0;
eth0    Link encap:Ethernet  HWaddr 00:0C:29:89:45:B9
        inet addr:10.0.0.107  Bcast:255.255.255.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe89:45b9/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:3226880 errors:0 dropped:0 overruns:0 frame:0
        TX packets:2788711 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:364100066 (347.2 MiB)  TX bytes:251419886 (239.7 MiB)
        Interrupt:59 Base address:0x2000

mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)

Then check on the keepalived node:

root@RSYNC keepalived-1.1.17 22:37:50 # ipvsadm -L -n --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  10.0.0.250:3306                     0        0        0        0        0
  -> 10.0.0.106:3306                     0        0        0        0        0

Re-enable replication:

mysql> start slave;

Query OK, 0 rows affected (0.00 sec)

Check that it rejoins automatically:

root@RSYNC keepalived-1.1.17 22:38:33 # ipvsadm -L -n --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  10.0.0.250:3306                     0        0        0        0        0
  -> 10.0.0.107:3306                     0        0        0        0        0
  -> 10.0.0.106:3306                     0        0        0        0        0

Throughout the whole process, logged-in users never needed to log in again (in the test environment).

Demonstrating failover

Take down the master keepalived node and one of the back-end slave databases:

root@RSYNC keepalived-1.1.17 22:46:29 # /etc/init.d/keepalived stop
Stopping keepalived: [ OK ]

On the database:

mysql> stop slave;

Query OK, 0 rows affected (0.00 sec)

Check on the backup director:

[root@NFS ipvsadm-1.24]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  10.0.0.250:3306 rr persistent 50
  -> 10.0.0.106:3306 Route 1 0 0

The machine whose slave threads were stopped has been dropped automatically.

[root@NFS ipvsadm-1.24]# ip add | grep 250
[root@NFS ipvsadm-1.24]# ip add | grep 250
    inet 10.0.0.250/32 scope global eth0

The VIP has been taken over automatically.

Client login test: everything still works.

Load balancing with nginx + keepalived

Architecture analysis

nginx performs layer-7 distribution while keepalived provides the VIP, making the reverse proxy highly available; nginx also health-checks the Apache servers behind it, keeping the whole service available.

Advantages

Suits services with moderate traffic and is easy to maintain; regex-based routing is powerful and highly schedulable.

nginx can also serve static content itself and leave dynamic requests to the back end (if the load balancer has capacity to spare).

Drawbacks

All traffic passes through nginx, which usually becomes the bottleneck as traffic grows.

Points to note

When nginx is only load balancing, consider lowering its log level if detailed logs are not needed. If the back-end web servers must record clients' real IP addresses, configure the corresponding module; a sketch follows this list.

If nginx needs to health-check the back-end servers, you cannot use the ip_hash algorithm, so sessions become an issue; a third-party module can compensate, depending on requirements.
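For the real-IP point: the nginx config below already sends X-Forwarded-For; on the Apache side one common approach (shown as a sketch, not the author's actual config) is to log that header:

# httpd.conf sketch: log the client IP that nginx forwards
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog logs/access_log proxied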

Host plan:

Host                IP           Role
Nginx + keepalived  10.0.0.102   nginx layer-7 load balancer
Nginx + keepalived  10.0.0.130   nginx layer-7 load balancer
Apache              10.0.0.104   Web server
Apache              10.0.0.103   Web server
VIP address: 10.0.0.251

1. Configure nginx for load balancing

user nginx nginx;
worker_processes 1;
#worker_cpu_affinity ;

error_log /usr/local/nginx/logs/nginx_error.log crit;
pid       /usr/local/nginx/logs/nginx.pid;

worker_rlimit_nofile 51200;

events {
    use epoll;
    worker_connections 51200;
}

http {
    include      mime.types;
    default_type application/octet-stream;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 128k;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 600;
    tcp_nodelay on;
    server_tokens off;

    open_file_cache max=65535 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;

    log_format access '$remote_addr - $remote_user [$time_local] $request '
                      '"$status" $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" ';
    access_log /usr/local/nginx/logs/nginx_access.log access;

    client_max_body_size 10M;
    client_body_buffer_size 128k;
    proxy_connect_timeout 600;
    proxy_read_timeout 600;
    proxy_send_timeout 600;
    proxy_buffer_size 16k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;

    upstream www.a.com {
        server 10.0.0.104:80 weight=1 max_fails=2 fail_timeout=30s;
        server 10.0.0.103:80 weight=1 max_fails=2 fail_timeout=30s;
    }

    server {
        listen 80;
        server_name www.a.com;

        location / {
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://www.a.com;
        }

        location /nginx_status {
            stub_status on;
            access_log off;
            allow 10.0.0.101;
            deny all;
        }
    }
}

Test access.

A script that keeps nginx running:

#!/bin/sh
# restart nginx if it dies; stop keepalived if it will not start
while true
do
    nginxProcess=`ps -ef | grep nginx | grep -v grep | wc -l`
    if [ "$nginxProcess" -eq 0 ]
    then
        /application/nginx-1.2.4/sbin/nginx
        sleep 5
        nginxProcess=`ps -ef | grep nginx | grep -v grep | wc -l`
        if [ "$nginxProcess" -eq 0 ]
        then
            # if nginx cannot be started, stop keepalived so the VIP moves away
            /etc/init.d/keepalived stop
        fi
    fi
    sleep 5
done
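The script loops forever, so it would presumably be launched in the background, for example (script path hypothetical):

nohup /server/script/check_nginx.sh >/dev/null 2>&1 &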

Configure the other nginx host the same way.

2. Configure keepalived in a dual-master layout

Configuration file:

[root@apache keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        xinpengbj@163.com
    }
    notification_email_from 748152983@qq.com
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_nginx_1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 112
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 0229
    }
    virtual_ipaddress {
        10.0.0.111
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 151
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 0229
    }
    virtual_ipaddress {
        10.0.0.222
    }
}

On the other node, change the state, priority, and router_id accordingly.

3. Verify keepalived

Restart both and check the state: each side holds one of the master IPs.

[root@apache keepalived]# ip add | egrep '111|222'
    inet 10.0.0.111/32 scope global eth0

The other side:

root@RSYNC keepalived 00:58:12 # ip add | egrep '111|222'
    inet 10.0.0.222/32 scope global eth0

Failover test:

[root@apache keepalived]# /etc/init.d/keepalived stop
Stopping keepalived: [ OK ]
[root@apache keepalived]# ip add | egrep '111|222'
[root@apache keepalived]#

Then watch the other side:

root@RSYNC keepalived 01:00:21 # ip add | egrep '111|222'
    inet 10.0.0.222/32 scope global eth0
    inet 10.0.0.111/32 scope global eth0

Start keepalived again:

[root@apache keepalived]# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
[root@apache keepalived]# ip add | egrep '111|222'
    inet 10.0.0.111/32 scope global eth0

And the other side gives the IP back:

root@RSYNC keepalived 01:00:54 # ip add | egrep '111|222'
    inet 10.0.0.222/32 scope global eth0
root@RSYNC keepalived 01:02:08 #

4. Test overall cluster availability

Add an entry to the local hosts file: 10.0.0.111 www.a.com, then browse the site.

Stop the master keepalived and check: access is unaffected.

Then stop the server at 10.0.0.103 and check again: requests fail over to 10.0.0.104.

One caveat: my compiled Apache needs the domain mapped (a virtual host configured) before the site can be reached.

As for the full dual-master architecture: configure DNS to resolve multiple IP addresses for the same domain and let DNS round-robin spread the load, i.e.
10.0.0.111 www.a.com
10.0.0.222 www.a.com
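In an actual DNS zone that is simply two A records for the same name; a BIND-style sketch (zone name assumed):

; fragment of the a.com zone: round-robin across both VIPs
www    IN    A    10.0.0.111
www    IN    A    10.0.0.222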

Installing and configuring LVS by hand

Check whether it is already present:

[root@nginx tools]# lsmod | grep ip_vs
[root@nginx tools]# cat /etc/redhat-release
CentOS release 5.5 (Final)
[root@nginx tools]# uname -r
2.6.18-194.el5
[root@nginx tools]# ln -s /usr/src/kernels/2.6.18-194.el5-i686 /usr/src/linux

The path passed to ln must match the version reported by uname -r; a production box may have several such directories. If the package is missing, run yum install kernel-devel -y. The ln step can also be deferred until the kernel headers are needed during compilation.

tar zxf ipvsadm-1.24.tar.gz
cd ipvsadm-1.24
make
make install

lsmod | grep ip_vs

[root@nginx ipvsadm-1.24]# /sbin/ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@nginx ipvsadm-1.24]# lsmod | grep ip_vs
ip_vs 78081 0

Running ipvsadm loads the ip_vs module.

An error during the ipvsadm install

The following error blocked the install for a long time, and searching online turned up no fix:

# make

make -C libipvs

make[1]: Entering directory `/var/tmp/ipvsadm-1.24/libipvs'

gcc -Wall -Wunused -Wstrict-prototypes -g -O2 -I/usr/src/linux/include -DHAVE_NET_IP_VS_H -c -o libipvs.o libipvs.c
libipvs.c:79: error: dereferencing pointer to incomplete type
...

libipvs.c:309: error: dereferencing pointer to incomplete type

libipvs.c:315: error: `IP_VS_SO_GET_DAEMON' undeclared (first use in this function)
libipvs.c: At top level:
libipvs.c:33: error: storage size of `ipvs_info' isn't known
libipvs.c:132: error: `IP_VS_SO_SET_DELDEST' undeclared (first use in this function)
make[1]: *** [libipvs.o] Error 1
make[1]: Leaving directory `/var/tmp/ipvsadm-1.24/libipvs'
make: *** [libs] Error 2

The wiki on the official site finally revealed the cause: the build cannot find the kernel source under /usr/src/linux.

[root@localhost /]# cd /usr/src
[root@localhost src]# mkdir linux
[root@localhost src]# rpm -ivh kernel-devel-2.6.18-164.el5.i686.rpm     (a kernels directory appears under /usr/src)
[root@localhost src]# cd kernels
[root@localhost kernels]# ls
2.6.18-164.el5-i686

That 2.6.18-164.el5-i686 directory is the kernel source; the final step is to link to it.
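The link command itself is cut off in the original; presumably it was along these lines:

[root@localhost kernels]# ln -s /usr/src/kernels/2.6.18-164.el5-i686 /usr/src/linux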

I drew two diagrams to sum up my own understanding.

Keepalived workflow

Data flow in the lvs + keepalived architecture
