Userspace management tools
Installing ipvsadm
Installing via yum
[root@host2 ~]# yum install -y ipvsadm
[root@host2 ~]# rpm -ql ipvsadm
/etc/sysconfig/ipvsadm-config
# parameter file for the service unit
/usr/lib/systemd/system/ipvsadm.service
# the service unit; when enabled, it loads the ipvs rules from /etc/sysconfig/ipvsadm at boot
/usr/sbin/ipvsadm
# the rule management tool
/usr/sbin/ipvsadm-restore
# the rule restore tool
/usr/sbin/ipvsadm-save
# the rule save tool
/usr/share/doc/ipvsadm-1.27
/usr/share/doc/ipvsadm-1.27/README
/usr/share/man/man8/ipvsadm-restore.8.gz
/usr/share/man/man8/ipvsadm-save.8.gz
/usr/share/man/man8/ipvsadm.8.gz
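A quick sanity check after installation (running any ipvsadm command loads the kernel ip_vs module on demand):
[root@host2 ~]# ipvsadm -Ln          # prints the (initially empty) rule table
[root@host2 ~]# lsmod | grep ip_vs   # the ip_vs kernel module is now loaded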
Compile and install from source
1. Download the source tarball
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
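The archive still has to be unpacked before building (the prompts in the following steps are inside the unpacked source directory):
tar xf ipvsadm-1.26.tar.gz
cd ipvsadm-1.26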
2. Install the dependency packages (6 in total)
[root@host3 ipvsadm-1.26]# yum install -y libnl* popt*
...
Installed:
libnl.x86_64 0:1.1.4-3.el7 libnl-devel.x86_64 0:1.1.4-3.el7
libnl3-devel.x86_64 0:3.2.28-4.el7 libnl3-doc.x86_64 0:3.2.28-4.el7
popt-devel.x86_64 0:1.13-16.el7 popt-static.x86_64 0:1.13-16.el7
3. Build and install (two steps)
[root@host3 ipvsadm-1.26]# make && make install
4. The result: 3 program files and 1 service script
The same files as the yum install, only the documentation is missing, which does not matter
[root@host3 ipvsadm-1.26]# ll /usr/sbin/ipvs*
-rwxr-xr-x 1 root root 104960 Sep 7 16:36 /usr/sbin/ipvsadm
-rwxr-xr-x 1 root root 621 Sep 7 16:36 /usr/sbin/ipvsadm-restore
-rwxr-xr-x 1 root root 791 Sep 7 16:36 /usr/sbin/ipvsadm-save
[root@host3 ipvsadm-1.26]# ll /etc/init.d/ipvsadm
-rwxr-xr-x 1 root root 2423 Sep 7 16:36 /etc/init.d/ipvsadm
ipvsadm syntax
[root@host2 ~]# man ipvsadm
[root@host2 ~]# ipvsadm --help
The man page is the more detailed of the two
[root@host2 ~]# ipvsadm -h
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
Usage:
ipvsadm -A|E -t|u|f service-address [-s scheduler] [-p [timeout]] [-M netmask] [--pe persistence_engine] [-b sched-flags]
# add or edit a tcp|udp|firewall-mark virtual service: the client-facing address and the scheduling algorithm
# eg:ipvsadm -A -t 192.168.80.100:80 -s rr
ipvsadm -D -t|u|f service-address
# delete a virtual service
ipvsadm -C # clear the whole rule table
ipvsadm -R # restore rules from a file, equivalent to ipvsadm-restore
ipvsadm -S [-n] # dump the rules (redirect to a file to save them), equivalent to ipvsadm-save
ipvsadm -a|e -t|u|f service-address -r server-address [options]
# add or edit a real server under a virtual service
# eg: ipvsadm -a -t 192.168.80.100:80 -r 192.168.80.101:80 -m
ipvsadm -d -t|u|f service-address -r server-address
# delete a real server from a virtual service
ipvsadm -L|l [options]
# list the current rules
ipvsadm -Z [-t|u|f service-address]
ipvsadm --set tcp tcpfin udp
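# eg (a hedged sketch): ipvsadm --set 900 120 300
#     sets the idle timeouts, in seconds, for TCP sessions, TCP sessions after a FIN, and UDP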
ipvsadm --start-daemon state [--mcast-interface interface] [--syncid sid]
ipvsadm --stop-daemon state
ipvsadm -h
...
Key options and parameters
service-address is the service LVS exposes to clients, in ip:port form
server-address is the address of a real server (backend node), in ip:port form
one service-address maps to one or more server-addresses
-A add
-E edit
-D delete (and -C clears the whole table)
The uppercase options operate on a service-address
-a add
-e edit
-d delete
The lowercase options operate on a server-address
Option for the scheduling algorithm, one of 10:
--scheduler -s scheduler one of rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq,
Option for the LVS forwarding mode, one of three:
--gatewaying -g gatewaying (direct routing) (default)
--ipip -i ipip encapsulation (tunneling)
--masquerading -m masquerading (NAT)
-g direct routing (the default), VS/DR
-i IP-in-IP tunneling, VS/TUN
-m NAT (masquerading), VS/NAT
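Putting the pieces together, a minimal worked sketch using the VIP and real servers that appear later in this section (and assuming the RS-side DR setup is already in place):
[root@lvs ~]# ipvsadm -A -t 192.168.80.200:80 -s wrr                        # virtual service, weighted round-robin
[root@lvs ~]# ipvsadm -a -t 192.168.80.200:80 -r 192.168.80.102:80 -g -w 1  # rs1, DR mode, weight 1
[root@lvs ~]# ipvsadm -a -t 192.168.80.200:80 -r 192.168.80.103:80 -g -w 2  # rs2, DR mode, weight 2
[root@lvs ~]# ipvsadm -Ln                                                   # verify the rules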
FWM: binding multiple clusters
FWM (firewall mark): iptables can tag selected traffic with a numeric mark, and an ipvs cluster rule can reference that mark instead of an ip:port. If several kinds of traffic carry the same mark, referencing that single mark binds the scheduling rules of multiple clusters together.
1. Enable SSL on httpd
[root@rs1 ~]# !rpm
rpm -ql mod_ssl
/etc/httpd/conf.d/ssl.conf
/etc/httpd/conf.modules.d/00-ssl.conf
/usr/lib64/httpd/modules/mod_ssl.so
/usr/libexec/httpd-ssl-pass-dialog
/var/cache/httpd/ssl
[root@rs1 ~]# yum install -y mod_ssl
# install the mod_ssl module; the config file, module, and self-signed certificate it ships with give httpd SSL support
# after installing the module, simply restart httpd
[root@router ~]# curl -k https://192.168.80.102
rs1:192.168.80.102
[root@router ~]# curl -k https://192.168.80.103
rs2:192.168.80.103
# or test from a browser
2. Without FWM
[root@client ~]# curl -k https://192.168.80.200:443
rs1:192.168.80.102
[root@client ~]# curl -k https://192.168.80.200:443
rs2:192.168.80.103
[root@client ~]# curl 192.168.80.200
rs2:192.168.80.103
[root@client ~]# curl 192.168.80.200
rs1:192.168.80.102
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.80.200:80 wlc
-> 192.168.80.102:80 Route 1 0 0
-> 192.168.80.103:80 Route 1 0 0
TCP 192.168.80.200:443 wlc
-> 192.168.80.102:443 Route 1 0 0
-> 192.168.80.103:443 Route 1 0 0
# For clients hitting both HTTP on 80 and HTTPS on 443, two separate sets of cluster rules are needed on the LVS host,
# because in DR mode ports cannot be remapped: a request to port 80 can only be forwarded to an RS's port 80, and a request to 443 only to an RS's port 443,
# so two sets of scheduling rules are required
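The two rule sets in the listing above would have been built roughly like this (a sketch; the defaults, wlc and -g, match the output shown):
[root@lvs ~]# ipvsadm -A -t 192.168.80.200:80                  # HTTP cluster (wlc is the default scheduler)
[root@lvs ~]# ipvsadm -a -t 192.168.80.200:80 -r 192.168.80.102 -g
[root@lvs ~]# ipvsadm -a -t 192.168.80.200:80 -r 192.168.80.103 -g
[root@lvs ~]# ipvsadm -A -t 192.168.80.200:443                 # HTTPS cluster, a second, separate rule set
[root@lvs ~]# ipvsadm -a -t 192.168.80.200:443 -r 192.168.80.102 -g
[root@lvs ~]# ipvsadm -a -t 192.168.80.200:443 -r 192.168.80.103 -g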
3. Binding multiple clusters with a firewall mark
# Given the situation above, iptables can tag the different kinds of traffic with the same mark,
# and the ipvs cluster definition can then reference that mark, so the clusters for multiple destination ports are bound into a single set of ipvs rules
# The tagging syntax is as follows
# mangle table, PREROUTING chain; NUMBER is an arbitrary numeric mark of your choosing
iptables -t mangle -A PREROUTING -d $vip -p $proto --dport $port -j MARK --set-mark NUMBER
# tag the traffic to both the http and https ports with mark 9
[root@lvs ~]# iptables -t mangle -A PREROUTING -d 192.168.80.200 -p tcp --dport 80 -j MARK --set-mark 9
[root@lvs ~]# iptables -t mangle -A PREROUTING -d 192.168.80.200 -p tcp --dport 443 -j MARK --set-mark 9
# when defining the virtual service, reference mark 9
[root@lvs ~]# ipvsadm -A -f 9
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
FWM 9 wlc
# add the real servers; only the IP is needed, no port, because with DR the ports always map one-to-one; just make sure the backends listen on 80 and 443
[root@lvs ~]# ipvsadm -a -f 9 -r 192.168.80.102
[root@lvs ~]# ipvsadm -a -f 9 -r 192.168.80.103
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
FWM 9 wlc
-> 192.168.80.102:0 Route 1 0 0
-> 192.168.80.103:0 Route 1 0 0
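The marks themselves can be verified on the director (a quick check):
[root@lvs ~]# iptables -t mangle -L PREROUTING -n -v
# both the dpt:80 and the dpt:443 rule should show "MARK set 0x9"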
4. Client test
[root@client ~]# while true;do curl 192.168.80.200;curl -k https://192.168.80.200;sleep 1;done
rs2:192.168.80.103
rs1:192.168.80.102
rs2:192.168.80.103
rs2:192.168.80.103
# As shown, both http and https are scheduled through a single mark-based set of ipvs rules!
Persistent connections
Setting the sh scheduler aside, LVS schedules every connection independently; under rr, even connections from the same client are spread round-robin, which works against session keeping.
LVS therefore offers persistent connections: whatever the scheduling algorithm, even wrr, once persistence is enabled, connections from the same client arriving within a time window (360s by default) are not re-scheduled but sent to the same RS, which makes session keeping easy.
Persistence granularities (port affinity):
- per-port persistence: each port is defined as its own virtual service and scheduled independently;
- per-firewall-mark persistence: the virtual service is defined by a firewall mark, so applications on several ports are scheduled as one unit, i.e. the so-called port affinity;
- per-client persistence: the virtual service is defined on port 0, so all of a client's requests, to whatever application, go to the same backend host; this must be defined as persistent (see the sketch below).
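For the per-client (port-0) variant, a minimal sketch using this section's VIP:
[root@lvs ~]# ipvsadm -A -t 192.168.80.200:0 -s rr -p
# port 0 matches every destination port on the VIP; ipvs only accepts it together with -p,
# so all of a client's requests, whatever the port, land on the same RS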
1. Configure a persistence rule
[root@lvs ~]# ipvsadm -A -t 192.168.80.200:443
[root@lvs ~]# ipvsadm -E -t 192.168.80.200:443 -p
# just add the -p option; the default timeout is 360s
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.80.200:443 wlc persistent 360
2. Add the real servers
[root@lvs ~]# ipvsadm -a -t 192.168.80.200:443 -r 192.168.80.102:443
[root@lvs ~]# ipvsadm -a -t 192.168.80.200:443 -r 192.168.80.103:443
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.80.200:443 wlc persistent 360
-> 192.168.80.102:443 Route 1 0 0
-> 192.168.80.103:443 Route 1 0 0
3. Client access: the responding RS only changes after 360s
[root@client ~]# while true;do curl -k https://192.168.80.200;sleep 1;done
rs2:192.168.80.103
rs2:192.168.80.103
rs2:192.168.80.103
... after 360s
rs1:192.168.80.102
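The timeout can also be set explicitly by giving -p a value, for instance (a sketch, not applied in the walkthrough above, which keeps the default 360s):
[root@lvs ~]# ipvsadm -E -t 192.168.80.200:443 -p 600   # a 10-minute persistence window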
Persisting ipvs rules
Rules configured on the ipvsadm command line are lost after a reboot; to persist them, save them to a config file. The procedure:
1. Save the rules to a config file
ipvsadm -S /path/to/file
ipvsadm-save > /path/to/file
# the two are equivalent; the recommended path is /etc/sysconfig/ipvsadm, because once the ipvsadm service unit is enabled it reads that file automatically and loads the rules at boot
ipvsadm -R /path/to/file
ipvsadm-restore < /path/to/file
# manual restore
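After saving, /etc/sysconfig/ipvsadm holds plain option lines that ipvsadm -R can replay; for the rules shown in step 3 below it would look roughly like this (a sketch, assuming the rules were saved with -n so addresses and ports stay numeric):
-A -t 192.168.80.200:443 -s wrr
-a -t 192.168.80.200:443 -r 192.168.80.102:443 -g -w 1
-a -t 192.168.80.200:443 -r 192.168.80.103:443 -g -w 1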
2. Enable the ipvsadm service unit
systemctl enable ipvsadm.service
3. After a reboot, the rules are still there
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.80.200:443 wrr
-> 192.168.80.102:443 Route 1 0 0
-> 192.168.80.103:443 Route 1 0 0
ldirectord health checking
LVS itself does not monitor the health of the real servers, nor does it remove failed nodes or bring recovered ones back online; an external helper is needed for health checking. Options include:
- a custom script, e.g. one that loops probing a page on each backend web server and calls ipvsadm to remove a node on failure and re-add it on recovery (a minimal sketch follows this list)
- the ldirectord program: "Daemon to monitor remote services and control Linux Virtual Server. ldirectord is a daemon to monitor and administer real servers in a cluster of load balanced virtual servers. ldirectord typically is started from heartbeat but can also be run from the command line."
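A minimal sketch of the script approach from the first bullet, assuming the VIP and real servers used throughout this section (the file name healthcheck.sh and the probe interval are made up for illustration):
#!/bin/bash
# healthcheck.sh -- hypothetical example: probe each RS and add it to or remove it
# from the 192.168.80.200:80 virtual service accordingly (run on the director;
# assumes the virtual service itself has already been defined)
VIP=192.168.80.200:80
RS_LIST="192.168.80.102 192.168.80.103"

while true; do
    for rs in $RS_LIST; do
        if curl -sf -o /dev/null --connect-timeout 2 http://$rs/index.html; then
            # RS healthy: re-add it if it is not in the rule set
            ipvsadm -Ln | grep -q "$rs:80" || ipvsadm -a -t $VIP -r $rs:80 -g
        else
            # RS failed: remove it from the virtual service
            ipvsadm -Ln | grep -q "$rs:80" && ipvsadm -d -t $VIP -r $rs:80
        fi
    done
    sleep 5
done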
1. Download the rpm package
[root@lvs ~]# wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/x86_64/ldirectord-3.9.6-0rc1.1.2.x86_64.rpm
[root@lvs ~]# yum localinstall -y ldirectord-3.9.6-0rc1.1.2.x86_64.rpm
Some dependency packages are pulled in as well
2. Files in the package
[root@lvs ~]# rpm -qpl ldirectord-3.9.6-0rc1.1.2.x86_64.rpm
warning: ldirectord-3.9.6-0rc1.1.2.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 17280ddf: NOKEY
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord # configuration file
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/lib/systemd/system/ldirectord.service # service unit
/usr/sbin/ldirectord # main program
/usr/share/doc/ldirectord-3.9.6
/usr/share/doc/ldirectord-3.9.6/COPYING
/usr/share/doc/ldirectord-3.9.6/ldirectord.cf # configuration template
/usr/share/man/man8/ldirectord.8.gz
[root@lvs ~]# cp /usr/share/doc/ldirectord-3.9.6/ldirectord.cf /etc/ha.d/
[root@lvs ~]#
# copy the template config into the configuration directory
3. Prepare a sorry-server page on the LVS host
It answers when all RSes are down; it can live on a separate machine, or the LVS host can serve it itself
[root@lvs ~]# yum install -y httpd
[root@lvs ~]# echo "sorry server" > /var/www/html/index.html
[root@lvs ~]# systemctl start httpd
4. Edit the configuration file
[root@lvs ~]# vim /etc/ha.d/ldirectord.cf
...
virtual=192.168.80.200:80
real=192.168.80.102:80 gate
real=192.168.80.103:80 gate
fallback=127.0.0.1:80 gate
service=http
scheduler=rr
#persistent=600
#netmask=255.255.255.255
protocol=tcp
checktype=negotiate
checkport=80
request="index.html"
# only the values above need changing relative to the template; ldirectord generates the ipvs rules from them automatically,
# but the VIP on the LVS host still has to be configured by hand
[root@lvs ~]# ipvsadm -C
[root@lvs ~]# ip addr add 192.168.80.200/32 dev lo:1
# clear any leftover rules; the VIP itself still has to be added manually
[root@lvs ~]# systemctl start ldirectord
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.80.200:80 rr
-> 192.168.80.102:80 Route 1 0 3
-> 192.168.80.103:80 Route 1 0 15
# start ldirectord and inspect the ipvs rules it generated
5. Access test
[root@rs1 ~]# systemctl stop httpd
# once httpd on rs1 is stopped, scheduling switches automatically and requests go only to rs2
[root@client ~]# while true;do curl http://192.168.80.200;sleep 1;done
rs2:192.168.80.103
rs1:192.168.80.102
rs2:192.168.80.103
rs1:192.168.80.102
rs2:192.168.80.103
rs2:192.168.80.103
[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.80.200:80 rr
-> 192.168.80.103:80 Route 1 0 10
# rs1 has been removed from the ipvs rules automatically
# after httpd on rs1 is restarted, it is brought back in
# with both RSes stopped, the sorry server answers instead:
rs1:192.168.80.102
rs2:192.168.80.103
curl: (7) Failed connect to 192.168.80.200:80; Connection refused
rs2:192.168.80.103
rs2:192.168.80.103
rs2:192.168.80.103
rs2:192.168.80.103
rs2:192.168.80.103
curl: (7) Failed connect to 192.168.80.200:80; Connection refused
sorry server
sorry server
sorry server