Configuring keepalived for LVS high availability

Configuring a single LVS instance with keepalived

Topology

The network topology implements the LVS DR model.

See the LVS DR-model configuration notes for details. The steps required for each role are as follows:

1. Router

  • Enable ip_forward
  • Have routes to both network segments

2. LVS director

  • Manually configure the egress route
  • The ipvs rules and the VIP are both set up via the keepalived configuration file

3. The two RSs

  • Configure the VIP
  • Set the two ARP kernel parameters (arp_ignore and arp_announce)
  • Configure the egress route
  • Prepare the httpd test pages

4. Client

  • Point the default route at the router
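The RS-side steps above can be sketched as a small script. This is only an illustration, not part of the original lab: the VIP and gateway values are this lab's, and the `run` dry-run helper is added here for safety (the script only prints the commands unless APPLY=1).

```shell
# Sketch of the per-RS bootstrap for LVS DR mode: ARP kernel parameters,
# VIP on a loopback alias, and the egress route. Values are from this lab.
VIP=192.168.80.200
GW=192.168.80.101
run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "$*"; fi; }
run sysctl -w net.ipv4.conf.all.arp_ignore=1    # answer ARP only for addresses on the ingress interface
run sysctl -w net.ipv4.conf.lo.arp_ignore=1
run sysctl -w net.ipv4.conf.all.arp_announce=2  # always pick the best local source address for ARP
run sysctl -w net.ipv4.conf.lo.arp_announce=2
run ip addr add "$VIP"/32 dev lo label lo:1     # VIP on a lo alias so the RS accepts it without ARPing it
run route add default gw "$GW"                  # egress route toward the router
```

Run it as `APPLY=1 sh rs-dr-setup.sh` on each RS once the printed commands look right.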

Configuring ipvs rules via keepalived

1. Configure keepalived.conf as follows:

[root@lvs ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id lvs1
}
# Defines the alert-email settings

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.200/32 dev eth0 label eth0:1
#        192.168.80.200/32 dev lo label lo:1

    }
}
# Defines one VRRP instance and one VIP. The VIP will be configured on the eth0 alias interface eth0:1; defining the VIP on the lo interface does not take effect.


virtual_server 192.168.80.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 50
    protocol TCP

    real_server 192.168.80.102 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
    real_server 192.168.80.103 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
}
# On top of the VIP, defines a cluster service on port 80 with two RSs behind it; each RS gets a TCP_CHECK health check

2. Restart keepalived and check that the ipvs rules and the VIP have been generated.

With the DR model, the egress route on the director still has to be configured manually.

[root@lvs ~]# service  keepalived restart

[root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.80.200:80 rr
  -> 192.168.80.102:80            Route   1      0          1         
  -> 192.168.80.103:80            Route   1      0          0      
  
  
[root@lvs ~]# ifconfig  -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.104  netmask 255.255.255.0  broadcast 192.168.80.255
        inet6 fe80::6247:1fa9:b7d7:84b9  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4ce0:11e4:5740:290c  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::eb02:a6b5:be84:952  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:55:b3:ea  txqueuelen 1000  (Ethernet)
        RX packets 8295  bytes 692665 (676.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20337  bytes 1644898 (1.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.200  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:55:b3:ea  txqueuelen 1000  (Ethernet)
# Both the virtual-server rules and the VIP are now in effect
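The same verification can be scripted. Below is a small parsing sketch (not from the original setup) that counts the real servers behind a given virtual server in `ipvsadm -Ln` output; sample output is inlined so the parser can be exercised anywhere.

```shell
# Count "->" real-server lines under a given virtual server in `ipvsadm -Ln` output.
count_real_servers() {  # $1 = "VIP:port"
  awk -v vs="$1" '
    $1 == "TCP" { in_vs = ($2 == vs); next }  # enter/leave the matching VS section
    in_vs && $1 == "->" { n++ }
    END { print n + 0 }'
}
sample='TCP  192.168.80.200:80 rr
  -> 192.168.80.102:80            Route   1      0          1
  -> 192.168.80.103:80            Route   1      0          0'
printf '%s\n' "$sample" | count_real_servers 192.168.80.200:80   # prints 2
```

On the live director the input would come from `ipvsadm -Ln | count_real_servers 192.168.80.200:80`.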

Client access test

[root@client ~]# curl 192.168.80.200
rs2:192.168.80.103
[root@client ~]# curl 192.168.80.200
rs1:192.168.80.102
[root@client ~]# curl 192.168.80.200
rs2:192.168.80.103
[root@client ~]# curl 192.168.80.200
rs1:192.168.80.102
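The alternating responses above show the rr scheduler at work; tallying them (with the responses inlined here as sample data) confirms the even split:

```shell
# Tally which RS answered each request in the round-robin test above.
responses='rs2:192.168.80.103
rs1:192.168.80.102
rs2:192.168.80.103
rs1:192.168.80.102'
printf '%s\n' "$responses" | cut -d: -f1 | sort | uniq -c   # each RS answered twice
```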

Configuring LVS high availability with keepalived

Add another node and install keepalived via yum to build a two-node keepalived+LVS high-availability pair. On CentOS 7 the default keepalived version is 1.3.5, while the other node runs 1.2.19 compiled from source. In this experiment the configurations were compatible and VRRP communication worked, but in production the versions should be identical!

The VIP must be configured as an alias of a real interface. If the VIP is configured on a standalone interface, that machine will keep the VIP until it is shut down.

Topology

Same as in the LVS DR-model experiment.

Configuration

1. Add a second keepalived+LVS host

[root@host6 ~]# yum install -y keepalived
[root@host6 ~]# yum install -y ipvsadm
[root@host6 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
# Install keepalived and the ipvsadm tool directly via yum, and back up the default configuration file

[root@lvs ~]# scp /etc/keepalived/keepalived.conf 192.168.80.105:/etc/keepalived/
# Copy the finished keepalived configuration file from the first lvs+keepalived host to the new keepalived+LVS host

[root@host6 ~]# vim /etc/keepalived/keepalived.conf
[root@host6 ~]# cat /etc/keepalived/keepalived.conf
# Edit the configuration file and make a few changes
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id lvs2 # router_id must be changed to a unique value
}

vrrp_instance VI_1 { # the VRRP instance name must stay the same on both nodes
    state BACKUP # this node's state is BACKUP
    interface eth0
    virtual_router_id 51 # must be identical on all members of the same virtual router
    priority 80 # as BACKUP this node gets priority 80, lower than the other node's 100; with e.g. 120 it would become master in practice despite the BACKUP state
    
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    } # the authentication password must match on both nodes
    
# The VIP, the cluster service on it, and its RSs must stay identical on both nodes; no changes needed
    virtual_ipaddress {
        192.168.80.200/32 dev eth0 label eth0:1
#        192.168.80.200/32 dev lo label lo:1

    }
}

virtual_server 192.168.80.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 50
    protocol TCP
.........
}



# Manually configure the gateway route; otherwise, after failover to the backup node there is no return route and clients cannot connect
[root@host6 ~]# route add default gw 192.168.80.101

2. Start keepalived on the new host. The ipvs rules are generated automatically at startup, but without the VIP they do not take effect yet. The two nodes continuously exchange VRRP adverts so that the backup can take over promptly if the master fails.
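Which node currently owns the VIP can be checked mechanically. A small parsing sketch (not from the original setup), fed `ip -o addr show`-style sample output so it runs anywhere; on a live node you would pipe the real command in:

```shell
# Succeed if the given VIP appears in `ip -o addr show`-style input on stdin.
has_vip() { grep -qF " inet $1/" ; }
sample='2: eth0    inet 192.168.80.105/24 brd 192.168.80.255 scope global eth0
2: eth0    inet 192.168.80.200/32 scope global eth0:1'
if printf '%s\n' "$sample" | has_vip 192.168.80.200; then echo "VIP held"; else echo "VIP absent"; fi
```

Live usage would be `ip -o addr show dev eth0 | has_vip 192.168.80.200`.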

3. Failover test

Stop keepalived on the master node:
[root@lvs ~]# service keepalived stop

# Backup-node log: it is promoted to MASTER
Sep 11 14:09:55 host6 systemd: Reloading.
Sep 11 14:09:55 host6 yum[1181]: Installed: ipvsadm-1.27-8.el7.x86_64
Sep 11 14:11:36 host6 Keepalived_vrrp[1148]: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep 11 14:11:37 host6 Keepalived_vrrp[1148]: VRRP_Instance(VI_1) Entering MASTER STATE
Sep 11 14:11:37 host6 Keepalived_vrrp[1148]: VRRP_Instance(VI_1) setting protocol VIPs.
Sep 11 14:11:37 host6 Keepalived_vrrp[1148]: Sending gratuitous ARP on eth0 for 192.168.80.200
Sep 11 14:11:37 host6 Keepalived_vrrp[1148]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 192.168.80.200

# Master-node log: it removes the VIP from itself
Sep 11 14:11:36 lvs Keepalived[6251]: Stopping Keepalived v1.2.19 (09/10,2020)
Sep 11 14:11:36 lvs Keepalived_healthcheckers[6253]: Removing service [192.168.80.102]:80 from VS [192.168.80.200]:80
Sep 11 14:11:36 lvs Keepalived_healthcheckers[6253]: Removing service [192.168.80.103]:80 from VS [192.168.80.200]:80
Sep 11 14:11:36 lvs Keepalived_vrrp[6254]: VRRP_Instance(VI_1) sending 0 priority
Sep 11 14:11:36 lvs Keepalived_vrrp[6254]: VRRP_Instance(VI_1) removing protocol VIPs.


# After the old master is started again, host6 sees the higher-priority advert and returns to BACKUP
Sep 11 14:13:49 host6 Keepalived_vrrp[1148]: VRRP_Instance(VI_1) Received advert with higher priority 100, ours 80
Sep 11 14:13:49 host6 Keepalived_vrrp[1148]: VRRP_Instance(VI_1) Entering BACKUP STATE
Sep 11 14:13:49 host6 Keepalived_vrrp[1148]: VRRP_Instance(VI_1) removing protocol VIPs.


# Now on the backup node host6: the rules are present and it has taken over the VIP; clients can still connect, and the switchover is very fast
[root@host6 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.80.200:80 rr
  -> 192.168.80.102:80            Route   1      0          0         
  -> 192.168.80.103:80            Route   1      0          0         
[root@host6 ~]# ifconfig  -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.105  netmask 255.255.255.0  broadcast 192.168.80.255
        inet6 fe80::6247:1fa9:b7d7:84b9  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4ce0:11e4:5740:290c  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::eb02:a6b5:be84:952  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a5:67:42  txqueuelen 1000  (Ethernet)
        RX packets 15842  bytes 11187512 (10.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10949  bytes 2750770 (2.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.200  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:a5:67:42  txqueuelen 1000  (Ethernet)

4. Backend health-check test

# Client accesses the service in a loop
# Then the httpd service on .103 is stopped
# Regardless of which role a node currently holds (master or backup), its ipvs rules drop node .103!
# So the backup node is clearly also monitoring the backend RSs continuously


[root@host6 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.80.200:80 rr
  -> 192.168.80.102:80            Route   1      0          0  
  

  [root@lvs ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.80.200:80 rr
  -> 192.168.80.102:80            Route   1      0          57  
  
  
# Client access fails for a moment, then requests are scheduled only to the remaining healthy node!
rs2:192.168.80.103
rs1:192.168.80.102
rs2:192.168.80.103
rs1:192.168.80.102
curl: (7) Failed connect to 192.168.80.200:80; Connection refused
rs1:192.168.80.102
rs1:192.168.80.102
rs1:192.168.80.102
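The removal above is driven by TCP_CHECK, which is essentially a timed TCP connect. A rough shell equivalent (a sketch using bash's /dev/tcp redirection and coreutils `timeout`, not anything keepalived itself ships):

```shell
# Roughly what TCP_CHECK does: attempt a TCP connect with a timeout.
tcp_check() {  # $1=host $2=port $3=timeout-seconds
  timeout "${3:-3}" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
if tcp_check 192.168.80.103 80 3; then echo "103 up"; else echo "103 down"; fi
```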

Configuring dual-master LVS high availability with keepalived

In keepalived.conf each vrrp_instance block can carry its own VIP, and there can be several vrrp_instance blocks. So two keepalived nodes, each configured with two vrrp_instances, form two virtual routers in which each node is master for one and backup for the other, i.e. mutual master/backup!

Topology

Configuration steps

1. On top of the single-VIP HA setup above, add a second vrrp instance, as follows:

lvs1:

[root@lvs ~]# cat !$
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id lvs1
}

# The existing configuration for VIP 80.200 is unchanged
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.200/32 dev eth0 label eth0:1
#        192.168.80.200/32 dev lo label lo:1

    }
}

# Add a second vrrp_instance for the 80.210 VIP:
# the state is BACKUP with priority 80,
# the virtual_router_id is changed to an integer other than 51,
# and VIP 80.210 goes on eth0's second alias interface

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 59
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.210/32 dev eth0 label eth0:2


    }
}
virtual_server 192.168.80.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 50
    protocol TCP

    real_server 192.168.80.102 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
    real_server 192.168.80.103 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
}
# Configure the same two backend RS nodes for VIP 80.210.
# With machines limited, the same hosts are reused, running nginx on port 8080

virtual_server 192.168.80.210 8080 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 50
    protocol TCP

    real_server 192.168.80.102 8080 {
        weight 1
        TCP_CHECK {
            connect_port 8080
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
    real_server 192.168.80.103 8080 {
        weight 1
        TCP_CHECK {
            connect_port 8080
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
}

lvs2:

[root@host6 ~]# cat !$
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id lvs2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.200/32 dev eth0 label eth0:1
#        192.168.80.200/32 dev lo label lo:1

    }
}

# The second vrrp instance: state MASTER, priority 100,
# virtual_router_id 59,
# VIP 80.210 on eth0's second alias interface

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 59
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.210/32 dev eth0 label eth0:2


    }
}


virtual_server 192.168.80.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 50
    protocol TCP

    real_server 192.168.80.102 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
    real_server 192.168.80.103 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
}


virtual_server 192.168.80.210 8080 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 50
    protocol TCP

    real_server 192.168.80.102 8080 {
        weight 1
        TCP_CHECK {
            connect_port 8080
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
    real_server 192.168.80.103 8080 {
        weight 1
        TCP_CHECK {
            connect_port 8080
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
}

2. On the same two RS nodes, install nginx listening on port 8080 to serve as the second cluster

(omitted)

3. On the RSs, configure the second VIP on the second alias interface of lo

[root@rs1 yum.repos.d]# ip addr add 192.168.80.210 dev lo:2

[root@rs2 yum.repos.d]# ip addr add 192.168.80.210 dev lo:2

4. Dual-master failover test

# With both keepalived nodes healthy, per the configuration:
# host6 holds VIP 80.210
# lvs holds VIP 80.200

[root@host6 ~]# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.105  netmask 255.255.255.0  broadcast 192.168.80.255
        inet6 fe80::6247:1fa9:b7d7:84b9  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4ce0:11e4:5740:290c  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::eb02:a6b5:be84:952  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a5:67:42  txqueuelen 1000  (Ethernet)
        RX packets 33474  bytes 12524160 (11.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 36616  bytes 4650803 (4.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.210  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:a5:67:42  txqueuelen 1000  (Ethernet)


[root@lvs ~]# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.104  netmask 255.255.255.0  broadcast 192.168.80.255
        inet6 fe80::6247:1fa9:b7d7:84b9  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4ce0:11e4:5740:290c  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::eb02:a6b5:be84:952  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:55:b3:ea  txqueuelen 1000  (Ethernet)
        RX packets 33658  bytes 2646960 (2.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 55176  bytes 4310822 (4.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.200  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:55:b3:ea  txqueuelen 1000  (Ethernet)


# When either node's keepalived is stopped, its VIP floats to the other node,
# keeping both VIPs highly available.
# Stopping keepalived on lvs moves both VIPs onto host6, and vice versa
[root@host6 ~]# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.105  netmask 255.255.255.0  broadcast 192.168.80.255
        inet6 fe80::6247:1fa9:b7d7:84b9  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::4ce0:11e4:5740:290c  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::eb02:a6b5:be84:952  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a5:67:42  txqueuelen 1000  (Ethernet)
        RX packets 33791  bytes 12545382 (11.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 37124  bytes 4686905 (4.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.200  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:a5:67:42  txqueuelen 1000  (Ethernet)

eth0:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.210  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:a5:67:42  txqueuelen 1000  (Ethernet)
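When one node carries both VIPs it should also carry both virtual servers. A quick count over `ipvsadm -Ln` output (a sketch with the sample inlined; live usage would pipe `ipvsadm -Ln` in):

```shell
# Count virtual-server entries in `ipvsadm -Ln` output.
count_vs() { awk '$1 == "TCP" { n++ } END { print n + 0 }' ; }
sample='TCP  192.168.80.200:80 rr
  -> 192.168.80.102:80            Route   1      0          0
  -> 192.168.80.103:80            Route   1      0          0
TCP  192.168.80.210:8080 rr
  -> 192.168.80.102:8080          Route   1      0          0
  -> 192.168.80.103:8080          Route   1      0          0'
printf '%s\n' "$sample" | count_vs   # prints 2
```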

5. Access test

The two keepalived nodes provide high availability for both VIPs: as long as one node survives, it takes over the other node's VIP, and client access to the two VIPs is unaffected. In the same way, more VIPs can be added to serve more applications, and more keepalived nodes can be added to strengthen availability further.

[root@client ~]# while true; do curl 192.168.80.210:8080;sleep 1; done
rs2:nginx
rs1:nginx
rs2:nginx
rs1:nginx
rs2:nginx
^C
[root@client ~]# while true; do curl 192.168.80.200;sleep 1; done
rs1:192.168.80.102
rs2:192.168.80.103
rs1:192.168.80.102
rs2:192.168.80.103
^C

Problems encountered

I forgot to configure the second VIP on the two RSs! Without it they could not use the VIP as the source address of the return packets, so the client could never connect!!!

Sorry server and local RS

Sorry server

When every backend node is down, a sorry server is needed to return an error page to users.

1. Add the sorry_server directive to a VIP's cluster service

virtual_server 192.168.80.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
#    persistence_timeout 50
    protocol TCP

    sorry_server 127.0.0.1 80
    # may point at a dedicated server; usually it is the keepalived node itself

    real_server 192.168.80.102 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            delay_before_retry 3
            nb_get_retry 3
            connect_timeout 3
        }
    }
...
systemctl restart keepalived

2. Prepare the sorry server

In an HA setup, every keepalived node needs its own sorry server; the one on whichever node is currently master takes effect.

With a standalone sorry server, simply point every keepalived node at it.

yum install -y httpd
echo sorry-server > /var/www/html/index.html
systemctl start httpd

3. Access test

[root@client ~]# while true; do curl 192.168.80.200;sleep 1; done
rs1:192.168.80.102
rs2:192.168.80.103
rs1:192.168.80.102
rs2:192.168.80.103
rs1:192.168.80.102
rs2:192.168.80.103
curl: (7) Failed connect to 192.168.80.200:80; Connection refused
curl: (7) Failed connect to 192.168.80.200:80; Connection refused
curl: (7) Failed connect to 192.168.80.200:80; Connection refused
curl: (7) Failed connect to 192.168.80.200:80; Connection refused
sorry server
sorry server
sorry server
There is a noticeable delay before the sorry server kicks in...
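That delay is bounded by the health-check parameters. A rough upper-bound estimate (an assumption about how the check interval and retries stack, not a keepalived-documented formula):

```shell
# Rough worst-case seconds before a dead RS is removed, from the values used above.
delay_loop=6            # seconds between check rounds
connect_timeout=3       # per-attempt connect timeout
nb_get_retry=3          # retries before declaring the RS down
delay_before_retry=3    # pause between retries
echo $(( delay_loop + nb_get_retry * (connect_timeout + delay_before_retry) ))   # prints 24
```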

Local RS

When the LVS director itself is lightly loaded, it can also run a web server and act as one of the RSs: simply configure its own address as a real_server.

real_server 127.0.0.1 80 {
        weight 1
        TCP_CHECK {
                connect_port 80
                connect_timeout 1
                nb_get_retry 2
                delay_before_retry 1
        }
}
updated 2020-10-20