MySQL high-availability clusters

An overview of the common MySQL high-availability cluster solutions, plus the configuration of an MHA cluster and a Galera Cluster.

Common MySQL high-availability solutions

  • MMM: Multi-Master Replication Manager
  • MHA: Master High Availability
  • Galera Cluster
  • GR: Group Replication

MMM: Multi-Master Replication Manager

Multi-Master Replication Manager for MySQL is a set of scripts that monitors MySQL replication and performs failover, and it also manages the configuration of MySQL master-master replication; only one node is writable at any given time.

https://mysql-mmm.org/

MHA: Master High Availability

Master High Availability monitors the master node and performs failover, promoting a slave to master. It is built on top of master/slave replication, and the client side needs to cooperate with the switchover. An MHA cluster requires at least 3 nodes: 1 master and 2 slaves, one of which acts as the standby master.

Taobao's TMHA variant supports a one-master, one-slave topology.

Official site and detailed wiki documentation:

https://github.com/yoshinorim/mha4mysql-manager

https://github.com/yoshinorim/mha4mysql-manager/wiki

MHA cluster architecture

(Figure: MHA cluster architecture)

A single MHA manager node can manage several MySQL master/slave clusters at the same time; each cluster has one master and several slaves, generally at least two.

How MHA works

The MHA manager node holds the configuration of every node in the cluster, so it knows about each node and continuously monitors their state through scripts. When the master node fails, it:

  1. saves the master's binlog (this assumes the binlog is stored separately from the data files, so it is not destroyed along with them);
  2. finds the slave that holds the most recent relay log (each slave may receive events at a different pace);
  3. applies the relay log of the most up-to-date slave to the other slaves, so that all slaves end up with consistent relay logs;
  4. applies the portion of the master's binlog between the slaves' relay-log position and the moment of the crash;
  5. elects a new master from among the slaves;
  6. points the remaining slaves at the new master.

The process is illustrated below:

(Figure: MHA failover workflow)
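To see which slave is the most up-to-date (step 2 above), you can compare the replication coordinates each slave reports; a minimal sketch, assuming passwordless SSH to the slaves (the hostnames below are placeholders):

# print, for every slave, how far it has read from the master's binlog;
# the slave with the highest (Master_Log_File, Read_Master_Log_Pos) pair holds the newest relay log
for h in slave1 slave2; do
    echo "== $h =="
    ssh root@$h "mysql -e 'SHOW SLAVE STATUS\G'" | grep -E ' (Master_Log_File|Read_Master_Log_Pos):'
done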

MHA installation

Official installation documentation:

https://github.com/yoshinorim/mha4mysql-manager/wiki/Installation#downloading-mha-node-and-mha-manager

MHA software components

MHA consists of two packages: the manager node needs both the manager package and the node package; the managed nodes (i.e., the hosts running the databases, master and slaves) only need the node package. The manager node can also be co-located on a host that runs a database instance.

[root@host1 ~]# rpm -ql mha4mysql-manager 
/usr/bin/masterha_check_repl
# check replication health
/usr/bin/masterha_check_ssh
# check SSH connectivity
/usr/bin/masterha_check_status
# check cluster/monitoring status
/usr/bin/masterha_conf_host
# manage node entries in the configuration file
/usr/bin/masterha_manager
# main script that starts MHA
/usr/bin/masterha_master_monitor
# monitor the master node
/usr/bin/masterha_master_switch
# master failover / switchover
/usr/bin/masterha_secondary_check
/usr/bin/masterha_stop

[root@host1 ~]# rpm -ql mha4mysql-node 
/usr/bin/apply_diff_relay_logs
# apply relay-log differences so that all slaves' relay logs are first brought in line with each other
/usr/bin/filter_mysqlbinlog
/usr/bin/purge_relay_logs
/usr/bin/save_binary_logs
# save the master's binlog

# configuration files
# global: provides default settings for every application; one application is one MySQL cluster
# application: holds the settings of one specific MySQL cluster
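The layering works roughly as follows; /etc/masterha_default.cnf is the global file MHA looks for by default (the check scripts later warn when it is absent), while the per-cluster file is whatever is passed to --conf. A minimal sketch with placeholder values:

# /etc/masterha_default.cnf  -- global defaults shared by every application (optional)
[server default]
user=mhauser
password=wang
ssh_user=root

# /etc/mastermha/app1.conf   -- one application = one MySQL master/slave cluster; extends/overrides the defaults
[server default]
manager_workdir=/data/mha/app1/
[server1]
hostname=192.168.80.101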

Installing the node package

All hosts running a MySQL instance need the node package installed.

[root@host3 ~]# yum install perl-DBD-MySQL
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Package perl-DBD-MySQL-4.023-6.el7.x86_64 already installed and latest version
Nothing to do
[root@host3 ~]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm 
Preparing...                          ################################# [100%]
	package mha4mysql-node-0.56-0.el6.noarch is already installed

# install the perl-DBD-MySQL dependency, then install the node package

Installing the manager package

The manager node needs the dependency packages, the node package, and then the manager package.

# install the 4 dependency packages
yum install perl-DBD-MySQL
yum install perl-Config-Tiny
yum install perl-Log-Dispatch
yum install perl-Parallel-ForkManager
# install the node and manager packages
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm 

MHA cluster configuration

Lab environment:

  • 192.168.80.100: manager node
  • 192.168.80.101: MySQL master
  • 192.168.80.102: MySQL slave
  • 192.168.80.103: MySQL slave
  1. On the MHA manager node, install both MHA packages: the manager package and the node package.

  2. On the managed nodes, i.e., the hosts running the MySQL instances, install the node package.

  3. On the manager node, create the cluster configuration file describing the MySQL master/slave cluster.

  4. Configure one master node.

  5. Configure two slave nodes.

  6. Set up SSH key authentication among all nodes.

  7. From the MHA manager node, test the cluster state with the check scripts.

  8. Bring the master down and verify that the master role migrates successfully.

  The same steps are now carried out in detail:

  9. On the MHA manager node, install both MHA packages: the manager package and the node package.

    # on 80.100, install the manager and node packages along with their dependencies

    # install the 4 dependency packages
    yum install perl-DBD-MySQL
    yum install perl-Config-Tiny
    yum install perl-Log-Dispatch
    yum install perl-Parallel-ForkManager
    # install the node and manager packages
    rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
    rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm 
    
  10. On the managed nodes, i.e., the hosts running the MySQL instances, install the node package.

    # install the node package on 101, 102 and 103
       
    [root@host3 ~]# yum install perl-DBD-MySQL
    [root@host3 ~]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm 
    
  11. On the manager node, create the cluster configuration file describing the MySQL master/slave cluster.

    # on 100, create the cluster information configuration file
    [root@host1 mastermha]# cat !$
    cat /etc/mastermha/app1.conf
    [server default]
    user=mhauser
    password=wang
    # this can be MySQL's root account; for safety, create a dedicated user granted all privileges instead

    manager_workdir=/data/mha/app1/
    manager_log=/data/mha/app1/manager.log
    remote_workdir=/data/mha/app1/
    # working directories and log file on the manager node and on the MySQL nodes

    ssh_user=root
    # SSH user used to log in to the MySQL nodes and run the node-package scripts remotely; usually root (the Linux root user)

    repl_user=repl
    repl_password=wang
    # replication user
    ping_interval=1

    # the three MySQL instance nodes; an IP address or a resolvable hostname is enough, as MHA detects the master/slave roles automatically
    [server1]
    hostname=192.168.80.101
    candidate_master=1 # optional; when set, this slave gets higher priority when a new master is elected
    [server2]
    hostname=192.168.80.102
    candidate_master=2
    [server3]
    hostname=192.168.80.103
    
  12. Configure the master node.

    [root@host2 ~]# vim /etc/my.cnf
       
    [mysqld]
    server_id=2
    log-bin
    skip-name-resolve=on
    # edit the configuration file
       
    MariaDB [(none)]> show master status;
    +--------------------+----------+--------------+------------------+
    | File               | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +--------------------+----------+--------------+------------------+
    | mariadb-bin.000005 |      245 |              |                  |
    +--------------------+----------+--------------+------------------+
    1 row in set (0.00 sec)
       
    MariaDB [(none)]> grant replication slave on *.* to repl@'192.168.80.%' identified by 'wang';
    Query OK, 0 rows affected (0.01 sec)
       
    MariaDB [(none)]> grant all on *.* to mhauser@'192.168.80.%' identified by 'wang';
    Query OK, 0 rows affected (0.00 sec)
    # create the replication user and the mhauser account with nearly all privileges (both are referenced in app1.conf)
    # a clean, empty node is used as the master here; if the master already holds data, take a snapshot first, restore it on every slave, and start replication from the binlog position recorded by the snapshot (a sketch of this follows)
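    A minimal sketch of that snapshot-based approach, assuming mysqldump is used; the /backup path and file name are placeholders:

    # on the existing master: dump everything and record the binlog coordinates inside the dump
    mysqldump --all-databases --single-transaction --master-data=2 > /backup/full.sql
    # copy the dump to each slave and restore it there
    mysql < /backup/full.sql
    # the coordinates for CHANGE MASTER TO are written near the top of the dump
    head -n 30 /backup/full.sql | grep 'CHANGE MASTER'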
    
  13. Configure the two slave nodes.

    [root@host3 ~]# vim /etc/my.cnf
    [mysqld]
    server_id=3
    log-bin
    skip-name-resolve=on
    read_only=on
    relay-log=on
       
    [root@host4 ~]# cat /etc/my.cnf
    [mysqld]
    server_id=4
    log-bin
    read-only
    skip_name_resolve=on
    relay-log=on
       
    # edit the configuration files to set the nodes up as slaves
    # (note: relay-log expects a file basename, so the value "on" used above is why the relay log files are named on.00000x in the output below)

    # run CHANGE MASTER TO on each slave, pointing it at the master and at the binlog position to start replicating from
    MariaDB [(none)]> CHANGE MASTER TO
        ->   MASTER_HOST='192.168.80.101',
        ->   MASTER_USER='repl',
        ->   MASTER_PASSWORD='wang',
        ->   MASTER_PORT=3306,
        ->   MASTER_LOG_FILE='mariadb-bin.000005',
        ->   MASTER_LOG_POS=245,
        ->   MASTER_CONNECT_RETRY=10;
    # start the slave's two replication threads and check their status; both should show Yes
    MariaDB [(none)]> start slave;
    MariaDB [(none)]> show slave status\G;
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 192.168.80.101
                      Master_User: repl
                      Master_Port: 3306
                    Connect_Retry: 10
                  Master_Log_File: mariadb-bin.000005
              Read_Master_Log_Pos: 532
                   Relay_Log_File: on.000002
                    Relay_Log_Pos: 818
            Relay_Master_Log_File: mariadb-bin.000005
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
    # make a data change on the master and check that the slaves replicate it
    MariaDB [(none)]> create database db2;
    Query OK, 1 row affected (0.00 sec)
    # two databases (db1 and db2) have been created on the master
    MariaDB [(none)]> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | db1                |
    | db2                |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    # both slaves can see db1 and db2, created there through replication
    MariaDB [(none)]> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | db1                |
    | db2                |
    | mysql              |
    | performance_schema |
    | test               |
    # the master/slave setup is complete
    
  14. Set up SSH key authentication among all nodes.

    [root@host1 mastermha]# ssh-keygen -t rsa
    [root@host1 mastermha]# ssh-copy-id 192.168.80.100
    # generate a key pair and authorize it for the local host itself
       
    # copy everything under .ssh to the other three MySQL nodes; sharing one key pair gives all nodes passwordless SSH to each other (a quick verification loop is sketched after this step)
    # this could also be done node by node, but that is more tedious
    [root@host1 mastermha]# scp -r /root/.ssh/* 192.168.80.101:/root/.ssh/
       
    [root@host1 mastermha]# scp -r /root/.ssh/* 192.168.80.102:/root/.ssh/
       
    [root@host1 mastermha]# scp -r /root/.ssh/* 192.168.80.103:/root/.ssh/
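    A quick way to verify the passwordless SSH mesh; the loop simply runs hostname on every node (adjust the IP list if your lab differs):

    # from the manager node: each command should print the remote hostname without prompting for a password
    for ip in 192.168.80.100 192.168.80.101 192.168.80.102 192.168.80.103; do
        ssh root@$ip hostname
    done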
    
  15. From the MHA manager node, test the cluster state with the check scripts.

    [root@host1 mastermha]# masterha_check_ssh --conf=/etc/mastermha/app1.conf
    Sun Oct  4 16:19:30 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Sun Oct  4 16:19:30 2020 - [info] Reading application default configuration from /etc/mastermha/app1.conf..
    Sun Oct  4 16:19:30 2020 - [info] Reading server configuration from /etc/mastermha/app1.conf..
    Sun Oct  4 16:19:30 2020 - [info] Starting SSH connection tests..
    Sun Oct  4 16:19:31 2020 - [debug] 
    Sun Oct  4 16:19:30 2020 - [debug]  Connecting via SSH from root@192.168.80.101(192.168.80.101:22) to 
    Sun Oct  4 16:19:33 2020 - [info] All SSH connection tests passed successfully.
       
    # note: the section header is [server default], with a space rather than an underscore
    [server default]
       
    # check whether replication in the master/slave cluster is healthy
    [root@host1 mastermha]# masterha_check_repl --conf=/etc/mastermha/app1.conf
    Sun Oct  4 16:21:40 2020 - [info] Slaves settings check done.
    Sun Oct  4 16:21:40 2020 - [info] 
    192.168.80.101(192.168.80.101:3306) (current master)
     +--192.168.80.102(192.168.80.102:3306)
     +--192.168.80.103(192.168.80.103:3306)
       
    Sun Oct  4 16:21:40 2020 - [info] Checking replication health on 192.168.80.102..
    Sun Oct  4 16:21:40 2020 - [info]  ok.
    Sun Oct  4 16:21:40 2020 - [info] Checking replication health on 192.168.80.103..
    Sun Oct  4 16:21:40 2020 - [info]  ok.
    Sun Oct  4 16:21:40 2020 - [warning] master_ip_failover_script is not defined.
    Sun Oct  4 16:21:40 2020 - [warning] shutdown_script is not defined.
    Sun Oct  4 16:21:40 2020 - [info] Got exit code 0 (Not master dead).
       
    MySQL Replication Health is OK
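    Besides the SSH and replication checks, the masterha_check_status script listed earlier reports whether the manager is currently monitoring this application; it only returns a useful status once masterha_manager is actually running:

    [root@host1 mastermha]# masterha_check_status --conf=/etc/mastermha/app1.conf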
    
  16. Bring the master down and verify that the master role migrates successfully.

# start the manager script; by default it runs in the foreground, performs a single failover, and then exits
# with suitable options it can keep monitoring in the background (a sketch follows the failover report below)
# stop MariaDB on host2 (the current master); after the script completes the failover, it exits
[root@host2 ~]# systemctl stop mariadb

[root@host1 mastermha]# masterha_manager --conf=/etc/mastermha/app1.conf
Sun Oct  4 16:23:33 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Oct  4 16:23:33 2020 - [info] Reading application default configuration from /etc/mastermha/app1.conf..
Sun Oct  4 16:23:33 2020 - [info] Reading server configuration from /etc/mastermha/app1.conf..

  Creating /data/mha/app1 if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /var/lib/mysql, up to mariadb-bin.000005
Sun Oct  4 16:24:19 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Oct  4 16:24:19 2020 - [info] Reading application default configuration from /etc/mastermha/app1.conf..
Sun Oct  4 16:24:19 2020 - [info] Reading server configuration from /etc/mastermha/app1.conf..

# host3, previously a slave, has been promoted to master; it no longer has slave replication threads, and databases created on it are now replicated to host4
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| db2                |
| db3                |
| mysql              |
| performance_schema |
| test               |
+--------------------+
7 rows in set (0.00 sec)

MariaDB [(none)]> show slave status\G;
Empty set (0.00 sec)

MariaDB [(none)]> create database db4;
Query OK, 1 row affected (0.00 sec)

# host4 now points at the new master, host3
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.80.102
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: mariadb-bin.000004
          Read_Master_Log_Pos: 245
               Relay_Log_File: on.000002
                Relay_Log_Pos: 531
        Relay_Master_Log_File: mariadb-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| db2                |
| db3                |
| db4                |
| mysql              |
| performance_schema |
| test               |
+--------------------+

Reviewing the logs from the failover

# files generated in the working directory: the manager log and the binlog saved from the failed master
[root@host1 mastermha]# ll /data/mha/app1/
total 28
-rw-r--r-- 1 root root     0 Oct  4 16:24 app1.failover.complete
-rw-r--r-- 1 root root 23992 Oct  4 16:24 manager.log
-rw-r--r-- 1 root root   264 Oct  4 16:24 saved_master_binlog_from_192.168.80.101_3306_20201004162419.binlog

# the log records the whole master failover process
[root@host1 mastermha]# cat /data/mha/app1/manager.log 
----- Failover Report -----

app1: MySQL Master failover 192.168.80.101(192.168.80.101:3306) to 192.168.80.102(192.168.80.102:3306) succeeded

Master 192.168.80.101(192.168.80.101:3306) is down!

Check MHA Manager logs at host1.b.com:/data/mha/app1/manager.log for details.

Started automated(non-interactive) failover.
The latest slave 192.168.80.102(192.168.80.102:3306) has all relay logs for recovery.
Selected 192.168.80.102(192.168.80.102:3306) as a new master.
192.168.80.102(192.168.80.102:3306): OK: Applying all logs succeeded.
192.168.80.103(192.168.80.103:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.80.103(192.168.80.103:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.80.102(192.168.80.102:3306)
192.168.80.102(192.168.80.102:3306): Resetting slave info succeeded.
Master failover to 192.168.80.102(192.168.80.102:3306) completed successfully.
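For continuous monitoring instead of a one-shot failover, masterha_manager is normally started in the background. A minimal sketch; the two options shown are standard MHA manager options, but check the wiki for their exact behaviour in your version:

# run the manager detached from the terminal;
# --remove_dead_master_conf removes the failed master's entry from app1.conf after a failover,
# --ignore_last_failover allows a restart even though app1.failover.complete exists from a recent failover
nohup masterha_manager --conf=/etc/mastermha/app1.conf \
    --remove_dead_master_conf --ignore_last_failover \
    < /dev/null > /data/mha/app1/manager_start.log 2>&1 &
# afterwards, masterha_check_status --conf=/etc/mastermha/app1.conf reports the monitoring state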

Galera Cluster

Galera Cluster: MySQL extended with write-set replication. Replication is done cluster-wide through the wsrep protocol; every node is readable and writable, i.e., multi-master read/write.

A Galera Cluster is a MySQL cluster with the Galera plugin integrated and is one of the high-availability solutions for MySQL. It comes in two main flavours, Percona XtraDB Cluster and MariaDB Galera Cluster. A Galera cluster is multi-master and does well in terms of robustness, data consistency, data integrity and performance.

Official documentation:

https://galeracluster.com/products/

https://mariadb.com/kb/en/what-is-mariadb-galera-cluster/

https://www.percona.com/doc/percona-xtradb-cluster/LATEST/intro.html

Galera Cluster architecture

As the figure below shows: a MySQL cluster of three nodes, all of them masters, all readable and writable and fully equal peers. Data written to any node is automatically synchronized to the other nodes; it is a shared-nothing, highly redundant architecture.

(Figure: Galera Cluster architecture)

Galera Cluster features

  • Multi-master: reads and writes can go to any node
  • Synchronous replication: data is synchronized between nodes with no replication lag
  • Parallel apply: nodes apply write sets in parallel, for better performance
  • Failover: if one database node fails, the other nodes keep serving
  • Hot standby: when a node dies, a fast-reacting monitor can switch over quickly, so the unavailable window is very small
  • Automatic node provisioning: when a node is added, or taken offline for maintenance and brought back, no manual data copying is needed; Galera Cluster pulls the data from an online node until the whole cluster is consistent again
  • Transparent to applications: no application changes are required

How Galera Cluster works

(Figure: Galera Cluster write-set replication workflow)

When data is written to a node, that node processes the transaction locally. At commit time, the transaction ID and its write set are synchronized to every node in the cluster; the transaction can only commit successfully if every node certifies (accepts) it, and the write set is applied on every node, which also guarantees data consistency.
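The state of this certification-based replication can be checked on any node through the wsrep status variables that Galera exposes; a quick check:

# on any node: cluster size, whether this node is part of the primary component,
# whether it is synced, and whether it is ready to accept queries
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';
          SHOW STATUS LIKE 'wsrep_cluster_status';
          SHOW STATUS LIKE 'wsrep_local_state_comment';
          SHOW STATUS LIKE 'wsrep_ready';"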

Galera Cluster software components

Galera Cluster consists of two components:

  • Galera replication library (Galera 3)
  • wsrep: MySQL extended with write-set replication (the write-set replication API/patch for MySQL)

Software implementations of the wsrep protocol:

  • PXC: Percona XtraDB Cluster
  • MariaDB Galera Cluster

Galera Cluster installation and configuration

The lab uses the MariaDB Galera Cluster implementation; the official description is as follows:

MariaDB Galera Cluster is powered by:

MariaDB Server.

The MySQL-wsrep patch for MySQL Server and MariaDB Server developed by Codership. The patch currently supports only Unix-like operating systems.

The Galera wsrep provider library.
# in short: MariaDB Server, the MySQL-wsrep patch, and the Galera wsrep provider library; since MariaDB 10.1 the wsrep patch is merged into the mariadb-server package

In MariaDB 10.1 and later, the MySQL-wsrep patch has been merged into MariaDB Server. This means that the functionality of MariaDB Galera Cluster can be obtained by installing the standard MariaDB Server packages and the Galera wsrep provider library package. The following Galera version corresponds to each MariaDB Server version:

In MariaDB 10.4 and later, MariaDB Galera Cluster uses Galera 4. This means that the MySQL-wsrep patch is version 26 and the Galera wsrep provider library is version 4.
In MariaDB 10.3 and before, MariaDB Galera Cluster uses Galera 3. This means that the MySQL-wsrep patch is version 25 and the Galera wsrep provider library is version 3.

1. Configure the yum repository

# configure the Tsinghua University mirror on all three nodes; MariaDB version 10.3.23 is used
cat > /etc/yum.repos.d/mariadb.repo <<'EOF'
[mariadb]  
name = MariaDB  
baseurl =https://mirrors.tuna.tsinghua.edu.cn/mariadb/mariadb-10.3.23/yum/centos7-amd64/
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB  
gpgcheck=1  
EOF

2. Download and install the packages

Reference: https://mariadb.com/kb/en/yum/#installing-mariadb-galera-cluster-with-yum — the packages that need to be installed differ between versions.

# install the same packages on all three nodes
[root@host2 yum.repos.d]# yum install mariadb-server galera mariadb-client mariadb-shared mariadb-backup mariadb-common

Installed:
  MariaDB-compat.x86_64 0:10.3.23-1.el7.centos                                                 MariaDB-server.x86_64 0:10.3.23-1.el7.centos                                                

Dependency Installed:
  MariaDB-client.x86_64 0:10.3.23-1.el7.centos    MariaDB-common.x86_64 0:10.3.23-1.el7.centos    libaio.x86_64 0:0.3.109-13.el7           perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7   
  perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7     perl-DBI.x86_64 0:1.627-4.el7                   perl-Data-Dumper.x86_64 0:2.145-3.el7    perl-IO-Compress.noarch 0:2.061-2.el7          
  perl-Net-Daemon.noarch 0:0.48-5.el7             perl-PlRPC.noarch 0:0.2020-14.el7               rsync.x86_64 0:3.1.2-10.el7             

Replaced:
  mariadb-libs.x86_64 1:5.5.56-2.el7                                                                                                                                        
Complete!

3. Configure the cluster

[root@host2 yum.repos.d]# vim /etc/my.cnf.d/server.cnf
# edit this configuration file; the mandatory settings are shown below. The path of the libgalera_smm.so provider library and the gcomm:// address list must be set, everything else can stay at the defaults
# reference: https://mariadb.com/kb/en/configuring-mariadb-galera-cluster/

[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.80.101,192.168.80.102,192.168.80.103"
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
bind-address=0.0.0.0

# then copy the file to the other two nodes
[root@host2 yum.repos.d]# scp /etc/my.cnf.d/server.cnf 192.168.80.102:/etc/my.cnf.d/
[root@host2 yum.repos.d]# scp /etc/my.cnf.d/server.cnf 192.168.80.103:/etc/my.cnf.d/
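Since the same server.cnf is copied to every host, it can be helpful to add a few per-node identifiers on each one; the wsrep options below exist in Galera, and the values are placeholders matching this lab:

# appended to the [galera] section of /etc/my.cnf.d/server.cnf on 192.168.80.101 (adjust on each node)
wsrep_cluster_name="my_galera_cluster"
wsrep_node_name="host2"
wsrep_node_address="192.168.80.101"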

4. Bootstrap the first node, then start the other nodes normally

# bootstrapping the first node with the following command fails with an error
[root@host2 yum.repos.d]# /etc/init.d/mysql start --wsrep-new-cluster

# troubleshoot by reading the error log and searching Google for the messages
[root@host2 yum.repos.d]# cat /var/lib/mysql/host2.b.com.err
2020-10-04 19:48:35 0 [Warning] WSREP: last inactive check more than PT1.5S ago (PT3.51172S), skipping check
2020-10-04 19:49:05 0 [Note] WSREP: view((empty))
2020-10-04 19:49:05 0 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
	 at gcomm/src/pc.cpp:connect():160
2020-10-04 19:49:05 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():209: Failed to open backend connection: -110 (Connection timed out)
2020-10-04 19:49:05 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1458: Failed to open channel 'my_wsrep_cluster' at 'gcomm://192.168.80.101,192.168.80.102,192.168.80.103': -110 (Connection timed out)
2020-10-04 19:49:05 0 [ERROR] WSREP: gcs connect failed: Connection timed out
2020-10-04 19:49:05 0 [ERROR] WSREP: wsrep::connect(gcomm://192.168.80.101,192.168.80.102,192.168.80.103) failed: 7
2020-10-04 19:49:05 0 [ERROR] Aborting

# reference link
https://stackoverflow.com/questions/37212127/mariadb-gcomm-backend-connection-failed-110
# the answer reads:
Since MariaDB 10.1.8, systemd is the new init and it affects the way Galera is bootstrapped on RPM and Debian based Linux distributions (in my case Ubuntu 16.04). On previous versions, you would use something like service mysql start --wsrep-new-cluster or service mysqld bootstrap but that doesn't work any more as it fails with:

[ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
To fix this issue run:
# cause: newer versions must use the new bootstrap command
galera_new_cluster

# with the new bootstrap command it starts normally
[root@host2 yum.repos.d]# galera_new_cluster 
[root@host2 yum.repos.d]# ss -nlt
State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN     0      80                 *:3306               *:*

# connect to the first (bootstrapped) node and check: the Galera cluster size is 1; after starting the other two nodes with systemctl start mariadb,
# the cluster size grows step by step to 3, so the cluster is configured correctly
# a write on any node is replicated to the others almost immediately; that test also passed (screenshots omitted; a sketch follows the output below)

[root@host2 yum.repos.d]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.3.23-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.001 sec)

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size'; 
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
1 row in set (0.001 sec)

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size'; 
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+
1 row in set (0.001 sec)

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size'; 
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
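A minimal version of the write test mentioned above, run from the shell on two different nodes (the database name db_test is just a placeholder):

# on node 1 (e.g. 192.168.80.101): create a test database
mysql -e "CREATE DATABASE db_test;"
# on node 2 (e.g. 192.168.80.102): it should show up almost immediately
mysql -e "SHOW DATABASES LIKE 'db_test';"
# clean up from any node; the drop is replicated as well
mysql -e "DROP DATABASE db_test;"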

GR: Group Replication

Group Replication: the group replication technology provided officially by MySQL, introduced in MySQL 5.7.17 and based on the Paxos algorithm.

https://dev.mysql.com/doc/refman/5.7/en/group-replication.html
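Group Replication is configured through the group_replication plugin and a set of group_replication_* system variables; a rough, untested sketch of one member's settings, with placeholder addresses and a made-up group UUID (the full list of required replication settings is in the manual linked above):

# /etc/my.cnf fragment on one member (MySQL 5.7.17+); further settings such as
# binlog_format=ROW and log_slave_updates=ON are also required, see the manual
[mysqld]
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
plugin_load_add='group_replication.so'
group_replication_group_name="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
group_replication_start_on_boot=OFF
group_replication_local_address="192.168.80.101:33061"
group_replication_group_seeds="192.168.80.101:33061,192.168.80.102:33061,192.168.80.103:33061"

The first member is then bootstrapped with SET GLOBAL group_replication_bootstrap_group=ON; followed by START GROUP_REPLICATION; and the remaining members simply run START GROUP_REPLICATION.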

A brief look at distributed databases

TiDB

TiDB is an open-source distributed relational database designed and developed by PingCAP. It is a converged database that supports both online transaction processing and online analytical processing (Hybrid Transactional and Analytical Processing, HTAP), and offers horizontal scale-out and scale-in, financial-grade high availability, real-time HTAP, a cloud-native distributed architecture, and compatibility with the MySQL 5.7 protocol and the MySQL ecosystem. Its goal is to provide users with a one-stop solution covering OLTP (Online Transactional Processing), OLAP (Online Analytical Processing) and HTAP. TiDB suits scenarios that require high availability, strong consistency and large data volumes.

A comparable system is Google Spanner; the underlying protocol both rely on is the Paxos consensus algorithm.
