Mycat Read-Write Splitting and Sharding, Comprehensive Experiment 1: MySQL Replication and Galera Cluster Environment Preparation
Data planning:

HAProxy cluster
haproxy01  node127  192.168.31.127
haproxy02  node128  192.168.31.128

Mycat cluster
mycat01  node119  192.168.31.119
mycat02  node118  192.168.31.118

MySQL replication cluster
mysqlm1  node115  192.168.31.115
mysqlm2  node116  192.168.31.116
mysqls1  node117  192.168.31.117

MySQL Galera Cluster
pxc1  node123  192.168.31.123
pxc2  node125  192.168.31.125
pxc3  node126  192.168.31.126

Deploying MySQL master-master and multi-source replication
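The nodes are referred to by hostname throughout. If DNS does not already resolve these names, a minimal /etc/hosts sketch for every machine (a convenience assumption, not part of the original setup):

192.168.31.115 node115
192.168.31.116 node116
192.168.31.117 node117
192.168.31.118 node118
192.168.31.119 node119
192.168.31.123 node123
192.168.31.125 node125
192.168.31.126 node126
192.168.31.127 node127
192.168.31.128 node128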
Edit the node115 configuration file
node115# cat /etc/my.cnf
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-bin=/tmp/node115
log-bin-index=/tmp/node115
server-id=115
innodb_file_per_table=1

Edit the node116 configuration file
node116# cat /etc/my.cnf
log-bin=/tmp/node116
log-bin-index=/tmp/node116
server-id=116
innodb_file_per_table=1
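Neither sample config sets them, but in a master-master pair where both nodes may take writes it is common to stagger auto-increment generation so the two servers never hand out the same primary key. A sketch of the extra lines (an assumption, not from the original configs):

# node115 /etc/my.cnf
auto_increment_increment=2
auto_increment_offset=1

# node116 /etc/my.cnf
auto_increment_increment=2
auto_increment_offset=2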
Edit the node117 configuration file:
master_info_repository=TABLE
relay_log_info_repository=TABLE
server-id=117
slave_skip_errors=1062
master_info_repository: once MTS (multi-threaded replication) is enabled, be sure to set master_info_repository to TABLE; this can improve performance by 50%-80%. With parallel replication running, updates to the master.info file become far more frequent and contention for it grows. Earlier InnoSQL releases added a parameter to control how often master.info is flushed, or even to skip flushing it entirely, because the flush is unnecessary: recovering from master.info alone is unreliable anyway. In MySQL 5.7 the recommendation is to set master_info_repository to TABLE to avoid this overhead.
The same reasoning applies to relay_log_info_repository.
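To confirm the repository settings on node117, and to see the variables that govern MTS itself, the standard MySQL 5.7 variables can be inspected (a quick sketch):

mysql> show variables like '%_info_repository';
mysql> show variables like 'slave_parallel%';    # slave_parallel_type / slave_parallel_workers control MTS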
Run the replication grant on node115 and node116:
mysql> grant replication slave on *.* to 'rep'@'192.168.31.%' identified by 'Mirror-12345';
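Optionally double-check the account before moving on (plain SHOW GRANTS, not shown in the original):

mysql> show grants for 'rep'@'192.168.31.%';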
Before configuring replication, obtain the master's binlog file and position; substitute the values from your own environment.
mysql> show master status\G
*************************** 1. row ***************************
             File: node115.000002
         Position: 35291277

Configuration on node116:
mysql> change master to
    -> master_host='192.168.31.115',
    -> master_user='rep',
    -> master_password='Mirror-12345',
    -> master_port=3306,
    -> master_log_file='node115.000002',
    -> master_log_pos=154;
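The walkthrough only shows show master status on node115; the coordinates used in the next step come from running the same command on node116 (node116.000002 / 154 in this setup):

mysql> show master status\G    # on node116; supplies master_log_file / master_log_pos for node115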
Configuration on node115:
mysql> change master to
    -> master_host='192.168.31.116',
    -> master_user='rep',
    -> master_password='Mirror-12345',
    -> master_port=3306,
    -> master_log_file='node116.000002',
    -> master_log_pos=154;

Configuration on node117:
mysql> set global read_only=1;    # read-only mode
change master to
  master_host='192.168.31.115',
  master_user='rep',
  master_password='Mirror-12345',
  master_port=3306,
  master_log_file='node115.000002',
  master_log_pos=154
  for channel 'node115';    # channels distinguish the sources
change master to
  master_host='192.168.31.116',
  master_user='rep',
  master_password='Mirror-12345',
  master_port=3306,
  master_log_file='node116.000001',
  master_log_pos=447
  for channel 'node116';
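The original omits it, but replication has to be started after CHANGE MASTER before the checks below will show running threads; a sketch:

mysql> start slave;                          # on node115 and node116
mysql> start slave for channel 'node115';    # on node117, one per channel
mysql> start slave for channel 'node116';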
Confirm that replication is running properly:
mysql> show slave status\G    # on node117: show slave status for channel 'channel_name'\G
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
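On node117 it is convenient to see all channels at once; MySQL 5.7 exposes per-channel state in performance_schema (a sketch):

mysql> select channel_name, service_state from performance_schema.replication_connection_status;
mysql> select channel_name, service_state from performance_schema.replication_applier_status;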
Use a stored procedure on node115 or node116 to verify that changes replicate.
# Create the table
CREATE TABLE `t3` (
  `id` INT(11) NOT NULL AUTO_INCREMENT,
  `user_id` VARCHAR(20) NOT NULL,
  `vote_id` INT(11) NOT NULL,
  `group_id` INT(11) NOT NULL,
  `create_time` datetime NOT NULL,
  PRIMARY KEY (`id`),
  KEY `index_user_id` (`user_id`) USING HASH
) ENGINE = INNODB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;
# Create the stored procedure
DELIMITER //
CREATE PROCEDURE `t5`(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 1;
  WHILE (i <= n) DO
    INSERT INTO t3 (user_id, vote_id, group_id, create_time)
    VALUES (FLOOR(RAND() * 1000), FLOOR(RAND() * 1000), FLOOR(RAND() * 100), now());
    SET i = i + 1;
  END WHILE;
END; //
DELIMITER ;
# Call the stored procedure
call t5(10000);
# Confirm on node117 that the data has replicated
mysql> select count(*) from t3;
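Row counts are a coarse check; comparing CHECKSUM TABLE output (standard MySQL) between the masters and node117 is a slightly stronger sketch of a consistency check:

mysql> checksum table t3;    # run on node115/node116 and on node117, then compare the values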
Deploying Galera Cluster
[root@node123 ~]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
# yum install Percona-XtraDB-Cluster-57
# systemctl start mysql
# mysql -uroot -p
mysql> grant all on *.* to sstuser@'192.168.31.%' identified by 'Mirror-12345';    # account used for SST
# systemctl stop mysql

Modify the wsrep configuration file
[root@node123 ~]# cat /etc/percona-xtradb-cluster.conf.d/wsrep.cnf | grep -v '#'
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.31.123,192.168.31.125,192.168.31.126
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads=8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.31.123
wsrep_cluster_name=pxc-cluster
wsrep_node_name=node123
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:Mirror-12345"

Modify the configuration files on the other two nodes; they are identical to the first except for the following two parameters:
wsrep_node_name=node125
wsrep_node_address=192.168.31.125

wsrep_node_name=node126
wsrep_node_address=192.168.31.126
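One way to apply those two changes (a hypothetical sed sketch, assuming the file was first copied over from node123; adjust the name and address for node126):

# on node125
sed -i -e 's/^wsrep_node_name=.*/wsrep_node_name=node125/' \
       -e 's/^wsrep_node_address=.*/wsrep_node_address=192.168.31.125/' \
       /etc/percona-xtradb-cluster.conf.d/wsrep.cnf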
Start the first node (bootstrap):
[root@node123 ~]# systemctl start mysql@bootstrap.service
mysql> show status like 'wsrep%';    # note the following status variable
| wsrep_cluster_size | 1 |

Start the second node:
[root@node125 ~]# systemctl start mysql    # only the first node is bootstrapped; joining nodes use the regular unit
mysql> show status like 'wsrep%';    # note the following status variable
| wsrep_cluster_size | 2 |

Start the third node:
[root@node126 ~]# systemctl start mysql
mysql> show status like 'wsrep%';    # note the following status variable
| wsrep_cluster_size | 3 |
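Besides wsrep_cluster_size, a few other standard Galera status variables are worth a glance on every node (a sketch):

mysql> show status like 'wsrep_cluster_status';       # expect Primary
mysql> show status like 'wsrep_local_state_comment';  # expect Synced
mysql> show status like 'wsrep_ready';                # expect ON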
Use the stored procedure provided earlier to verify that replication works across the cluster.
Note that this is a test environment, covering both the earlier MySQL replication setup and the current Galera Cluster, so the full-backup step has been omitted; in a production environment it must not be skipped.