
How to Recover Data Mistakenly Written to a Slave Back to the Master

Date: 2024-02-26

Background

In a GreatSQL master-slave replication environment, a mis-operation can sometimes write data that should have gone to the master into the slave instead, leaving master and slave inconsistent and affecting synchronization. Can the data written to the slave be synchronized back into the master?
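As a precaution (this is not part of the recovery procedure described below, just a common safeguard), the replica can be made read-only so that such accidental writes fail outright:

```sql
-- On the replica: reject writes from ordinary client connections
SET GLOBAL read_only = ON;
-- Stricter: also reject writes from users with administrative privileges
-- (the replication applier threads are unaffected)
SET GLOBAL super_read_only = ON;
```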

Test Environment

| Role   | IP address      | Database port | Version         |
|--------|-----------------|---------------|-----------------|
| Master | 192.168.137.179 | 3308          | GreatSQL 8.0.32 |
| Slave  | 192.168.137.180 | 3308          | GreatSQL 8.0.32 |

Replication link:

```
greatsql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.137.179
                  Master_User: root
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File: binlog.000001
          Read_Master_Log_Pos: 157
               Relay_Log_File: oracle_dts-relay-bin.000002
                Relay_Log_Pos: 367
        Relay_Master_Log_File: binlog.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
```

Table Data

Master

```
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
+--------+------------+----------+
5 rows in set (0.00 sec)

greatsql> insert into dept select 70,'IT','CTU';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

greatsql> commit;
Query OK, 0 rows affected (0.00 sec)
```

Slave

```
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
+--------+------------+----------+
6 rows in set (0.00 sec)
```

Data written on the master replicates to the slave normally.

Writing Data on the Slave

```
greatsql> insert into dept select 80,'IT','SZ';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

greatsql> insert into dept select 90,'SALES','SZ';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0
```

Slave data:

```
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
|     80 | IT         | SZ       |
|     90 | SALES      | SZ       |
+--------+------------+----------+
8 rows in set (0.00 sec)
```

Master data:

```
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
+--------+------------+----------+
6 rows in set (0.01 sec)
```

At this point, the rows written on the slave (DEPTNO 80 and 90) do not appear on the master.
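Before deciding how to repair, it can help to confirm the drift quickly. A minimal check (not part of the original procedure; `scott.dept` is the table used in this test environment) is to compare checksums on both sides:

```sql
-- Run on both the master and the slave; differing results confirm the drift
CHECKSUM TABLE scott.dept;
```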

Parsing the Slave's Binary Log

```
$ mysqlbinlog -vv --base64-output=decode-rows binlog.000002 > b002.sql

BEGIN
/*!*/;
# at 354
#240221 16:10:25 server id 18001  end_log_pos 416 CRC32 0xcc81584b  Table_map: `scott`.`dept` mapped to number 101
# has_generated_invisible_primary_key=0
# at 416
#240221 16:10:25 server id 18001  end_log_pos 462 CRC32 0x5149e38a  Write_rows: table id 101 flags: STMT_END_F
### INSERT INTO `scott`.`dept`
### SET
###   @1=80 /* INT meta=0 nullable=0 is_null=0 */
###   @2='IT' /* VARSTRING(56) meta=56 nullable=1 is_null=0 */
###   @3='SZ' /* VARSTRING(52) meta=52 nullable=1 is_null=0 */
# at 462
#240221 16:10:25 server id 18001  end_log_pos 493 CRC32 0xab795e4a  Xid = 34
```

As you can see, the rows written on the slave are in binlog.000002. By grepping a decoded dump for the slave's server id, we can confirm whether a given binlog file contains events that were written directly on the slave.
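The grep check can be sketched as follows. In reality `b002.sql` would come from `mysqlbinlog -vv --base64-output=decode-rows binlog.000002 > b002.sql` as above; here we fabricate a two-line dump so the pipeline runs standalone, and the second server id (17001) is a hypothetical master id for illustration:

```shell
# Fake a tiny decoded dump: one event from the slave (18001, per this
# test environment), one from a hypothetical master (17001).
printf '%s\n' \
  '#240221 16:10:25 server id 18001 end_log_pos 416 CRC32 0xcc81584b Table_map: `scott`.`dept`' \
  '#240221 16:10:25 server id 17001 end_log_pos 520 CRC32 0x12345678 Table_map: `scott`.`dept`' \
  > b002.sql

# Count events whose server id matches the slave: these were written
# directly on the slave rather than replicated from the master.
grep -c 'server id 18001' b002.sql   # → 1
```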

Copying the Slave's Binlog to the Master

```
$ scp binlog.000002 192.168.137.179:/tmp/
Warning: Permanently added '192.168.137.179' (ECDSA) to the list of known hosts.
root@192.168.137.179's password:
binlog.000002                               100%  836     1.1MB/s   00:00
```

Applying the Slave's Binary Log

Apply the slave's binlog on the master:

```
$ mysqlbinlog binlog.000002 | mysql -uroot -p -h127.1 -P3308
```

After the master applied the slave's binary log, the binary log list on the slave was unchanged:

```
greatsql> show binary logs;
+---------------+-----------+-----------+
| Log_name      | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000001 |       498 | No        |
| binlog.000002 |       836 | No        |
| binlog.000003 |       237 | No        |
+---------------+-----------+-----------+
3 rows in set (0.00 sec)
```

The replication link remains healthy:

```
greatsql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.137.179
                  Master_User: root
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File: binlog.000001
          Read_Master_Log_Pos: 1059
               Relay_Log_File: oracle_dts-relay-bin.000002
                Relay_Log_Pos: 1269
        Relay_Master_Log_File: binlog.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
```

Notice that when the master applied the binary log generated on the slave, the slave did not apply those events a second time (by default, the replication I/O (receiver) thread does not write binary log events to the relay log if they carry the replica's own server ID; this optimization also helps save disk usage), so no primary key conflict occurred and replication did not break.
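This filtering only works because the master and slave have distinct server IDs; if they shared one, the replica would silently discard all of the master's events. A quick sanity check (a sketch, not from the original article) is to run on both instances:

```sql
-- The values returned on master and slave must differ
SHOW VARIABLES LIKE 'server_id';
```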

Checking the master's data:

```
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
|     80 | IT         | SZ       |
|     90 | SALES      | SZ       |
+--------+------------+----------+
8 rows in set (0.00 sec)
```

In subsequent tests, data written on the master continued to replicate to the slave normally.


Enjoy GreatSQL :)

About GreatSQL

GreatSQL is a domestically developed open-source database suited to financial-grade applications, with high performance, high reliability, high usability, and high security as core features. It can serve as an optional replacement for MySQL or Percona Server in production environments, and is completely free and compatible with MySQL and Percona Server.

Related links: GreatSQL Community · Gitee · GitHub · Bilibili


Community feedback (with rewards): https://greatsql.cn/thread-54-1-1.html

Community blog call for posts (with rewards): https://greatsql.cn/thread-100-1-1.html

(Questions about this article, or your own insights, are welcome on the community site.)

Technical exchange groups:

QQ group: 533341697

WeChat group: add the GreatSQL community assistant (WeChat ID: wanlidbc) as a friend, and the assistant will invite you into the group.

Original link: https://my.oschina.net/GreatSQL/blog/11044714