
Best practices for MySQL High Availability in 2017. Colin Charles, Chief Evangelist, Percona Inc. colin.charles@percona.com / byte@bytebot.net | http://www.bytebot.net/blog/ | @bytebot on Twitter. Percona Live, Santa Clara, California, USA, 24 April 2017.


  1. Why is MariaDB Server GTID different compared to MySQL 5.6? • MySQL 5.6 GTID does not support multi-source replication (only 5.7 supports this) • Supports --log-slave-updates=0 for efficiency (like 5.7) • Enabled by default • Can be turned on without having to restart the topology (just like 5.7) 40

  2. Crash-safe slave (w/InnoDB DML) • Replace non-transactional file relay_log.info with transactional mysql.rpl_slave_state • Changes to rpl_slave_state are transactionally recovered after crash along with user data. 41

  3. Crash-safe slaves in 5.6? • Not using GTID: you can put relay-log.info into an InnoDB table, which gets updated along with the transaction • Using GTID: relay-log.info is not used; the slave position is stored in the binlog on the slave (--log-slave-updates required) • Using parallel replication: uses a different InnoDB table for this use case 42

  4. Replication domains • Keeps the central concept that replication is just applying events in order from a serial binlog stream • Allows multi-source replication with multiple active masters • Lets the DBA configure multiple independent binlog streams (one per active master: mysqld --gtid-domain-id=#) • Events within one stream are ordered the same across the entire replication topology • Events between different streams can be in a different order on different servers • Binlog position is one ID per replication domain 43
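As a sketch of how the independent streams are configured, assuming two active masters with illustrative server and domain IDs (my.cnf fragment):

```ini
# my.cnf sketch for one of two active MariaDB masters (values are illustrative)
[mysqld]
server_id      = 1
log_bin        = master1-bin
gtid_domain_id = 1   # independent binlog stream for this active master

# the second active master would use server_id = 2, gtid_domain_id = 2
```

Each domain then carries its own GTID sequence, which is what lets slaves track multiple masters with one position per domain.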

  5. Parallel replication • Multi-source replication from different masters executed in parallel • Queries from different domains are executed in parallel • Queries that are run in parallel on the master are run in parallel on the slave (based on group commit). • Transactions modifying the same table can be updated in parallel on the slave! • Supports both statement based and row based replication. 44
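A minimal my.cnf sketch for enabling this on a MariaDB 10.1+ slave; the thread count and mode below are illustrative, not recommendations:

```ini
# my.cnf sketch: MariaDB parallel replication on a slave (tune to your workload)
[mysqld]
slave_parallel_threads = 8            # worker threads applying events in parallel
slave_parallel_mode    = optimistic   # 10.1.3+; 'conservative' restricts to
                                      # transactions that group-committed together
```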

  6. All in… sometimes it can get out of sync • Changed information on slave directly • --replicate-ignore-db with fully qualified queries • Statement based replication: non-deterministic SQL (UPDATE/DELETE with LIMIT and without ORDER BY), triggers & stored procedures • Master in MyISAM, slave in InnoDB • Binlog corruption on master • PURGE BINARY LOGS issued and not enough files to update slave • read_buffer_size larger than max_allowed_packet • Bugs? (deadlocks) 45

  7. Replication Monitoring • Percona Toolkit is important • pt-slave-find: find slave information from master • pt-table-checksum: online replication consistency check • executes checksum queries on master • pt-table-sync: synchronise table data efficiently • changes data, so backups important 46

  8. Replication Monitoring with PMM • http://pmmdemo.percona.com/ 47

  9. Statement Based Replication Binlog
$ mysqlbinlog mysql-bin.000001
# at 3134
#140721 13:59:57 server id 1 end_log_pos 3217 CRC32 0x974e3831 Query thread_id=9 exec_time=0 error_code=0
SET TIMESTAMP=1405943997/*!*/;
BEGIN
/*!*/;
# at 3217
#140721 13:59:57 server id 1 end_log_pos 3249 CRC32 0x8de28161 Intvar
SET INSERT_ID=2/*!*/;
# at 3249
#140721 13:59:57 server id 1 end_log_pos 3370 CRC32 0x121ef29f Query thread_id=9 exec_time=0 error_code=0
SET TIMESTAMP=1405943997/*!*/;
insert into auto (data) values ('a test 2')
/*!*/;
# at 3370
#140721 13:59:57 server id 1 end_log_pos 3401 CRC32 0x34354945 Xid = 414
COMMIT/*!*/;

  10. Dynamic replication variable control • SET GLOBAL binlog_format=‘STATEMENT’ | ‘ROW’ | ‘MIXED’ • Can also be set as a session level • Dynamic replication filtering variables on MariaDB 5.3+, MySQL 5.7+ 49
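For example (requires the SUPER privilege; avoid switching in the middle of a transaction, and the specific values here are just for illustration):

```sql
-- New connections pick up the global value
SET GLOBAL binlog_format = 'ROW';

-- Current session only
SET SESSION binlog_format = 'STATEMENT';

-- Verify what is in effect
SHOW VARIABLES LIKE 'binlog_format';
```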

  11. Row based replication event
> mysqlbinlog mysql-bin.*
# at 3401
#140721 14:03:59 server id 1 end_log_pos 3477 CRC32 0xa37f424a Query thread_id=9 exec_time=0 error_code=0
SET TIMESTAMP=1405944239.559237/*!*/;
BEGIN
/*!*/;
# at 3477
#140721 14:03:59 server id 1 end_log_pos 3529 CRC32 0xf4587de5 Table_map: `demo`.`auto` mapped to number 80
# at 3529
#140721 14:03:59 server id 1 end_log_pos 3585 CRC32 0xbfd73d98 Write_rows: table id 80 flags: STMT_END_F
BINLOG '
rwHNUxMBAAAANAAAAMkNAAAAAFAAAAAAAAEABGRlbW8ABGF1dG8AAwMRDwMGZAAE5X1Y9A==
rwHNUx4BAAAAOAAAAAEOAAAAAFAAAAAAAAEAAgAD//gDAAAAU80BrwiIhQhhIHRlc3QgM5g9178=
'/*!*/;
# at 3585
#140721 14:03:59 server id 1 end_log_pos 3616 CRC32 0x5f422fed Xid = 416
COMMIT/*!*/;

  12. mysqlbinlog versions • ERROR: Error in Log_event::read_log_event(): 'Found invalid event in binary log', data_len: 56, event_type: 30 • 5.6 ships with a “streaming binlog backup server” - v.3.4; MariaDB 10 doesn’t - v.3.3 (fixed in 10.2 - MDEV-8713) • GTID variances! 51

  13. GTID
# at 471
#140721 14:20:01 server id 1 end_log_pos 519 CRC32 0x209d8843 GTID [commit=yes]
SET @@SESSION.GTID_NEXT= 'ff89bf58-105e-11e4-b2f1-448a5b5dd481:2'/*!*/;
# at 519
#140721 14:20:01 server id 1 end_log_pos 602 CRC32 0x5c798741 Query thread_id=3 exec_time=0 error_code=0
SET TIMESTAMP=1405945201.329607/*!*/;
BEGIN
/*!*/;
# at 602
#140721 14:20:01 server id 1 end_log_pos 634 CRC32 0xa5005598 Intvar
SET INSERT_ID=5/*!*/;
# at 634
#140721 14:20:01 server id 1 end_log_pos 760 CRC32 0x0b701850 Query thread_id=3 exec_time=0 error_code=0
SET TIMESTAMP=1405945201.329607/*!*/;
insert into auto (data) values ('a test 5 gtid')
/*!*/;
# at 760
#140721 14:20:01 server id 1 end_log_pos 791 CRC32 0x497a23e0 Xid = 31
COMMIT/*!*/;
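Repointing a slave with GTID auto-positioning can be sketched as follows, in MySQL 5.6+ syntax with placeholder host and credentials (MariaDB instead uses CHANGE MASTER TO MASTER_USE_GTID = slave_pos):

```sql
-- Sketch: attach a slave to a master using GTID auto-positioning
-- (host, user and password are placeholders)
CHANGE MASTER TO
  MASTER_HOST = 'server1',
  MASTER_USER = 'repluser',
  MASTER_PASSWORD = 'secret',
  MASTER_AUTO_POSITION = 1;  -- position resolved from GTID sets, not file/offset
START SLAVE;
```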

  14. SHOW SLAVE STATUS
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: server1
Master_User: repluser
Master_Port: 3306
...
Master_Log_File: server1-binlog.000008        <- io_thread (read)
Read_Master_Log_Pos: 436614719                <- io_thread (read)
Relay_Log_File: server2-relaylog.000007       <- io_thread (write)
Relay_Log_Pos: 236                            <- io_thread (write)
Relay_Master_Log_File: server1-binlog.000008  <- sql_thread
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Exec_Master_Log_Pos: 436614719                <- sql_thread
...
Seconds_Behind_Master: 0
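A minimal sketch of how a monitoring script might interpret this output. The parsing helpers and the embedded sample text are hypothetical; a real script would fetch the status through a MySQL client library rather than a string:

```python
# Sketch: decide whether the SQL thread has caught up with the IO thread
# from "SHOW SLAVE STATUS\G" style output. SAMPLE is a hypothetical excerpt.

SAMPLE = """\
Slave_IO_State: Waiting for master to send event
Master_Host: server1
Master_Log_File: server1-binlog.000008
Read_Master_Log_Pos: 436614719
Relay_Master_Log_File: server1-binlog.000008
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 436614719
Seconds_Behind_Master: 0
"""

def parse_slave_status(text):
    """Turn 'Key: value' lines into a dict, splitting on the first colon."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def is_caught_up(status):
    """Caught up when both threads run and everything read has been executed."""
    return (status["Slave_IO_Running"] == "Yes"
            and status["Slave_SQL_Running"] == "Yes"
            and status["Relay_Master_Log_File"] == status["Master_Log_File"]
            and status["Exec_Master_Log_Pos"] == status["Read_Master_Log_Pos"])

status = parse_slave_status(SAMPLE)
print(is_caught_up(status))  # True for the sample above
```

Note that Seconds_Behind_Master alone can mislead (it reads 0 when the IO thread is stopped), which is why the position comparison matters.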

  15. Slave prefetching • Replication Booster • https://github.com/yoshinorim/replication-booster-for-mysql • Prefetch MySQL relay logs to make the SQL thread faster • Tungsten has slave prefetch • Percona Server till 5.6 + MariaDB till 10.1 have InnoDB fake changes 54

  16. What replaces slave prefetching? • In Percona Server 5.7, slave prefetching has been replaced by doing intra-schema parallel replication • Feature removed from XtraDB • MariaDB Server 10.2 will also have this feature removed 55

  17. Tungsten Replicator • Replaces MySQL Replication layer • MySQL writes binlog, Tungsten reads it and uses its own replication protocol • Global Transaction ID • Per-schema multi-threaded slave • Heterogeneous replication: MySQL <-> MongoDB <-> PostgreSQL <-> Oracle • Multi-master replication • Multiple masters to single slave (multi-source replication) • Many complex topologies • Continuent Tungsten (Enterprise) vs Tungsten Replicator (Open Source) 56

  18. In today’s world, what does it offer? • open source MySQL <-> Oracle replication to aid in your migration • automatic failover without MHA • multi-master with cloud topologies too • Oracle <-> Oracle replication (this is GoldenGate for FREE) • Replication from MySQL to MongoDB • Data loading into Hadoop 57

  19. Galera Cluster • Inside MySQL, a replication plugin (wsrep) • Replaces MySQL replication (but can work alongside it too) • True multi-master, active-active solution • Virtually Synchronous • WAN performance: 100-300ms/commit, works in parallel • No slave lag or integrity issues • Automatic node provisioning 58
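A minimal my.cnf sketch for one Galera node, with illustrative paths, cluster name and addresses:

```ini
# my.cnf sketch for a single Galera Cluster node (values are illustrative)
[mysqld]
binlog_format            = ROW        # Galera requires row-based events
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2          # interleaved auto-increment locking

wsrep_on              = ON
wsrep_provider        = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name    = demo_cluster
wsrep_cluster_address = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_sst_method      = xtrabackup-v2  # state transfer for automatic provisioning
```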


  21. Percona XtraDB Cluster 5.7 • Engineering within Percona • Load balancing with ProxySQL (bundled) • PMM integration • Benefits of all the MySQL 5.7 feature-set 60

  22. Group replication • Fully synchronous replication (update everywhere), self-healing, with elasticity, redundancy • Single primary mode supported • MySQL InnoDB Cluster - a combination of group replication, Router, to make magic! • Recent blogs: • https://www.percona.com/blog/2017/02/24/battle-for-synchronous-replication-in-mysql-galera-vs-group-replication/ • https://www.percona.com/blog/2017/02/15/group-replication-shipped-early/ 61
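A sketch of the my.cnf settings a MySQL 5.7 group replication member needs; the group name UUID, hostnames and ports are placeholders:

```ini
# my.cnf sketch for one MySQL 5.7 group replication member (values illustrative)
[mysqld]
server_id                        = 1
gtid_mode                        = ON
enforce_gtid_consistency         = ON
log_bin                          = binlog
log_slave_updates                = ON
binlog_checksum                  = NONE       # required by group replication in 5.7
transaction_write_set_extraction = XXHASH64

plugin_load_add                  = 'group_replication.so'
group_replication_group_name     = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
group_replication_local_address  = 'node1:33061'
group_replication_group_seeds    = 'node1:33061,node2:33061,node3:33061'
```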

  23. MySQL NDBCLUSTER • 3 types of nodes: SQL, data and management • MySQL node provides the interface to data; alternate APIs available: LDAP, memcached, native NDB API, node.js • Data nodes (NDB storage) • different from InnoDB • transactions synchronously written to 2 nodes (or more) - replicas • transparent sharding: partitions = data nodes/replicas • automatic node provisioning, online re-partitioning • High performance: 1 billion updates / minute 62

  24. Summary of Replication Performance • SAN has "some" latency overhead compared to local disk. Can be great for throughput. • DRBD = 50% performance penalty • Replication, when implemented correctly, has no performance penalty • But MySQL replication with disk bound data set has single-threaded issues! • Semi-sync is poorer on WAN compared to async • Galera & NDB provide read/write scale-out, thus more performance 63

  25. Handling failure • How do we find out about failure? • Polling, monitoring, alerts... • Error returned to and handled in client side • What should we do about it? • Direct requests to the spare nodes (or DCs) • How to protect data integrity? • Master-slave is unidirectional: Must ensure there is only one master at all times. • DRBD and SAN have cold-standby: Must mount disks and start mysqld. • In all cases must ensure that 2 disconnected replicas cannot both commit independently. (split brain) 64

  26. Frameworks to handle failure • MySQL-MMM • Tungsten Replicator • 5.6: mysqlfailover, mysqlrpladmin • MySQL MHA • Percona Replication Manager • Severalnines ClusterControl • Orchestrator • Replication Manager 65

  27. MySQL-MMM • You have to setup all nodes and replication manually • MMM gives Monitoring + Automated and manual failover on top • Architecture consists of Monitor and Agents • Typical topology: • 2 master nodes • Read slaves replicate from each master • If a master dies, all slaves connected to it are stale • http://mysql-mmm.org/ 66

  28. Severalnines ClusterControl • Started as automated deployment of MySQL NDB Cluster • now: 4 node cluster up and running in 5 min! • Now supports • MySQL replication and Galera • Semi-sync replication • Automated failover • Manual failovers, status check, start & stop of node, replication, full cluster... from single command line. • Monitoring • Topology: Pair of semi-sync masters, additional read-only slaves • Can move slaves to new master • http://severalnines.com/ 67

  29. ClusterControl II • Handles deployment: on-premise, EC2, or hybrid (Rackspace, etc.) • Adding HAProxy as a Galera load balancer • Hot backups, online software upgrades • Workload simulation • Monitoring (real-time), health reports 68

  30. Orchestrator • Reads replication topologies, keeps state, continuous polling • Modify your topology — move slaves around • Nice GUI, JSON API, CLI 69

  31. MySQL MHA • Like MMM, specialized solution for MySQL replication • Developed by Yoshinori Matsunobu at DeNA • Automated and manual failover options • Topology: 1 master, many slaves • Choose new master by comparing slave binlog positions • Can be used in conjunction with other solutions • http://code.google.com/p/mysql-master-ha/ 70

  32. Cluster suites • Heartbeat, Pacemaker, Red Hat Cluster Suite • Generic, can be used to cluster any server daemon • Usually used in conjunction with Shared Disk or Replicated Disk solutions (preferred) • Can be used with replication. • Robust, Node Fencing / STONITH 71

  33. Pacemaker • Heartbeat, Corosync, Pacemaker • Resource Agents, Percona-PRM • Percona Replication Manager - cluster, geographical disaster recovery options • Pacemaker agent specialised on MySQL replication • https://github.com/percona/percona-pacemaker-agents/ • Pacemaker Resource Agents 3.9.3+ include Percona Replication Manager (PRM) 72

  34. VM based failover • VMWare, Oracle VM, etc can migrate / failover the entire VM guest • This isn’t the focus of the talk 73

  35. Load Balancers for multi-master clusters • Synchronous multi-master clusters like Galera require load balancers • HAProxy • Galera Load Balancer (GLB) • MaxScale • ProxySQL 74
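As an illustration, an HAProxy fragment fronting a 3-node Galera cluster might look like this; addresses and ports are placeholders, and production setups usually add an HTTP health check (e.g. clustercheck) rather than a bare TCP check:

```
# haproxy.cfg sketch: TCP load balancing across a 3-node Galera cluster
listen galera_cluster
    bind 0.0.0.0:3307
    mode tcp
    balance leastconn
    option tcpka
    server node1 10.0.0.1:3306 check
    server node2 10.0.0.2:3306 check
    server node3 10.0.0.3:3306 check
```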

  36. What is a proxy? • Lightweight application between the MySQL clients and the server • Man-in-the-middle between client/server • Communicates with one or more clients/servers

  37. Image via Giuseppe Maxia

  38. MySQL Proxy - ten years ago! • The first proxy, which had an embedded Lua interpreter • It is used in MySQL Enterprise Monitor • Lua was flexible to allow you to rewrite queries, add statements, filter, etc. • 2007-2014

  39. MariaDB MaxScale 1.0…1.4.x • GA January 2015 • The “Swiss Army Knife” - pluggable router with an extensible architecture • Logging, writing to other backends (besides MySQL), firewall filter, routing via hints, query rewriting • Binlog Server - popularised by booking.com to not have intermediate masters • Popular use case: sitting in front of a 3-node Galera Cluster

  40. MariaDB MaxScale ecosystem • First known plugin: Kafka backend written by Yves Trudeau • https://www.percona.com/blog/2015/06/08/maxscale-a-new-tool-to-solve-your-mysql-scalability-problems/ • First known credible fork: AirBnB MaxScale 1.3 • connection pooling (not 1:1, multiplexed N:M, N>M connections), requests throttling, denylist query rejection, monitoring

  41. MariaDB MaxScale 2.0 • Same Github repository, unlinked against MySQL client libraries (replaced with SQLite), CDC to Kafka, binlog events to Avro/JSON • License change from GPLv2 to Business Source License (BSL)

  42. MariaDB MaxScale 2.1 beta • Dynamic (re)configuration • Performance

  43. MySQL Router - GPLv2 • GA October 2015 • Transparent routing between applications and any backend MySQL servers • Pluggable architecture via the MySQL Harness • Failover, load balancing • This is how you manage MySQL InnoDB Cluster with mysqlsh - https://www.youtube.com/watch?v=JWy7ZLXxtZ4

  44. ProxySQL - GPLv3 • Stable December 2015 • Seamless failover (including query rerouting), load balancing • Query caching • Query rewriting • Query blocking (database aware firewall) • Query mirroring (cache warming) • Query throttling and timeouts • Runtime reconfigurable • Connection Pooling & Multiplexing • Monitoring built-in • Read/Write Split and Sharding • Included with Percona XtraDB Cluster 5.7; proxysql-admin tool available for PXC configurations • Improve database operations, understand and solve performance issues, HA to DB topology
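A sketch of configuring read/write split through the ProxySQL admin interface (port 6032); the hostgroup numbers and hostnames are illustrative:

```sql
-- Writer hostgroup 10, reader hostgroup 20 (values illustrative)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES
  (10, 'master1', 3306),
  (20, 'slave1', 3306),
  (20, 'slave2', 3306);

-- Route SELECT ... FOR UPDATE to the writer, plain SELECTs to readers
INSERT INTO mysql_query_rules
  (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES
  (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
  (2, 1, '^SELECT', 20, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
```

Everything not matched by a rule falls through to the default hostgroup of the connecting user, which is why the FOR UPDATE rule must come first.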

  45. Comparison • http://www.proxysql.com/compare

  46. ProxySQL missing features from MariaDB MaxScale • Front-end SSL encryption (client -> SSL -> proxy -> application) - issue #891 • Binlog router • Streaming binlogs to Kafka • use Maxwell’s Daemon: http://maxwells-daemon.io/ • Binlogs to Avro

  47. ProxySQL Resources • Marco Tusa: https://tusacentral.net/joomla/index.php/mysql-blogs • SeveralNines: https://severalnines.com/blog?keywords=%23proxysql • Pythian: https://www.pythian.com/blog/tag/proxysql/ • Percona: https://www.percona.com/blog/category/proxysql/

  48. Health of these projects • MariaDB MaxScale: 142 watchers, 670 stars, 199 forks, 19 contributors • MySQL Router: 25 watchers, 47 stars, 30 forks, 8 contributors • ProxySQL: 119 watchers, 951 stars, 145 forks, 25 contributors

  49. Punch cards

  50. What do you use? • MySQL Router is clearly very interesting going forward, especially with the advent of the MySQL InnoDB Cluster • ProxySQL is a great choice today, has wide use, also has Percona Monitoring & Management (PMM) integration • MariaDB MaxScale pre-2.0 if you really need a binlog router • Server you’re using?

  51. Resources • ProxySQL: https://groups.google.com/forum/#!forum/proxysql • MariaDB MaxScale: https://groups.google.com/forum/#!forum/maxscale • MySQL Router: https://forums.mysql.com/list.php?146 • Vitess: https://groups.google.com/forum/#!forum/vitess

  52. JDBC/PHP drivers • JDBC - multi-host failover feature (just specify master/slave hosts in the properties) • true for MariaDB Java Connector too • PHP handles this too - mysqlnd_ms • Can handle read-write splitting, round robin or random host selection, and more 94
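The multi-host failover support boils down to listing hosts in the connection URL; the hostnames and database name below are placeholders:

```
# MySQL Connector/J: first host is primary, the rest are fallbacks
jdbc:mysql://master1:3306,slave1:3306,slave2:3306/mydb?failOverReadOnly=true

# MariaDB Java Connector: replication mode adds read/write splitting
jdbc:mariadb:replication://master1:3306,slave1:3306,slave2:3306/mydb
```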

  53. Clustering: solution or part of problem? • "Causes of Downtime in Production MySQL Servers" whitepaper, Baron Schwartz, VividCortex • Human error • SAN • Clustering framework + SAN = more problems • Galera is replication based, has no false positives as there’s no “failover” moment, you don’t need a clustering framework (JDBC or PHP can load balance), and is relatively elegant overall 95

  54. InnoDB based? • Use InnoDB, continue using InnoDB, know workarounds to InnoDB • All solutions but NDB are InnoDB. NDB is great for telco/session management for high bandwidth sites, but setup, maintenance, etc. is complex 96

  55. Replication type • Competence choices: Replication - MySQL DBA manages; DRBD - Linux admin manages; SAN - requires domain controller • Performance: SAN has higher latency than local disk; DRBD has higher latency than local disk; Replication has little overhead • Operations: GTID helps tremendously; DRBD (disk level) = cold standby = longer failover; Replication = hot standby = shorter failover • Redundancy: Shared disk = SPoF; Shared nothing = redundant 97

  56. SBR vs RBR? Async vs sync? • row based: deterministic • statement based: dangerous • GTID: easier setup & failover of complex topologies • async: data loss in failover • sync: best • multi-threaded slaves: scalability (hello 5.6+, Tungsten) 98

  57. Conclusions for choice • Simpler is better • MySQL replication > DRBD > SAN • Sync replication = no data loss • Async replication = no latency (WAN) • Sync multi-master = no failover required • Multi-threaded slaves help in disk-bound workloads • GTID increases operational usability • Galera provides all this with good performance & stability 99

  58. Deep-dive: MHA 100
