MySQL High Availability Solutions
Alex Poritskiy, Percona


1. MySQL High Availability Solutions (Alex Poritskiy, Percona)

2. The Five 9s of Availability

Annual downtime by availability tier; each tier adds a technology that covers another class of failure:
● 35 days / ~90%: Unmanaged -- Small Business
● 4 days / 99%: Well-Managed (maintenance operations, hardware & software upgrades) -- Mainstream Business
● 8 hours / 99.9%: Replication Technologies (hardware & software failures) -- ISPs & Data Centers
● 50 mins / 99.99%: Clustering Technologies (power & network failures) -- Banking, Medical
● 5 mins / 99.999%: Geographical Redundancy (clustering & disasters) -- Telco, Military & Defense

3. MySQL High Availability Solutions
● MySQL Replication
● MHA / Orchestrator by GitHub plus MySQL Replication
● Percona XtraDB Cluster / Galera
● MySQL InnoDB Cluster (Group Replication)
● MySQL NDB Cluster

4. MySQL Replication (99.9% availability)

5. MySQL Replication Topologies
● Master > Slave
● Master > Slaves
● Master > Slave > Slaves
● Masters > Slave (Multi-Source)
● Circular (Multi-Master)
● Master < > Master (Multi-Master)
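
A rough sketch of wiring up the simplest Master > Slave topology; hostnames, the account, and the binlog coordinates below are placeholders, not values from the slides:

    -- On the master: create an account the slave will replicate as.
    CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

    -- On the slave: point it at the master's binary log and start the
    -- I/O and SQL threads (file/position taken from SHOW MASTER STATUS).
    CHANGE MASTER TO
      MASTER_HOST     = 'master.example.com',
      MASTER_USER     = 'repl',
      MASTER_PASSWORD = 'repl_password',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS  = 4;
    START SLAVE;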

6. Inside MySQL Replication
[Diagram: Web/App servers send writes and reads to the master (mysqld). Writes land in the master's binary logs (plus index). The slave's I/O thread pulls binlog events over the replication link into its relay log; the SQL thread applies them to the slave's data.]
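
The two slave-side threads from the diagram can be checked directly; a minimal sketch (\G is the mysql client's vertical output terminator):

    -- On the slave: Slave_IO_Running / Slave_SQL_Running report the two
    -- threads from the diagram; Seconds_Behind_Master approximates lag.
    SHOW SLAVE STATUS\G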

7. MySQL Replication (Scale-Out)
[Diagram: Web/App servers send writes and reads to the master server; reads can also go to the slave server, which replicates from the master's binary logs (index & binlog rotation) and can serve backups.]
Possible slave roles:
● Fail-over server
● Used for performing backups
● Read load balancing
● Additional slaves allow scale-out

8. MySQL Replication (Fail-Over)
[Diagram: the master server fails; writes and reads are manually failed over from the Web/App servers to the slave server, which is promoted to the new master.]

9. MySQL Replication Pros
● Mature, standard feature of MySQL.
● Compatible with many topologies.
● Supports multi-datacenter “clusters”.
● Easy to configure.
● Continues to gain new features (see the sketch below):
  ▪ Semi-synchronous replication
  ▪ Row-based binary logs
  ▪ Parallel SQL threads
  ▪ Global Transaction ID (GTID)
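
A hedged sketch of enabling those features with MySQL 8.0's SET PERSIST; the semi-sync plugin names are the stock ones, and note that enabling GTID on a live server actually requires stepping gtid_mode through its intermediate values rather than the single statement shown:

    -- Row-based binary logging and parallel replication appliers.
    SET PERSIST binlog_format = 'ROW';
    SET PERSIST slave_parallel_type = 'LOGICAL_CLOCK';
    SET PERSIST slave_parallel_workers = 4;

    -- Semi-synchronous replication (master side).
    INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
    SET PERSIST rpl_semi_sync_master_enabled = ON;

    -- GTID-based replication (simplified; online the mode must go
    -- OFF -> OFF_PERMISSIVE -> ON_PERMISSIVE -> ON, per the manual).
    SET PERSIST enforce_gtid_consistency = ON;
    SET PERSIST gtid_mode = ON;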

10. MySQL Replication Cons
● Slave lag.
  ▪ Use row-based binary logs and other strategies.
● Slave drift.
  ▪ Use read-only, pt-table-checksum, pt-table-sync (see below).
● No automatic failover.
  ▪ Use haproxy and/or keepalived or other solutions.
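
To guard against drift, slaves are commonly pinned read-only; a minimal sketch (pt-table-checksum and pt-table-sync are separate Percona Toolkit command-line tools, run outside the SQL session):

    -- On each slave: refuse writes from ordinary clients.
    -- super_read_only also blocks SUPER-privileged users; the
    -- replication SQL thread is still allowed to apply changes.
    SET PERSIST read_only = ON;
    SET PERSIST super_read_only = ON;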

11. Topology Managers + MySQL Replication (99.99% availability)

12. MHA toolkit [diagram]

13. MHA toolkit [diagram]

14. MHA toolkit
● Note that MHA never rebuilds an existing multi-master replication link; it always reconfigures the topology as master-slave.
● MHA only takes care of the MySQL failover itself, but it has hooks for custom code to make the application and any load balancers aware of the change.

15. MHA toolkit: Pros
● Uses traditional MySQL replication.
● Easy to deploy for an existing replication group.
● Subscribes slaves to the correct binlog position.
● Strong command-line support.
● Very good point-in-time recovery via binary logs.

16. MHA toolkit: Cons
● Disconnects after failover.
● No GUI.
● No longer actively updated; long-term stable release only.
● Doesn't crawl and auto-discover the topology.
● Cannot manage PXC, co-masters, or active/active setups.
● Failover isn't guaranteed to be instant; scanning the logs can cause delays.
● Still asynchronous replication (consistency may be compromised).
● The monitor can be a single point of failure (it is only needed for failover).

17. Orchestrator by GitHub
● Orchestrator crawls through your replication topology and updates its GUI based on the current topology and health.
● Orchestrator also knows about binary log positions, GTID, and Binlog Servers.
● Refactoring replication topologies can be a matter of dragging and dropping a replica under another master.

18. Orchestrator by GitHub [diagram]

19. Orchestrator: Pros
● Integrated into PMM.
● GUI in addition to the command line.
● Various options for managing replication (GTID, Pseudo-GTID, etc.).
● Built for GTID.
● Can manage backups and other tasks.
● Currently in development with a dedicated team.
● Auto-discovers the topology.
● Works with co-masters.

20. Orchestrator: Cons
● Does not recognise PXC nodes as part of a single cluster (sees them as separate masters).
● Point-in-time recovery relies on good slaves; it doesn't do binary log discovery.

21. PXC, MySQL InnoDB Cluster (99.999% availability)

22. Percona XtraDB Cluster (PXC)
● Synchronous replication: every node is writable.

23. Percona XtraDB Cluster (PXC) [diagram]

24. Percona XtraDB Cluster (PXC): 5 nodes in 2 datacenters (example)
[Diagram: PXC nodes split between the main data center (gmcast.segment=1) and a secondary data center (gmcast.segment=2), replicating over the WAN.]

25. Percona XtraDB Cluster (PXC): 3 nodes in a single datacenter + async slave
[Diagram: a three-node Percona XtraDB Cluster in the main data center, plus a slave server in the secondary data center fed by asynchronous MySQL replication over the WAN; the slave is not part of PXC.]

26. Percona XtraDB Cluster (PXC): Pros
● Writing to any node updates all other nodes synchronously (like semi-sync replication).
● All nodes can be writers.
● Adding a new node, or recovering an offline node, initialises its data automatically.
● Multi-datacenter capable (assuming low latency).
● Quorum-based selection of the nodes that can take queries; nodes not in the quorum shut down and must copy data before they can rejoin the cluster (see the status check below).
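
A minimal sketch of checking a node's view of the quorum from SQL; these wsrep_% status variables are standard in Galera-based clusters such as PXC:

    -- 'Primary' means this node is in the quorate component;
    -- 'Synced' means it holds a full copy and is taking queries.
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';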

27. Percona XtraDB Cluster (PXC): Cons
● Not suitable for large transactions (see the limits below).
● Not suitable for heavy write traffic, especially rapid writes to a hotspot; don't rely on multiple writers.
● Reinitialising a node can be expensive.
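
The large-transaction limit comes from Galera's per-transaction writeset caps; a sketch of inspecting them (raising them is possible but only pushes the problem out):

    -- Upper bounds on a single transaction's replicated writeset.
    SHOW GLOBAL VARIABLES LIKE 'wsrep_max_ws_size';
    SHOW GLOBAL VARIABLES LIKE 'wsrep_max_ws_rows';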

28. MySQL InnoDB Cluster
● Synchronous replication: every node is writable as well.

29. MySQL InnoDB Cluster: Pros
● Writing to any node updates all other nodes synchronously (like semi-sync replication).
● All nodes can be writers.
● Single-Primary and Multi-Primary modes available (see the sketch below).
● Quorum-based selection of the nodes that can take queries.
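
A hedged sketch of inspecting Group Replication membership and switching between the two modes from SQL; the mode-switch functions exist in MySQL 8.0.13 and later:

    -- Who is in the group, and in what state (ONLINE, RECOVERING, ERROR)?
    SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE
      FROM performance_schema.replication_group_members;

    -- Switch the whole group between Multi-Primary and Single-Primary.
    SELECT group_replication_switch_to_multi_primary_mode();
    SELECT group_replication_switch_to_single_primary_mode();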

30. MySQL InnoDB Cluster: Cons
● Not suitable for large transactions.
● Not suitable for heavy write traffic, especially rapid writes to a hotspot; don't rely on multiple writers.
● No SST concept: all nodes need to start with the same data.
● A replication error simply stops replication to that node (see the recovery sketch below).
● Inconsistencies can/should be fixed manually.
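
When a member drops into the ERROR state, a common manual recovery is to repair the divergence and then rejoin it; a minimal sketch, assuming the inconsistent data has already been fixed:

    -- On the errored node, after repairing the inconsistent rows:
    STOP GROUP_REPLICATION;
    START GROUP_REPLICATION;  -- rejoins and runs distributed recovery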

31. Percona XtraDB Cluster and MySQL InnoDB Cluster: similar limitations
● Large and long-running transactions:
  ▪ higher chance of failures
  ▪ inefficient replication of large transactions
● Workload hotspots (updating the same set of rows in parallel)
● Often, writing to one node is the best solution

32. MySQL NDB Cluster (99.9999% availability)

33. MySQL NDB Cluster
● Shared-nothing architecture
● Synchronous (two-phase commit)
● In-memory store
● Traditionally used by:
  ▪ Telecom operators
  ▪ Gaming companies

34. MySQL NDB Cluster: Basic Architecture
[Diagram: Web/App servers use a MySQL Server or the NDB API for all writes and reads. The cluster consists of API nodes (NDB API), two management servers, and four data nodes running the NDB storage engine with memory and disk storage.]

35. MySQL NDB Cluster Components
● Data Nodes
  ▪ Store redundant data fragments
  ▪ Combined to form Node Groups
● Management Nodes
  ▪ Used at startup
  ▪ Re-configuration
● API Nodes
  ▪ Standard SQL interface
  ▪ NDB API (an object-oriented application programming interface for NDB Cluster that implements indexes, scans, transactions, and event handling)
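
From any SQL (API) node, the cluster's node roster can be inspected through the built-in ndbinfo schema; a minimal sketch:

    -- One row per data node: its id, uptime, and status (e.g. STARTED).
    SELECT node_id, uptime, status
      FROM ndbinfo.nodes;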

36. MySQL NDB Cluster Data Node Architecture
[Diagram: an example table partitioned across the cluster. Four data nodes, two replicas, two node groups. Node Group 1 holds Partition 1 (P1-Primary / P1-Secondary) and Partition 2 (P2-Secondary / P2-Primary); Node Group 2 holds Partitions 3 and 4 the same way.]

Example table:
ID  Capital        Country  UTC
1   Copenhagen     Denmark   2
2   Berlin         Germany   2
3   New York City  USA      -5
4   Tokyo          Japan     9
5   Athens         Greece    2
6   Moscow         Russia    4
7   Oslo           Norway    2
8   Beijing        China     8
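
A sketch of the slide's example as an NDB table; the table name and column types are assumptions. NDB hash-partitions rows on the primary key across the node groups automatically:

    -- Requires a running NDB Cluster and ndbcluster support in mysqld.
    CREATE TABLE capitals (
      id      INT NOT NULL PRIMARY KEY,  -- default partitioning key
      capital VARCHAR(64) NOT NULL,
      country VARCHAR(64) NOT NULL,
      utc     INT NOT NULL               -- UTC offset in hours
    ) ENGINE = NDBCLUSTER;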

37. MySQL NDB Cluster Data Node Architecture (node failure)
[Diagram: the same four-node layout with one data node failed in each node group. The data is still available: every partition's surviving replica, primary or secondary, lives on the remaining node of its node group.]
