  1. Make HTAP Real with TiFlash: A TiDB Native Columnar Extension

  2. About me
     ● Liu Cong, 刘聪
     ● Technical Director, Analytical Product @ PingCAP
     ● Previously
       ○ Principal Engineer @ Qiniu Cloud
       ○ Technical Director @ Kingsoft
     ● Focus on distributed systems and database engines

  3. Traditional Data Platform
     [Diagram: OLTP databases, big data, and NoSQL sources feed a data
     warehouse, Hadoop, a data lake, compute engines, and analytical
     databases via ETL.]
     A traditional data platform relies on a complex architecture that
     moves data around via ETL. This introduces maintenance cost and
     delays the arrival of data in the data warehouse.

  4. Fundamental Conflicts
     ● Large / batch processing vs. point / short access
       ○ Row format for OLTP
       ○ Columnar format for OLAP
     ● Workload interference
       ○ A single large analytical query might cause disaster for your OLTP workload

  5. A Popular Solution
     Use different types of databases:
     ● For live and fast data, use an OLTP-specialized database
     ● For historical data, use Hadoop or an analytical database
     ● Offload data via an ETL process into your Hadoop cluster or analytical database
       ○ Maybe once per day

  6. Good enough, really?

  7. TiFlash Extension

  8. What's TiFlash Extension
     TiFlash is an extended analytical engine for TiDB
     ● Powered by columnar storage and a vectorized compute engine
     ● Tightly integrated with TiDB
     ● Clear workload isolation: analytics do not impact OLTP
     ● Partially based on ClickHouse, with tons of modifications
     ● Speeds up reads for both TiSpark and TiDB

  9. Architecture
     [Diagram: a Spark cluster (TiSpark workers) and TiDB sit on top of
     two clusters: the TiFlash Extension cluster (TiFlash nodes 1-2) and
     the TiKV cluster (TiKV nodes 1-3, stores 1-3), with regions 1-4
     replicated across both.]

  10. Columnstore vs Rowstore
      ● Columnar storage stores data in columns instead of rows
        ○ Suitable for analytical workloads
          ■ Makes column pruning possible
        ○ Makes compression possible, further reducing IO
          ■ Roughly ⅕ of the average storage requirement
        ○ Bad at small random IO
          ■ Which is the typical workload for OLTP
      ● Rowstore is the classic format for databases
        ○ Researched and optimized for OLTP scenarios for decades
        ○ Cumbersome in analytical use cases

  11. Columnstore vs Rowstore
      SELECT avg(age) FROM employee;

      id    name   age
      0962  Jane   30
      7658  John   45
      3589  Jim    20
      5523  Susan  52

      A rowstore lays these records out row by row; a columnstore lays
      each column out contiguously. Usually you don't read all columns
      in a table when performing analytics. In a columnstore you avoid
      the unnecessary IO, while in a rowstore you have to read them all.
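To make the IO argument above concrete, here is a minimal Go sketch (a hypothetical layout, not TiFlash's actual storage format) of the same four employees stored row-wise and column-wise; computing avg(age) over the columnar layout touches only the age column, while the row layout drags id and name along:

    package main

    import "fmt"

    // Rowstore: each record is stored contiguously.
    type Row struct {
        ID   string
        Name string
        Age  int
    }

    var rows = []Row{
        {"0962", "Jane", 30}, {"7658", "John", 45},
        {"3589", "Jim", 20}, {"5523", "Susan", 52},
    }

    // Columnstore: each column is stored contiguously.
    var (
        ids   = []string{"0962", "7658", "3589", "5523"}
        names = []string{"Jane", "John", "Jim", "Susan"}
        ages  = []int{30, 45, 20, 52}
    )

    func main() {
        // Rowstore scan: whole rows are read just to get one field.
        sum := 0
        for _, r := range rows {
            sum += r.Age
        }
        fmt.Println("rowstore avg(age):", float64(sum)/float64(len(rows)))

        // Columnstore scan: only the age column is read (column pruning),
        // and a homogeneous []int also compresses far better than mixed rows.
        sum = 0
        for _, age := range ages {
            sum += age
        }
        fmt.Println("columnstore avg(age):", float64(sum)/float64(len(ages)))
    }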

  12. Raft Learner
      TiFlash synchronizes data into the columnstore via Raft Learner
      ● Strong consistency on read, enabled by the Raft protocol
      ● Introduces almost zero overhead for the OLTP workload
        ○ Except the network overhead of sending extra replicas
        ○ Slight overhead on read (a Raft index check per region; regions
          are 96 MB by default)
        ○ Multiple learners are possible, to speed up hot data reads

  13. Raft Learner
      [Diagram: Region A replicated from three TiKV peers to a TiFlash
      node.]
      Instead of connecting as a Raft Follower, regions in TiFlash act
      as Raft Learners. When data is written, the Raft leader does not
      wait for the learner to finish writing. Therefore, TiFlash
      introduces almost no overhead when replicating data.

  14. Raft Learner
      [Diagram: Raft Leader at log index 4, Raft Learner at index 3.]
      When being read, the Raft Learner sends a request to the Leader to
      check the Raft log index, to see whether its data is up to date.

  15. Raft Learner
      [Diagram: Raft Learner caught up to log index 4.]
      After its data catches up via the Raft log, the Learner then
      serves the read request.
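A minimal Go sketch of the read flow on slides 13-15 (all names hypothetical, not TiKV's actual API): the learner asks the leader for its committed Raft index, applies log entries until it catches up, and only then serves the read; writes never wait for the learner, which is why replication adds almost no OLTP overhead:

    package main

    import "fmt"

    // Leader tracks the latest committed Raft log index.
    type Leader struct{ committedIndex uint64 }

    // ReadIndex returns the leader's committed index (step 1 of the read).
    func (l *Leader) ReadIndex() uint64 { return l.committedIndex }

    // Learner applies the replicated log asynchronously.
    type Learner struct {
        appliedIndex uint64
        log          []string // replicated entries waiting to be applied
    }

    // applyTo catches the learner up to the given log index.
    func (ln *Learner) applyTo(index uint64) {
        for ln.appliedIndex < index {
            ln.appliedIndex++
            fmt.Println("apply:", ln.log[ln.appliedIndex-1])
        }
    }

    // Read serves a strongly consistent read: check the leader's index,
    // wait until local data catches up, then read locally.
    func (ln *Learner) Read(leader *Leader) {
        target := leader.ReadIndex()
        if ln.appliedIndex < target {
            ln.applyTo(target) // data was stale; catch up via the Raft log
        }
        fmt.Println("serve read at index", ln.appliedIndex)
    }

    func main() {
        leader := &Leader{committedIndex: 4}
        learner := &Learner{appliedIndex: 3, log: []string{"w1", "w2", "w3", "w4"}}
        learner.Read(leader) // applies w4 first, then serves the read
    }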

  16. TiFlash goes beyond the columnar format

  17. Scalability
      ● An HTAP database needs to store a huge amount of data
      ● Scalability is very important
      ● TiDB relies on multi-Raft for scalability (see the sketch below)
        ○ One command to add / remove a node
        ○ Scaling is fully automatic
        ○ Smooth and painless data rebalance
      ● TiFlash adopts the same design
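A minimal Go sketch of the multi-Raft rebalance idea (hypothetical, not PD's real scheduler): data is split into regions, and when an empty node joins, region replicas migrate from the fullest node until the cluster is roughly even, so scaling out requires no manual resharding:

    package main

    import "fmt"

    // rebalance moves region replicas one at a time from the fullest
    // node to the emptiest one; each move is a Raft membership change.
    func rebalance(regionsPerNode map[string][]int) {
        for {
            var maxNode, minNode string
            for n := range regionsPerNode {
                if maxNode == "" || len(regionsPerNode[n]) > len(regionsPerNode[maxNode]) {
                    maxNode = n
                }
                if minNode == "" || len(regionsPerNode[n]) < len(regionsPerNode[minNode]) {
                    minNode = n
                }
            }
            if len(regionsPerNode[maxNode])-len(regionsPerNode[minNode]) <= 1 {
                return // balanced enough
            }
            src := regionsPerNode[maxNode]
            region := src[len(src)-1]
            regionsPerNode[maxNode] = src[:len(src)-1]
            regionsPerNode[minNode] = append(regionsPerNode[minNode], region)
            fmt.Printf("move region %d: %s -> %s\n", region, maxNode, minNode)
        }
    }

    func main() {
        cluster := map[string][]int{
            "tikv-1": {1, 2, 3, 4}, "tikv-2": {5, 6, 7, 8},
            "tikv-3": {}, // freshly added node
        }
        rebalance(cluster)
    }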

  18. Isolation
      Perfect resource isolation
      ● Data rebalance based on the "label" mechanism (sketched after
        the next slide)
        ○ Dedicated nodes for TiFlash / columnstore
        ○ TiFlash nodes have their own AP label
        ○ Rebalance happens only between AP-labeled nodes
      ● Computation isolation is possible by nature
        ○ Use a different set of compute nodes
        ○ Read only from nodes with the AP label

  19. Isolation
      [Diagram: Region 1 has peers 1-3 with the TP label on TiKV nodes
      1-3 and an AP-label-constrained peer 4 on a TiFlash node; TiDB /
      TiSpark read from the AP side.]
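A minimal Go sketch of the label mechanism (hypothetical names, not PD's actual placement API): columnar replicas may only be scheduled onto stores carrying the AP label, so analytical replicas and their rebalance traffic never land on the TP (TiKV) nodes:

    package main

    import "fmt"

    // Store is a storage node with a placement label.
    type Store struct {
        Name  string
        Label string // "TP" for TiKV nodes, "AP" for TiFlash nodes
    }

    // candidatesFor returns the stores allowed to hold a replica with the
    // given label; rebalance of columnar replicas stays inside this set.
    func candidatesFor(replicaLabel string, stores []Store) []Store {
        var out []Store
        for _, s := range stores {
            if s.Label == replicaLabel {
                out = append(out, s)
            }
        }
        return out
    }

    func main() {
        stores := []Store{
            {"tikv-1", "TP"}, {"tikv-2", "TP"}, {"tikv-3", "TP"},
            {"tiflash-1", "AP"}, {"tiflash-2", "AP"},
        }
        for _, s := range candidatesFor("AP", stores) {
            fmt.Println("columnar replica may live on:", s.Name)
        }
    }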

  20. Integration
      ● TiFlash is tightly integrated with TiDB / TiSpark
        ○ TiDB / TiSpark may choose to read from either side
          ■ Based on cost (see the sketch after the next slide)
        ○ When reading the TiFlash replica fails, the TiKV replica is
          read transparently
        ○ Data from both sides can be joined in a single query

  21. Integration
      SELECT AVG(s.price)
      FROM product p, sales s
      WHERE p.pid = s.pid
        AND p.batch_id = 'B1328';

      [Diagram: for this query, TiDB / TiSpark runs an index scan
      (batch_id = 'B1328') against the TiKV cluster and a table scan
      (price, pid) against the TiFlash Extension cluster, joining the
      two.]
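A minimal Go sketch of the access-path choice behind this plan (hypothetical interfaces, not TiDB's actual optimizer): each table gets a priced path per engine, the cheapest one wins, and a failed TiFlash read falls back to the TiKV replica transparently:

    package main

    import (
        "errors"
        "fmt"
    )

    // Path is one way to read a table, with an estimated cost.
    type Path struct {
        Engine string
        Cost   float64
        Read   func() (string, error)
    }

    // choosePath picks the cheapest access path for a table.
    func choosePath(paths []Path) Path {
        best := paths[0]
        for _, p := range paths[1:] {
            if p.Cost < best.Cost {
                best = p
            }
        }
        return best
    }

    func main() {
        tikvRead := func() (string, error) { return "rows from TiKV", nil }
        tiflashRead := func() (string, error) { return "", errors.New("replica unavailable") }

        paths := []Path{
            {"TiKV index scan", 120.0, tikvRead},
            {"TiFlash table scan", 40.0, tiflashRead},
        }
        p := choosePath(paths) // TiFlash wins on cost for this scan
        res, err := p.Read()
        if err != nil {
            fmt.Println(p.Engine, "failed, falling back to TiKV")
            res, _ = tikvRead() // transparent fallback to the row replica
        }
        fmt.Println("result:", res)
    }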

  22. MPP Support
      TiFlash nodes form an MPP cluster by themselves
      ● Full computation support on the MPP layer
      ● Speeds up TiDB, since TiDB itself is not an MPP design
      ● Speeds up TiSpark by avoiding writing to disk during shuffle

  23. MPP Support
      [Diagram: TiDB / TiSpark acts as the coordinator, sending plan
      segments to MPP workers on TiFlash nodes 1-3.]
      TiFlash nodes exchange data among themselves, enabling complex
      operators like distributed join (a sketch of the exchange step
      follows).
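A minimal Go sketch of that exchange step (hypothetical, not TiFlash's real MPP protocol): rows are hash-partitioned on the join key so matching keys from both join sides meet on the same worker, and the transfer happens in memory, which is exactly the step a Spark shuffle would spill to disk:

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // Row is one tuple flowing through the exchange.
    type Row struct {
        Key string // join key, e.g. pid
        Val int
    }

    // partition assigns a row to one of n workers by hashing its join key.
    func partition(key string, n int) int {
        h := fnv.New32a()
        h.Write([]byte(key))
        return int(h.Sum32()) % n
    }

    func main() {
        const workers = 3
        // In-memory "inboxes" standing in for the worker-to-worker exchange.
        inbox := make([][]Row, workers)

        input := []Row{{"B1328", 10}, {"A0001", 7}, {"B1328", 5}}
        for _, r := range input {
            w := partition(r.Key, workers)
            inbox[w] = append(inbox[w], r) // send to the owning worker
        }
        for w, rows := range inbox {
            fmt.Printf("worker %d received %v\n", w, rows)
        }
    }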

  24. Performance
      ● Comparable performance against vanilla Spark on Hadoop + Parquet
        ○ Benchmarked with a pre-alpha version of TiFlash + Spark
          (without MPP support)
        ○ TPC-H 100

  25. Performance
      [Chart: per-query benchmark times, Parquet vs TiFlash.]

  26. TiDB Data Platform

  27. Traditional Data Platform
      [Diagram: the same traditional architecture as slide 3, with OLTP
      databases, big data, and NoSQL sources feeding a warehouse,
      Hadoop, a data lake, compute engines, and analytical databases via
      ETL.]
      A traditional data platform relies on a complex architecture that
      moves data around via ETL. This introduces maintenance cost and
      delays the arrival of data in the data warehouse.

  28. TiDB Data Platform
      [Diagram: the traditional architecture with TiDB plus the TiFlash
      Extension overlaid, replacing the warehouse, Hadoop, data lake,
      compute engines, and analytical databases.]
      TiDB with the TiFlash Extension collapses this stack: one platform
      serves both OLTP and analytics, with no ETL pipeline and no delay
      in data arrival.

  29. Fundamental Change
      "What happened yesterday" vs. "what's going on right now"
      ● Realtime reports for a sales campaign, adjusting prices in no time
      ● Risk management with always up-to-date info
      ● Very fast-paced replenishment based on live data and prediction

  30. Roadmap
      ● Beta / user POC in May 2019
        ○ Columnar engine and isolation ready
          ■ Access only via Spark
      ● GA by the end of 2019
        ○ Unified coprocessor layer
          ■ Ready for both TiDB / TiSpark
          ■ Cost-based access path selection
        ○ Possibly MPP layer done

  31. Thanks! Contact us: www.pingcap.com
