SparkSQL (PowerPoint PPT presentation)



  1. SparkSQL

  2. Where are we? [Stack diagram: HDFS at the bottom; Hadoop MapReduce and Spark RDD above it; Pig (Pig Latin) and Hive (HiveQL) on top of Hadoop MapReduce; a ??? layer on top of Spark RDD.]

  3. Where are we? [Same stack diagram, with SQL added as the language of the missing ??? layer on top of Spark RDD.]

  4. Shark (Spark on Hive): a small side project that aimed at running RDD jobs on Hive data using HiveQL. Still limited to the data model of Hive and tied to the Hadoop world.

  5. SparkSQL: redesigned around the Spark query model. Supports all the popular relational operators, can be intermixed with RDD operations, and uses the Dataframe API as an enhancement to the RDD API. Dataframe = RDD + schema.

  6. Dataframes: SparkSQL's counterpart to relations or tables in an RDBMS. A dataframe consists of rows and columns, but it is NOT required to be in 1NF: a column may hold a nested value such as an array or a struct. Dataframes can be created from various data sources: a CSV file, a JSON file, a MySQL database, or Hive.
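The 1NF point can be made concrete: a dataframe row may carry a column whose value is itself a collection. A minimal plain-Java sketch (no Spark; the row contents are invented for illustration):

```java
import java.util.List;
import java.util.Map;

public class NestedRowSketch {
    public static void main(String[] args) {
        // A hypothetical log row whose "tags" column holds an array,
        // which is legal in a Spark dataframe but a 1NF violation
        // in a classic relational table.
        Map<String, Object> row = Map.of(
                "host", "example.org",
                "bytes", 2048,
                "tags", List.of("static", "image"));

        // The nested values remain individually addressable.
        System.out.println(((List<?>) row.get("tags")).size()); // prints 2
    }
}
```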

  7. RDD vs. Dataframe: given a log table (host, url, method, bytes, ...), find the total number of bytes. SQL: SELECT SUM(bytes) FROM Log; With the RDD API: spark.textFile("input").map(/* split by tab */).map(/* keep the bytes */).reduce((a, b) -> a + b); With SparkSQL: spark.csv("input").groupBy().sum("bytes").
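The RDD-side pipeline on this slide can be sketched with plain Java streams, no Spark required; the two tab-separated log lines are made up for illustration:

```java
import java.util.List;

public class BytesSumSketch {
    public static void main(String[] args) {
        // Hypothetical log lines: host, url, method, bytes (tab-separated).
        List<String> lines = List.of(
                "h1\t/a\tGET\t100",
                "h2\t/b\tGET\t250");

        // RDD-style: split each line, keep the bytes field, then reduce.
        long total = lines.stream()
                .map(l -> l.split("\t"))              // split by tab
                .mapToLong(f -> Long.parseLong(f[3])) // keep the bytes
                .reduce(0L, Long::sum);               // (a, b) -> a + b

        System.out.println(total); // prints 350
    }
}
```

The declarative SparkSQL form (`spark.csv("input").groupBy().sum("bytes")`) expresses the same total without spelling out the per-record steps, which is what lets the engine optimize it.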

  8. RDD vs. SQL: [Diagram: both sides conceptually compute `int sum = 0; foreach value x { sum += x; } return sum;` over the records, but the RDD version pays for millions of function calls, one per record.]

  9. Dataframe vs. RDD:
     Dataframe: lazy execution; Spark is aware of the data model; Spark is aware of the query logic; can optimize the query.
     RDD: lazy execution; the data model is hidden from Spark; the transformations and actions are black boxes; cannot optimize the query.

  10. Built-in operations in SparkSQL: Filter (selection), Select (projection), Join, GroupBy (aggregation), Load/Store in various formats, Cache, and conversion between RDDs and dataframes (back and forth).

  11. SparkSQL Examples

  12. Project Setup (in the dependencies section of pom.xml):
      <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.2.1</version>
      </dependency>

  13. Code Setup:
      SparkSession sparkS = SparkSession
          .builder()
          .appName("Spark SQL examples")
          .master("local")
          .getOrCreate();
      Dataset<Row> log_file = sparkS.read()
          .option("delimiter", "\t")
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("nasa_log.tsv");
      log_file.show();

  14. Filter Example:
      // Select OK lines
      Dataset<Row> ok_lines = log_file.filter("response=200");
      long ok_count = ok_lines.count();
      System.out.println("Number of OK lines is "+ok_count);
      // Grouped aggregation using SQL; the dataframe must first be
      // registered as the "log_lines" view
      log_file.createOrReplaceTempView("log_lines");
      Dataset<Row> bytesPerCode = log_file.sqlContext().sql(
          "SELECT response, SUM(bytes) FROM log_lines GROUP BY response");

  15. SparkSQL Features: Catalyst query optimizer, code generation, and integration with libraries.

  16. SparkSQL Query Plan: [Pipeline diagram: a SQL AST or a DataFrame becomes an Unresolved Logical Plan; Analysis, using the Catalog, produces a Logical Plan; Logical Optimization produces an Optimized Logical Plan; Physical Planning produces candidate Physical Plans; a Cost Model picks the Selected Physical Plan; Code Generation finally emits RDDs.] DataFrames and SQL share the same optimization/execution pipeline. Credits: M. Armbrust

  17. Catalyst Query Optimizer: an extensible rule-based optimizer; users can define their own rules. [Diagram: filter push-down. Original plan: Project(name) over Filter(id = 1) over Project(id, name) over People. After push-down: Project(name) over Project(id, name) over Filter(id = 1) over People.]
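The push-down rule on this slide can be sketched as a toy rule-based rewrite in plain Java (no Spark; the Plan node types and the single rewrite rule are invented for the sketch):

```java
public class PushDownSketch {
    // Toy logical-plan nodes, invented for this sketch.
    interface Plan {}
    record Scan(String table) implements Plan {}
    record Filter(String predicate, Plan child) implements Plan {}
    record Project(String columns, Plan child) implements Plan {}

    // Rule: Filter(Project(child)) -> Project(Filter(child)),
    // applied recursively down the tree.
    static Plan pushDown(Plan p) {
        if (p instanceof Filter f && f.child() instanceof Project pr) {
            return new Project(pr.columns(),
                    pushDown(new Filter(f.predicate(), pr.child())));
        }
        if (p instanceof Project pr) {
            return new Project(pr.columns(), pushDown(pr.child()));
        }
        if (p instanceof Filter f) {
            return new Filter(f.predicate(), pushDown(f.child()));
        }
        return p; // Scan: nothing to rewrite
    }

    public static void main(String[] args) {
        // Original plan from the slide:
        // Project(name) over Filter(id = 1) over Project(id,name) over People
        Plan original = new Project("name",
                new Filter("id = 1",
                        new Project("id,name", new Scan("People"))));

        // After push-down the Filter sits directly above the Scan.
        System.out.println(pushDown(original));
    }
}
```

Catalyst generalizes this idea: rules are pattern matches over plan trees, applied until a fixed point is reached.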

  18. Code Generation: shift from black-box UDFs to expressions. Example:
      // Filter
      Dataset<Row> ok_lines = log_file.filter("response=200");
      // Grouped aggregation
      Dataset<Row> bytesPerCode = log_file.sqlContext().sql(
          "SELECT response, SUM(bytes) FROM log_lines GROUP BY response");
      Because the filter and the aggregation are given as expressions rather than opaque functions, SparkSQL understands the logic of the query and can rewrite it into more efficient generated code.

  19. Integration: SparkSQL is integrated with other high-level interfaces such as MLlib, PySpark, and SparkR. It is also integrated with the RDD interface, and the two can be mixed in one program.

  20. Further Reading:
      Documentation: http://spark.apache.org/docs/latest/sql-programming-guide.html
      SparkSQL paper: M. Armbrust et al. "Spark SQL: Relational Data Processing in Spark." SIGMOD 2015.
