
Spark (Stony Brook University CSE545, Spring 2019): slide transcript


  1. Spark Stony Brook University CSE545, Spring 2019

  2. Situations where MapReduce is not efficient:
     ● Long pipelines sharing data
     ● Interactive applications
     ● Streaming applications
     ● Iterative algorithms (optimization problems)
     [Diagram: DFS → Map → LocalFS → Network → Reduce → DFS → Map → ...]
     (Any time MapReduce would need to write to and read from disk a lot.)

  5. Spark’s Big Idea: Resilient Distributed Datasets (RDDs) -- a read-only, partitioned collection of records (like a DFS), but with a record of how the dataset was created as a combination of transformations from other dataset(s).

  6. [Diagram: RDD1 (data) is created by reading dfs://filename.]

  7. [Diagram: RDD2 (data) is created from RDD1 via transformation1(); RDD1 records that it was created from dfs://filename.]

  8. [Diagram: RDD3 (data) is created from RDD2 via transformation2(); an earlier RDD can drop its data, since its lineage records how to recreate it.]

  9. ● Enables rebuilding datasets on the fly.
     ● Intermediate datasets are not stored on disk (and are kept in memory only if needed and there is enough space) => faster communication and I/O.
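The lineage idea above can be sketched in a few lines of plain Python. This is an illustration only, not Spark's actual API: `MiniRDD`, `transform`, and `drop` are hypothetical names chosen for the sketch.

```python
# A minimal sketch of the RDD idea: each dataset remembers its lineage --
# its parent and the transformation that produced it -- so its data can
# be dropped and later rebuilt on demand. (Illustrative, not Spark's API.)

class MiniRDD:
    def __init__(self, source=None, parent=None, fn=None):
        self.source = source     # base records (e.g., lines from a file)
        self.parent = parent     # parent MiniRDD, if derived
        self.fn = fn             # transformation applied to the parent
        self._data = None        # materialized records; safe to drop

    def compute(self):
        """Materialize the records, recomputing from lineage if needed."""
        if self._data is None:
            if self.parent is None:
                self._data = list(self.source)
            else:
                self._data = list(self.fn(self.parent.compute()))
        return self._data

    def transform(self, fn):
        # Records lineage only; no work happens until compute() is called.
        return MiniRDD(parent=self, fn=fn)

    def drop(self):
        self._data = None        # fine: lineage lets us rebuild it

base = MiniRDD(source=["a", "bb", "ccc"])
lengths = base.transform(lambda recs: [len(r) for r in recs])
print(lengths.compute())   # [1, 2, 3]
lengths.drop()             # discard the data...
print(lengths.compute())   # ...and rebuild it from lineage: [1, 2, 3]
```

Dropping and recomputing is exactly what the "(can drop the data)" / "(will recreate data)" annotations on the diagrams describe.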

  10. The Big Idea: an RDD is created either from “stable storage” or from other RDDs.

  11. The Big Idea: RDDs are connected by transformations such as map, filter, join, ...

  14. [Diagram: RDD4 (data) is created from RDD2 via transformation3(); an intermediate RDD that dropped its data will recreate it from its lineage when needed.]

  16. Original Transformations: RDD to RDD. [Table of the original transformations, from: Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica. “Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing.” NSDI 2012.]

  19. Original Transformations: RDD to RDD. Original Actions: RDD to a value, an object, or storage. (Zaharia et al., NSDI 2012.)
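The split the slide draws matters because transformations are lazy: they only record lineage, and nothing runs until an action asks for a value. That behavior can be sketched in plain Python (`LazySeq` is a hypothetical stand-in, not Spark's API):

```python
# Sketch of the transformation/action split: transformations only record
# what to do; an action runs the recorded pipeline and returns a value.
# (Illustrative, not Spark's API.)

class LazySeq:
    def __init__(self, data, ops=()):
        self.data = data
        self.ops = ops           # recorded transformations, not yet run

    # Transformations: return a new LazySeq; no work is done.
    def map(self, f):
        return LazySeq(self.data, self.ops + (("map", f),))

    def filter(self, p):
        return LazySeq(self.data, self.ops + (("filter", p),))

    # Actions: run the recorded pipeline and return a plain value.
    def collect(self):
        out = list(self.data)
        for kind, f in self.ops:
            out = [f(x) for x in out] if kind == "map" else [x for x in out if f(x)]
        return out

    def count(self):
        return len(self.collect())

nums = LazySeq(range(10))
pipeline = nums.filter(lambda x: x % 2 == 0).map(lambda x: x * x)  # no work yet
print(pipeline.collect())   # [0, 4, 16, 36, 64]
print(pipeline.count())     # 5
```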

  20. Current Transformations and Actions:
      Transformations: http://spark.apache.org/docs/latest/rdd-programming-guide.html#transformations (common: filter, map, flatMap, reduceByKey, groupByKey)
      Actions: http://spark.apache.org/docs/latest/rdd-programming-guide.html#actions (common: collect, count, take)
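For intuition, the common pair-oriented transformations named above behave like the following plain-Python analogues over lists of records or (key, value) pairs (an illustration, not Spark's distributed implementation):

```python
# Plain-Python analogues of three common transformations:
# flatMap, groupByKey, and reduceByKey. (Illustrative, not Spark's API.)
from collections import defaultdict

def flat_map(records, fn):
    # flatMap: apply fn to each record and flatten the results
    return [out for rec in records for out in fn(rec)]

def group_by_key(pairs):
    # groupByKey: collect all values that share a key into one list
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return dict(groups)

def reduce_by_key(pairs, fn):
    # reduceByKey: combine values that share a key with a binary function
    out = {}
    for key, value in pairs:
        out[key] = fn(out[key], value) if key in out else value
    return out

print(flat_map(["to be", "or not"], str.split))    # ['to', 'be', 'or', 'not']
pairs = [("a", 1), ("b", 2), ("a", 3)]
print(group_by_key(pairs))                         # {'a': [1, 3], 'b': [2]}
print(reduce_by_key(pairs, lambda x, y: x + y))    # {'a': 4, 'b': 2}
```

In Spark, reduceByKey is usually preferred over groupByKey for aggregations, since values can be combined on each partition before any shuffle.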

  21. An Example: count the errors in a log file with records of the form TYPE, MESSAGE, TIME. [Diagram: lines → filter(_.startsWith(“ERROR”)) → errors → count().] (Zaharia et al., NSDI 2012.)

  22. An Example: count the errors in a log file. [Diagram: lines → filter(_.startsWith(“ERROR”)) → errors → count().] Pseudocode:
      lines = sc.textFile(“dfs:...”)
      errors = lines.filter(_.startsWith(“ERROR”))
      errors.count()
      (Zaharia et al., NSDI 2012.)
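A runnable stand-in for the slides' pseudocode, using a small in-memory list instead of sc.textFile (the log lines are made-up sample data):

```python
# Count the ERROR lines in a (toy, in-memory) log. Each record is
# TYPE \t MESSAGE \t TIME, matching the slide's schema.
log_lines = [
    "ERROR\tdisk failure\t12:01",
    "INFO\tjob started\t12:02",
    "ERROR\tHDFS timeout\t12:03",
]

# lines.filter(_.startsWith("ERROR")) in the slides:
errors = [line for line in log_lines if line.startswith("ERROR")]

# errors.count() in the slides:
print(len(errors))   # 2
```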

  23. An Example: collect the times of HDFS-related errors. Pseudocode:
      lines = sc.textFile(“dfs:...”)
      errors = lines.filter(_.startsWith(“ERROR”))
      errors.persist()
      errors.count()
      ...
      (Zaharia et al., NSDI 2012.)

  24. An Example: collect the times of HDFS-related errors. Persistence: you can specify that an RDD “persists” in memory so that other queries can use it, and you can give persistence a priority; a lower priority means the data moves to disk earlier, if needed. [Diagram: lines → filter(_.startsWith(“ERROR”)) → errors → filter(_.contains(“HDFS”)) → HDFS errors → map(_.split(‘\t’)(3)) → time fields → collect().] Pseudocode:
      lines = sc.textFile(“dfs:...”)
      errors = lines.filter(_.startsWith(“ERROR”))
      errors.persist()
      errors.count()
      ...
      (Zaharia et al., NSDI 2012.)
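What persist() buys you can be sketched as follows: a cached dataset is served from memory by later queries instead of being recomputed from its lineage each time. (Illustration only; `Dataset` and its record layout are hypothetical, not Spark's API.)

```python
# Sketch of persistence: without persist(), every query recomputes the
# dataset; after persist(), later queries reuse the in-memory copy.

class Dataset:
    def __init__(self, compute_fn):
        self.compute_fn = compute_fn   # how to (re)build the data
        self.cached = None             # in-memory copy, if persisted
        self.recomputes = 0            # how often the expensive path ran

    def persist(self):
        self.cached = self.compute_fn()
        return self

    def get(self):
        if self.cached is not None:
            return self.cached         # served from memory
        self.recomputes += 1
        return self.compute_fn()       # rebuilt from lineage

log_lines = [
    "ERROR\tHDFS timeout\t12:03",
    "INFO\tok\t12:04",
    "ERROR\tHDFS down\t12:05",
]
errors = Dataset(lambda: [l for l in log_lines if l.startswith("ERROR")])

errors.get(); errors.get()   # each query recomputes: recomputes == 2
errors.persist()
errors.get(); errors.get()   # served from cache: recomputes stays 2

# "Collect the times of HDFS-related errors" over the persisted dataset
# (in this toy schema the time is field index 2):
times = [l.split("\t")[2] for l in errors.get() if "HDFS" in l]
print(times)   # ['12:03', '12:05']
```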
