Pipelines for data analysis in R
Hadley Wickham (@hadleywickham), Chief Scientist, RStudio. October 2015. PowerPoint presentation.


  1. Pipelines for data analysis in R
     Hadley Wickham (@hadleywickham), Chief Scientist, RStudio
     October 2015

  2. Data analysis is the process by which data becomes understanding, knowledge and insight

  3. Data analysis is the process by which data becomes understanding, knowledge and insight

  4. Import → Tidy → Transform → Visualise & Model
     • Tidy: a consistent way of storing data
     • Transform: create new variables & new summaries
     • Visualise: surprises, but doesn't scale
     • Model: scales, but doesn't (fundamentally) surprise

  5. Import: readr, readxl, haven, DBI, httr
     Tidy: tidyr
     Transform: dplyr
     Visualise: ggplot2, ggvis
     Model: broom

  6. Pipelines

  7. Think it (cognitive) → Describe it (precisely) → Do it (computational)

  8. Cognition time ≫ Computation time
     (image: http://www.flickr.com/photos/mutsmuts/4695658106)

  9. magrittr::%>%
     Inspirations: Unix, F#, Haskell, Clojure, method chaining

  10. foo_foo <- little_bunny()
      bop_on(
        scoop_up(
          hop_through(foo_foo, forest),
          field_mouse
        ),
        head
      )
      # vs
      foo_foo %>%
        hop_through(forest) %>%
        scoop_up(field_mouse) %>%
        bop_on(head)

  11. x %>% f(y)           # f(x, y)
      x %>% f(z, .)        # f(z, x)
      x %>% f(y) %>% g(z)  # g(f(x, y), z)
      # Turns function composition (hard to read)
      # into sequence (easy to read)
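
These rewrite rules can be checked directly at the console. A minimal sketch using base R functions (not from the slides):

```r
library(magrittr)

x <- c(1, 4, 9, 16)

# x %>% f(y) is f(x, y): the pipe inserts x as the first argument
x %>% sqrt()            # same as sqrt(x)
x %>% round(1)          # same as round(x, 1)

# x %>% f(z, .) is f(z, x): `.` marks the slot explicitly
4 %>% seq(1, .)         # same as seq(1, 4)

# x %>% f(y) %>% g(z) is g(f(x, y), z): composition becomes a sequence
x %>% sqrt() %>% sum()  # same as sum(sqrt(x)), i.e. 1 + 2 + 3 + 4 = 10
```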

  12. # Any function can use it. Only needs a simple
      # property: the type of the first argument
      # needs to be the same as the type of the result.
      # tidyr:   pipelines for messy -> tidy data
      # dplyr:   pipelines for data manipulation
      # ggvis:   pipelines for visualisations
      # rvest:   pipelines for html
      # purrr:   pipelines for lists
      # xml2:    pipelines for xml
      # stringr: pipelines for strings
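
Any user-defined function with that property drops straight into a pipeline. A sketch (the helper `double_it` is hypothetical, invented here for illustration):

```r
library(magrittr)

# Hypothetical helper: numeric vector in, numeric vector out,
# so it composes with any other vector function
double_it <- function(x) x * 2

c(1, 2, 3) %>%
  double_it() %>%   # 2 4 6
  double_it() %>%   # 4 8 12
  sum()             # 24
```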

  13. Tidy

  14. Import Visualise readr readxl ggplot2 haven ggvis DBI httr Tidy Transform dplyr tidyr m o o r b Model

  15. Tidy data = data that makes data analysis easy
      Storage        Meaning
      Table / File   Data set
      Rows           Observations
      Columns        Variables

  16. Source: local data frame [5,769 x 22]

           iso2  year   m04  m514  m014 m1524 m2534 m3544 m4554 m5564   m65    mu   f04  f514
          (chr) (int) (int) (int) (int) (int) (int) (int) (int) (int) (int) (int) (int) (int)
       1     AD  1989    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
       2     AD  1990    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
       3     AD  1991    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
       4     AD  1992    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
       5     AD  1993    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
       6     AD  1994    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA
       7     AD  1996    NA    NA     0     0     0     4     1     0     0    NA    NA    NA
       8     AD  1997    NA    NA     0     0     1     2     2     1     6    NA    NA    NA
       9     AD  1998    NA    NA     0     0     0     1     0     0     0    NA    NA    NA
      10     AD  1999    NA    NA     0     0     0     1     1     0     0    NA    NA    NA
      11     AD  2000    NA    NA     0     0     1     0     0     0     0    NA    NA    NA
      12     AD  2001    NA    NA     0    NA    NA     2     1    NA    NA    NA    NA    NA
      13     AD  2002    NA    NA     0     0     0     1     0     0     0    NA    NA    NA
      14     AD  2003    NA    NA     0     0     0     1     2     0     0    NA    NA    NA
      15     AD  2004    NA    NA     0     0     0     1     1     0     0    NA    NA    NA
      16     AD  2005     0     0     0     0     1     1     0     0     0     0     0     0
      ..    ...   ...   ...   ...   ...   ...   ...   ...   ...   ...   ...   ...   ...   ...
      Variables not shown: f014 (int), f1524 (int), f2534 (int), f3544 (int),
      f4554 (int), f5564 (int), f65 (int), fu (int)

      What are the variables in this dataset?
      (Hint: f = female, u = unknown, 1524 = 15-24)

  17. # To convert this messy data into tidy data
      # we need two verbs. First we need to gather
      # together all the columns that aren't variables
      tb2 <- tb %>%
        gather(demo, n, -iso2, -year, na.rm = TRUE)
      tb2

  18. # Then separate the demographic variable into
      # sex and age
      tb3 <- tb2 %>%
        separate(demo, c("sex", "age"), 1)
      tb3
      # Many tidyr verbs come in pairs:
      # spread vs. gather
      # extract/separate vs. unite
      # nest vs. unnest
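
Both steps run end-to-end on a tiny invented stand-in for the WHO TB data (a sketch; the real dataset has 22 columns and 5,769 rows):

```r
library(tidyr)
library(dplyr)

# Toy stand-in: one messy column per sex/age group
tb <- tibble(
  iso2  = c("AD", "AD"),
  year  = c(2004, 2005),
  m1524 = c(0, 1),
  f1524 = c(NA, 0)
)

tb3 <- tb %>%
  gather(demo, n, -iso2, -year, na.rm = TRUE) %>%  # columns -> key/value rows
  separate(demo, c("sex", "age"), 1)               # "m1524" -> "m", "1524"
tb3
# One row per observation, with variables iso2, year, sex, age, n
```

In current tidyr these verbs still work but are superseded by `pivot_longer()` and friends.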

  19. Google for “tidyr” & “tidy data”

  20. Transform

  21. Import: readr, readxl, haven, DBI, httr; Tidy: tidyr; Transform: dplyr; Visualise: ggplot2, ggvis; Model: broom

  22. Think it (cognitive) → Describe it (precisely) → Do it (computational)

  23. One table verbs + group_by()
      • select: subset variables by name
      • filter: subset observations by value
      • mutate: add new variables
      • summarise: reduce to a single obs
      • arrange: re-order the observations
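
All five verbs plus `group_by()` compose in one pipeline. A minimal sketch on invented toy data:

```r
library(dplyr)

flights <- tibble(
  carrier   = c("AA", "AA", "UA", "UA", "UA"),
  dep_delay = c(5, NA, 20, -3, 12),
  distance  = c(1000, 1500, 800, 800, 2000)
)

by_carrier <- flights %>%
  select(carrier, dep_delay) %>%         # subset variables by name
  filter(!is.na(dep_delay)) %>%          # subset observations by value
  mutate(late = dep_delay > 0) %>%       # add a new variable
  group_by(carrier) %>%
  summarise(mean_delay = mean(dep_delay)) %>%  # reduce to one obs per group
  arrange(desc(mean_delay))              # re-order the observations
by_carrier
```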

  24. Demo

  25. Mutating:  inner_join(), left_join(), right_join(), full_join()
      Filtering: semi_join(), anti_join()
      Set:       intersect(), setdiff(), union()
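
The mutating vs. filtering distinction can be seen on two toy tables (invented data, modelled on the examples in dplyr's own documentation):

```r
library(dplyr)

members     <- tibble(name = c("Mick", "John", "Paul"),
                      band = c("Stones", "Beatles", "Beatles"))
instruments <- tibble(name  = c("John", "Paul", "Keith"),
                      plays = c("guitar", "bass", "guitar"))

# Mutating join: adds columns from instruments, keeps all members rows
members %>% left_join(instruments, by = "name")

# Filtering join: keeps only members rows with a match, adds no columns
members %>% semi_join(instruments, by = "name")
```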

  26. dplyr sources
      • Local data frame (C++)
      • Local data table
      • Local data cube (experimental)
      • RDBMS: Postgres, MySQL, SQLite, Oracle, MS SQL, JDBC, Impala
      • MonetDB, BigQuery

  27. Google for “dplyr”

  28. Visualise

  29. Import: readr, readxl, haven, DBI, httr; Tidy: tidyr; Transform: dplyr; Visualise: ggplot2, ggvis; Model: broom

  30. What is ggvis?
      • A grammar of graphics (like ggplot2)
      • Reactive (interactive & dynamic) (like shiny)
      • A pipeline (a la dplyr)
      • Of the web (drawn with vega)

  31. Demo 4-ggvis.R 4-ggvis.Rmd

  32. Google for “ggvis”

  33. Model with broom, by David Robinson

  34. Import: readr, readxl, haven, DBI, httr; Tidy: tidyr; Transform: dplyr; Visualise: ggplot2, ggvis; Model: broom

  35. 46 TX cities, ~25 years of data
      [Plot: log(sales) vs. date, 1990-2015]
      What makes it hard to see the long term trend?

  36. # Models are useful as a tool for removing
      # known patterns
      tx <- tx %>%
        group_by(city) %>%
        mutate(
          resid = lm(
            log(sales) ~ factor(month),
            na.action = na.exclude
          ) %>% resid()
        )
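
The same per-group pattern runs on a toy stand-in for the `tx` housing data (invented numbers, just to show the mechanics): within each city, `resid` is log(sales) with the fitted monthly seasonal pattern removed.

```r
library(dplyr)

set.seed(1)
tx <- tibble(
  city  = rep(c("Abilene", "Austin"), each = 24),
  month = rep(1:12, times = 4),           # two years of months per city
  sales = exp(rnorm(48, mean = 5, sd = 0.5))
)

tx <- tx %>%
  group_by(city) %>%
  mutate(resid = resid(lm(log(sales) ~ factor(month),
                          na.action = na.exclude)))
# Least-squares residuals average zero within each city
```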

  37. [Plot: resid vs. date, 1990-2015]

  38. # Models are also useful in their own right
      models <- tx %>%
        group_by(city) %>%
        do(mod = lm(
          log(sales) ~ factor(month),
          data = ., na.action = na.exclude
        ))

  39. Model summaries
      • Model level: one row per model
      • Coefficient level: one row per coefficient (per model)
      • Observation level: one row per observation (per model)
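
These three levels map onto broom's three main verbs. A sketch on a built-in dataset (mtcars, not the talk's housing data):

```r
library(broom)

fit <- lm(mpg ~ wt, data = mtcars)

glance(fit)   # model level: one row per model (r.squared, AIC, ...)
tidy(fit)     # coefficient level: one row per coefficient
augment(fit)  # observation level: one row per observation
```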

  40. Demo 5-broom.R

  41. Google for “broom r”

  42. Big data and R

  43. Big:    can’t fit in memory on one computer (>5 TB)
      Medium: fits in memory on a server (10 GB - 5 TB)
      Small:  fits in memory on a laptop (<10 GB). R is great at this!

  44. R
      • R provides an excellent environment for rapid interactive exploration of small data.
      • There is no technical reason why it can’t also work well with medium size data. (But the work mostly hasn’t been done)
      • What about big data?

  45. 1. Can be reduced to a small data problem with subsetting/sampling/summarising (90%)
      2. Can be reduced to a very large number of small data problems (9%)
      3. Is irreducibly big (1%)
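
The first reduction is often just a sample or a grouped summary; a sketch with invented data:

```r
library(dplyr)

big <- tibble(
  id    = 1:1e6,
  group = rep(1:100, length.out = 1e6),
  value = rnorm(1e6)
)

# Subsetting/sampling: explore a manageable slice interactively
small <- big %>% sample_n(1e4)

# Summarising: reduce to one row per group before analysis
by_group <- big %>%
  group_by(group) %>%
  summarise(mean_value = mean(value))
```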

  46. The right small data
      • Rapid iteration is essential
      • dplyr supports this activity by avoiding the cognitive costs of switching between languages.

  47. Lots of small problems
      • Embarrassingly parallel (e.g. Hadoop)
      • R wrappers like foreach, rhipe, rhadoop
      • The challenge is matching the architecture of the computing to the data storage.

  48. Irreducibly big
      • Computation must be performed by a specialised system.
      • Typically C/C++, Fortran, Scala.
      • R needs to be able to talk to those systems.

  49. Future work

  50. End game
      Provide a fluent interface where you spend your mental energy on the specific data problem, not the general process of data analysis. The best tools become invisible with time!
      Still a lot of work to do, especially on the connection between modelling and visualisation.

  51. Import: readr, readxl, haven, DBI, httr; Tidy: tidyr; Transform: dplyr; Visualise: ggplot2, ggvis; Model: broom
