Continuations, All The Way Down
Tim Humphries, Ambiata

Continuations, All The Way Down. Tim Humphries, Ambiata. @thumphriees / teh.id.au  Hi! My name's Tim. I'm an engineer at Ambiata, here in Sydney. I gave my talk the wrong title; I'm not going to spend much time talking about CPS, shift/reset, …


  1-12. ll ++ rr = case ll of
          (a:bc) -> a : bc ++ rr
          []     -> rr

        (diagram: the two append trees evaluating element by element: 1:, 2:, 3:…)

  Both of these expressions have a spine comprised of two appends. If we take a look at the implementation of append, we see it never touches the right branch. It walks down the left branch until it hits an element, and returns that element in a cons; the rest of the list is a recursive call to append. That is, until the list on the left is eliminated, the tree's spine stays the same. With this in mind, let's see how they evaluate. XS, on the left, can pull an element out in constant time. YS, on the right, takes an extra step.
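  A worked reduction makes the asymmetry concrete. This is my own sketch, directly following the definition of (++) above; xs names the right-associated tree and ys the left-associated one.

    xs, ys :: [Int]
    xs = [1,2,3] ++ ([4,5,6] ++ [7,8,9])   -- right-associated
    ys = ([1,2,3] ++ [4,5,6]) ++ [7,8,9]   -- left-associated

    -- Right-associated: one case analysis exposes the head.
    --   [1,2,3] ++ ([4,5,6] ++ [7,8,9])
    --     = 1 : ([2,3] ++ ([4,5,6] ++ [7,8,9]))
    --
    -- Left-associated: the outer append must force the inner one first,
    -- so every element of the leftmost list is handled twice.
    --   ([1,2,3] ++ [4,5,6]) ++ [7,8,9]
    --     = (1 : ([2,3] ++ [4,5,6])) ++ [7,8,9]
    --     = 1 : (([2,3] ++ [4,5,6]) ++ [7,8,9])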

  13-16. xs ++ ys = case xs of
           (x:xx) -> x : xx ++ ys
           []     -> ys

         (diagram: the left-associated tree is worse, by an additional 30-odd steps)

  This isn't much of a problem for our little expression. However, for deeper append trees, it could lead to substantial overhead. Here we have two larger append trees, associated right and associated left. They have the same spine, three appends. The RHS is significantly worse: almost twice as many operations.
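  To watch the overhead grow, here is a small sketch of my own (not from the talk; chunk, leftNested and rightNested are hypothetical names). A fully left-nested chain re-traverses the accumulated prefix at every level, so forcing it should slow down visibly as n grows, while the right-nested chain stays roughly linear.

    chunk :: Int -> [Int]
    chunk i = [i * 10 .. i * 10 + 9]

    leftNested, rightNested :: Int -> [Int]
    leftNested  n = foldl (++) [] (map chunk [1..n])  -- ((c1 ++ c2) ++ c3) ++ ...
    rightNested n = foldr (++) [] (map chunk [1..n])  -- c1 ++ (c2 ++ (c3 ++ ...))

  In GHCi, compare length (leftNested 5000) against length (rightNested 5000) with :set +s.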

  17-21. doit_rec :: Int -> Writer [String] ()
         doit_rec 0 = pure ()
         doit_rec x = do
           doit_rec (x-1) -- left-associated bind!
           tell ["Message " ++ show x]

         snd . runWriter

         (benchmark plot: awful)

  A bad, left-associated append might be created without our knowledge. A beginner will encounter this when using Writer. Writer expects a Monoid, and lists are the most popular Monoid. Behind every bind is an append, so the associativity of our monadic code suddenly matters. This is a particularly bad expression: it builds a deeply left-associated bind, then calls tell. This leads to a left-biased tree whose spine is all binds. When we run the writer, we get a left-biased append with the same structure. When we run this as a benchmark, we see performance is pathological.
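  For reference, a runnable version of the pathological program, assuming the mtl package for Control.Monad.Writer; main and the input size are my own scaffolding, and it is expected to be slow, since that is the point.

    import Control.Monad.Writer (Writer, runWriter, tell)

    -- The recursive call comes first, so the binds (and therefore the
    -- appends in the accumulated log) associate to the left.
    doit_rec :: Int -> Writer [String] ()
    doit_rec 0 = pure ()
    doit_rec x = do
      doit_rec (x - 1) -- left-associated bind!
      tell ["Message " ++ show x]

    main :: IO ()
    main = print (length (snd (runWriter (doit_rec 10000))))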

  22. . (compose) I’m going to solve this problem using the greatest function of all, compose.

  23-31. ghci> :t ([1..10] ++)
         [Int] -> [Int]        -- a suspended append

         ghci> ([1..10] ++) $ []
         [1,2,3,4,5,6,7,8,9,10]

         ghci> :t ([1..10] ++) . ([11..20] ++)
         [Int] -> [Int]

         ghci> ([1..10] ++) . ([11..20] ++) $ []
         [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]

  What I want to do is suspend an append. If we look at the type of a suspended append, we see it's a function expecting another list. We can finalise this suspended append by applying it to the empty list. We can compose two suspended appends together, and finalise the pipeline the same way. Note how the appends have composed together.

  32-37. ([1..10] ++) . ([11..20] ++) $ []

         f . g = \x -> f (g x)
         f $ x = f x

  Claim: composed pipelines of appends are always right-associated. The tree got bigger. The definition of function composition is right-associated function application. If we expand through the tree, we see the claim holds for this simple example: we eliminate the compose operator fairly quickly, and are then left with the optimal append chain. You really need to try this trick by hand to get a feel for it, so I won't linger.

  38-43. (([1..10] ++) . ([11..20] ++)) . ([21..30] ++) $ []

         f . g = \x -> f (g x)
         f $ x = f x

  It still works when we create a left-associated append chain: compose reorients everything into the right-biased tree. The tree is even bigger. We substitute in for compose and proceed as we did before; we now have a right-biased append chain expecting a final function, so we compose again. Things have propagated in a way you may have found surprising; again, this works because of the way compose is defined. Try it at home. We get the good tree at the end. Success!
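  Trying it at home, here is one way the expansion goes, using only the two definitions above. The names a, b and c are my own abbreviations, and check merely confirms the two forms agree.

    a, b, c :: [Int] -> [Int]
    a = ([1..10] ++)
    b = ([11..20] ++)
    c = ([21..30] ++)

    --   ((a . b) . c) $ []
    --     = ((a . b) . c) []       -- f $ x = f x
    --     = (a . b) (c [])         -- f . g = \x -> f (g x)
    --     = a (b (c []))           -- f . g = \x -> f (g x)
    --     = [1..10] ++ ([11..20] ++ ([21..30] ++ []))
    -- The appends end up fully right-associated, however the composes were nested.

    check :: Bool
    check = (((a . b) . c) $ []) == [1..10] ++ ([11..20] ++ ([21..30] ++ []))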

  44. Monoid w => Monad (Writer w)  We can't use functions in a Writer, so we haven't really fixed our problem yet. We need a type.

  45-53. newtype AppendK a = AppendK { unAppendK :: [a] -> [a] }

         fromList :: [a] -> AppendK a
         fromList xs = AppendK (xs ++)   -- suspended append

         toList :: AppendK a -> [a]
         toList (AppendK k) = k []

  My new type is called AppendK. It's just a newtype around a suspended append. We hoist a list into AppendK by suspending an append: partial application. We finalise our pipeline by applying it to the empty list.

  54-58. newtype AppendK a = AppendK { unAppendK :: [a] -> [a] }

         append :: AppendK a -> AppendK a -> AppendK a
         append (AppendK xs) (AppendK ys) = AppendK (xs . ys)

         instance Monoid (AppendK a) where
           mappend = append
           mempty  = AppendK id

         doit_rec :: Int -> Writer (AppendK String) ()

         (benchmark plot: ~linear time)

  Lastly, we need to write an append function. As before, composing these suspended appends gives us a valid append operation, which means we get constant-time append. We can then trivially write a Monoid instance and use it to solve our problem. We see our microbenchmark has been fixed.
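  Putting the pieces together, a minimal runnable sketch of the fix. On modern GHC, Monoid requires a Semigroup instance, so the details differ slightly from the slides; doit_fixed, main and the input size are my own scaffolding.

    import Control.Monad.Writer (Writer, runWriter, tell)

    newtype AppendK a = AppendK { unAppendK :: [a] -> [a] }

    fromList :: [a] -> AppendK a
    fromList xs = AppendK (xs ++)   -- suspend the append

    toList :: AppendK a -> [a]
    toList (AppendK k) = k []       -- finalise against the empty list

    instance Semigroup (AppendK a) where
      AppendK f <> AppendK g = AppendK (f . g)  -- constant-time append

    instance Monoid (AppendK a) where
      mempty = AppendK id

    doit_fixed :: Int -> Writer (AppendK String) ()
    doit_fixed 0 = pure ()
    doit_fixed x = do
      doit_fixed (x - 1)                       -- still left-associated...
      tell (fromList ["Message " ++ show x])   -- ...but the appends now compose

    main :: IO ()
    main = print (length (toList (snd (runWriter (doit_fixed 10000)))))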

  59. newtype AppendK m a = AppendK { unAppendK :: m a -> m a }

      fromList :: Monoid (m a) => m a -> AppendK m a
      fromList xs = AppendK (xs <>)

      toList :: Monoid (m a) => AppendK m a -> m a
      toList (AppendK k) = k mempty

      (Benchmark!)

  The only list operations we were using were append and mempty, so we can generalise AppendK to work over any Monoid. It might not be an optimisation for all Monoids! Benchmark first.
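  The generalised instances follow the same shape as before; this is my sketch rather than the talk's code, again with Semigroup added for modern GHC.

    newtype AppendK m a = AppendK { unAppendK :: m a -> m a }

    instance Semigroup (AppendK m a) where
      AppendK f <> AppendK g = AppendK (f . g)

    instance Monoid (AppendK m a) where
      mempty = AppendK id

    -- At m = [], fromList and toList from the slide give, for example:
    --   toList (fromList [1,2] <> fromList [3,4]) == [1,2,3,4]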

  60. <$> (fmap) Time to talk about my second-favourite function, fmap.

  61-65. fmap isEven (fmap (+1) [1..100])
         fmap (isEven . (+1)) [1..100]

         let xs = fmap isEven (fmap (+1) [1..100])
             ys = fmap (isEven . (+1)) [1..100]
         in xs == ys -- True

         fmap f . fmap g == fmap (f . g)

  Are these two expressions the same? Again, same trick: if we ask GHC the wrong question, it will say yes. We also know they're equal because we know the Functor laws; fmap fusion is supposed to be a valid transformation. Let's draw the trees. Observe that the first tree has two fmap nodes; the second has just one. It's been fused. This changes the way they evaluate.
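  A quick runnable check of the two forms; isEven is not defined in the slides, so I assume it is simply even.

    isEven :: Int -> Bool
    isEven = even

    main :: IO ()
    main = print (fmap isEven (fmap (+1) [1 .. 100 :: Int])
                    == fmap (isEven . (+1)) [1 .. 100])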

  66-83. instance Functor [] where
           fmap _ []     = []
           fmap f (x:xs) = f x : fmap f xs

         fmap f . fmap g == fmap (f . g)

         tree shape never changes (fmap rebuilds it)

         (diagram: both trees produce isEven (1+1) : isEven (2+1) : isEven (3+1) : …, but the unfused tree walks two fmap nodes per element)

  Let's look at the code for fmap of list. Observe, like append, that it recursively calls itself until the list is exhausted. This means that the spine, made out of fmaps, does not change until the list is exhausted. If we step through the left tree, we see it has to traverse every fmap node to produce a single value. The right tree can produce new values in constant time. It probably also saves some heap space.
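  Stepping through one element by hand with the instance above makes the difference visible (my own derivation; check it against the definition):

    -- Unfused: each element forces two fmap spine nodes and allocates an
    -- intermediate cons cell.
    --   fmap isEven (fmap (+1) (1:rest))
    --     = fmap isEven ((1+1) : fmap (+1) rest)         -- inner fmap
    --     = isEven (1+1) : fmap isEven (fmap (+1) rest)  -- outer fmap
    --
    -- Fused: one spine node per element, no intermediate cons.
    --   fmap (isEven . (+1)) (1:rest)
    --     = (isEven . (+1)) 1 : fmap (isEven . (+1)) rest
    --     = isEven (1+1) : fmap (isEven . (+1)) rest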
