# Reduce is flatMap

I used to think of `reduce` and `flatMap` as two independent fundamental functions. But what if they are two versions of the same function?

``````
flatMap :: (a -> [b]) -> [a] -> [b]
reduce  :: (b -> a -> b) -> b -> [a] -> b
``````

Do they look similar? Not at all.

We are not going to use the `Functor` and `Foldable` abstractions for now.

To begin with, `flatMap` is a terrible name. Yes, `flatMap = flatten . map`, but names should not be derived from implementations; that's one of the basics of naming things.

Describe what a function does rather than how it's built.

"But `flatMap` flattens and maps!" you say.

Not exactly. Ironically, it's easy to increase nesting with `flatMap` but impossible to decrease it.

``````
flatMap (\x -> x)       [1, 2, 3] -- -1 nesting level... ERROR
flatMap (\x -> [x])     [1, 2, 3] --  0 same nesting
flatMap (\x -> [[x]])   [1, 2, 3] -- +1 nesting level
flatMap (\x -> [[[x]]]) [1, 2, 3] -- +2 nesting levels
``````

Now let's look at implementations:

``````
flatMap :: (a -> [b]) -> [a] -> [b]
flatMap f [] = []
flatMap f (x : xs) = f x ++ flatMap f xs
``````
``````
reduce :: (a -> b -> b) -> b -> [a] -> b
reduce f z [] = z
reduce f z (x : xs) = f x (reduce f z xs)
``````

In ECMAScript, `Array.reduce` accepts arguments in the `acc, item` order. In Haskell there are plenty of `reduce` versions; some of them are left-associative and some are right-associative.
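
For instance, Haskell's standard `foldl` and `foldr` differ in both argument order and associativity. A quick sketch with subtraction (chosen because it is not commutative, so the difference actually shows):

```haskell
-- foldl :: (b -> a -> b) -> b -> [a] -> b  -- acc first, left-associative
-- foldr :: (a -> b -> b) -> b -> [a] -> b  -- acc second, right-associative
main :: IO ()
main = do
  print (foldl (-) 10 [1, 2, 3]) -- ((10 - 1) - 2) - 3 =  4
  print (foldr (-) 10 [1, 2, 3]) -- 1 - (2 - (3 - 10)) = -8
```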

``````
reduceFn :: b -> a -> b -- b is the accumulator type
-- or --
reduceFn :: a -> b -> b -- same, with arguments flipped
``````

Technically speaking, they are equivalent; who cares about argument order nowadays? But the second version is interesting because partially applying it to one element yields an endomorphism `b -> b`. Keeping that in mind, let's move along.

You have probably tried to express `flatMap` in terms of `reduce`.
It's rather trivial:

``````
flatMap f = reduce ((++) . f) []
``````
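
As a sanity check, here is a runnable version using the standard `foldr`, which matches our right-associative `reduce`:

```haskell
flatMap :: (a -> [b]) -> [a] -> [b]
flatMap f = foldr ((++) . f) [] -- (++) . f == \x acc -> f x ++ acc

main :: IO ()
main = print (flatMap (\x -> [x, x * 10]) [1, 2, 3]) -- [1,10,2,20,3,30]
```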

Now let's try to express `reduce` in terms of `map`.

``````
reduce f z xs = ???
``````

The only thing that comes to mind is

``````
reduce f z xs = ??? (map f xs) ???
``````

but is this even possible, given that the `f` for `map` and the `f` for `reduce` take different numbers of arguments?

On second thought, yes. We just end up with a list of functions:

``````
map (+) [1, 2, 3] -- [(+1), (+2), (+3)]
``````
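
To convince ourselves these really are functions, we can apply each one to a sample value (`10` here is an arbitrary choice):

```haskell
fs :: [Int -> Int]
fs = map (+) [1, 2, 3] -- a list of partially applied additions

main :: IO ()
main = print (map ($ 10) fs) -- apply each function to 10: [11,12,13]
```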

Looks like an intermediate step. Now we just need to somehow merge those functions together, and also do something about the initial value.

We merge functions by composing them, so let's write a `compose` function

``````
compose :: [b -> b] -> (b -> b)
compose [] = id
compose (f : fs) = f . compose fs
``````

which basically does `f1 . f2 . f3 . f4 . f5` for an imaginary list of five functions (the last function in the list is applied first).

So if we compose the functions from that list and apply the result to the initial value...
should it reproduce the original `reduce` behavior? Let's check.

``````
reduce f z xs = compose (map f xs) z
---
reduce (+) 0 [1, 2, 3] -- 6
reduce (*) 1 [1, 2, 3] -- 6
``````
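
Here is a fully runnable version of this check, with `compose` written as `foldr (.) id`, which is equivalent to the recursive definition above. A non-commutative `f` like `(:)` also confirms that element order is preserved:

```haskell
compose :: [b -> b] -> (b -> b)
compose = foldr (.) id -- same as the recursive definition

reduce :: (a -> b -> b) -> b -> [a] -> b
reduce f z xs = compose (map f xs) z

main :: IO ()
main = do
  print (reduce (+) 0 [1, 2, 3]) -- 6
  print (reduce (*) 1 [1, 2, 3]) -- 6
  print (reduce (:) [] "abc")    -- "abc": order preserved, not reversed
```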

It works! Interesting.

Now `flatMap` is like

``````
flatMap f xs = (ys !! 0) ++ (ys !! 1) ++ (ys !! 2) ++ ...
  where ys = map f xs
``````

while `reduce` is like

``````
reduce f z xs = (ys !! 0) . (ys !! 1) . (ys !! 2) . ... $ z
  where ys = map f xs
``````

And if we remove the accumulator and just return the final function

``````
reduce f xs = (ys !! 0) . (ys !! 1) . (ys !! 2) . ...
  where ys = map f xs
``````

Oops. Now it's obvious that the only difference between `flatMap` and `reduce` is the "glue" operation: `++` for `flatMap` and `.` for `reduce`.
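
To make the point concrete, we can pull the glue out as a parameter. `withGlue` below is a hypothetical helper invented just for this sketch:

```haskell
-- A hypothetical helper: map each element, then combine the results
-- with an arbitrary "glue" operation and its identity element.
withGlue :: (c -> c -> c) -> c -> (a -> c) -> [a] -> c
withGlue glue z f = foldr (glue . f) z

flatMap' :: (a -> [b]) -> [a] -> [b]
flatMap' = withGlue (++) []  -- glue lists with ++

reduce' :: (a -> b -> b) -> [a] -> (b -> b)
reduce' = withGlue (.) id    -- glue endomorphisms with .

main :: IO ()
main = do
  print (flatMap' (\x -> [x, x]) [1, 2]) -- [1,1,2,2]
  print (reduce' (+) [1, 2, 3] 0)        -- 6
```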

In this particular case `++` "glues" lists and `.` "glues" functions. Which functions? That depends on the `reduceFn` argument order; with the `a -> b -> b` order, each partially applied `f x` is an endomorphism `b -> b`. We'll concentrate on that case.

`b -> b` is an endomorphism, and endomorphisms form a straightforward monoid under composition.

Lists have the identity element `[]` (the empty list), and endomorphisms have the identity element `id` (the identity function).

``````
[] ++ [] ++ [] == []

id . id . id == id
``````

Can't we rely on that to abstract things further? Yes we can.

``````
[1] ++ [2] ++ [3] == mconcat [[1], [2], [3]] == [1, 2, 3]

(+1) . (+2) . (+3) == mconcat [(+1), (+2), (+3)] == (\x -> ((x + 3) + 2) + 1)
``````

Well, the latter does not actually work, because in Haskell `a -> a` has no predefined composition monoid (the predefined instance for functions combines results pointwise instead). One possible workaround is `Endo`.

``````
import Data.Monoid

mconcat [Endo (+1), Endo (+2), Endo (+3)] `appEndo` 0 == 6
``````

So we have a candidate function which should behave (almost) like both `flatMap` and `reduce`, given the right arguments.

``````
fancyMap f xs = mconcat (map f xs)
``````

Check:

``````
fancyMap (\x -> [2 * x]) [1, 2, 3] -- [2, 4, 6]

fancyMap (Endo . (+)) [1, 2, 3] `appEndo` 0 -- 6
``````

We can also mess with GHC a tiny bit:

``````
{-# LANGUAGE FlexibleInstances #-}

-- Since GHC 8.4, Monoid requires a Semigroup instance, so we define both.
instance {-# OVERLAPPING #-} Semigroup (a -> a) where
  (<>) = (.)

instance {-# OVERLAPPING #-} Monoid (a -> a) where
  mempty = id
``````

Behold:

``````
fancyMap (\x -> [2 * x]) [1, 2, 3] == [2, 4, 6]

fancyMap (+) [1, 2, 3] 0 == 6
``````

Now you see how `flatMap` and `reduce` are the same thing ;)

We can definitely abstract away the remaining list-specific stuff

``````
import Data.Monoid
import Data.Foldable

fmap :: Functor f => (a -> b) -> f a -> f b -- abstracted `map`
fold :: (Foldable t, Monoid m) => t m -> m  -- abstracted `mconcat`

fancyMap :: (Monoid m, Functor f, Foldable f) => (a -> m) -> f a -> m
fancyMap f xs = fold (fmap f xs)
``````

and Haskell goes even further, removing the `Functor` requirement, but the more abstractions we pile up, the more esoteric it becomes.
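
That further abstraction is the standard `foldMap` from `Data.Foldable` (re-exported by the Prelude), which needs only `Foldable`, and therefore works on containers other than lists:

```haskell
import Data.Monoid (Sum (..))

main :: IO ()
main = do
  print (foldMap (\x -> [x, x]) [1, 2, 3]) -- [1,1,2,2,3,3]
  print (getSum (foldMap Sum (Just 5)))    -- 5: Maybe is Foldable too
```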

Enough for today. I hope it was useful. At the very least, now you can explain why it's possible to both map and filter with `reduce` and `flatMap`.