In previous videos, you learned a lot about pure functions and why they are so valuable in functional programming. Now, take a look at a type of pure function called a combinator. Combinators encapsulate common patterns, allowing you to perform much more complex and interesting operations inside compositions by combining logic.
- [Instructor] Hello. In the previous section, we learned about composition. Composing functions together allows us to write point-free functions, those which don't have explicit arguments defined. In this section, we'll be continuing our use of composition. However, we won't be able to build applications using only the compose utility that we have. Most programs aren't that simple. To start off this section, we're going to look at the definition of combinators, what they are, and what they're useful for.
From there we'll look at how combinators are able to assist with inspecting values that are flowing through our composition pipelines. Composition is simple output to input, but life isn't so simple. We'll see combinators that can help us to join outputs or to fork inputs. And lastly, we'll check out control flow using combinators for situations where input might be passed to certain functions that depend on some other conditions, so if/else-type logic but within composition.
Let's get started looking at combinators, what they are, and how they can benefit us. First we'll look at what combinators are. We'll hear the generic definition, which is very apt and makes perfect sense. And it'll make even more sense once we've worked with a few, so don't worry if the definition sounds strange at first. And then we'll quickly go over why combinators are different from the other, run-of-the-mill functions that we've been using.
The simplicity of most combinators makes them appear less flexible than they actually are. In fact, combinators can express just about anything we could ever hope to express, which isn't to say that we want to refactor everything using combinators, but we should figure out why they are important in our functional journey and what we can learn from them, what we can take away from them. And once we know what combinators are and why they're useful, then we'll have a good idea of when they should be used.
We've seen combinators before, too. Actually, we've seen a combinator very, very, very recently. I won't ruin the surprise, or the suspense. You'll find out what it is soon enough. So first, here's that definition I promised you of a combinator. A combinator is a pure function that doesn't have any free variables. That's all; that's the definition. It's very precise. And, like many useful combinators, it's very short.
The term free variables simply refers to variables within the context, context meaning scope, of a function that aren't explicitly passed in as arguments. So in this example, we're writing a function called freeVars. This function takes a single argument, a, and then concatenates the variable b in a string with a as the function's output.
The variable b is a free variable. Think of "free" either as the variable b coming free with our function, or as b just dangling there in the wind. Any code which calls the freeVars function has no way to modify or access that b variable. So freeVars is not a combinator. How about if we used a variable c in our freeVars function? Is that a free variable? The value of c is not defined in the context of the freeVars function.
It can be touched from outside of freeVars, but the identifier, the lexical token c, is included in the body of our freeVars function. Even if we wanted to sit here and argue over the free status of the c variable, it would be a moot point, because the freeVars function is no longer pure. It's referencing a value outside of its scope, the c variable, so now we have an impure function. So freeVars couldn't meet our definition of a combinator anyway.
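To make this concrete, here's a minimal sketch of what the freeVars example might look like. The names a, b, and c come from the transcript; the string values are hypothetical:

```javascript
// `b` is referenced inside freeVars but never passed in as an
// argument, so it is a free variable — freeVars is not a combinator.
const b = 'free';
const freeVars = a => b + a;

freeVars('!'); // 'free!' — callers can't modify or even see `b`

// If freeVars instead referenced a mutable outer variable `c`,
// it would also be impure: outside code can change its output.
let c = 'impure';
const alsoFree = a => c + a;
c = 'changed';          // outside code just altered the result
alsoFree('!');          // 'changed!' — impure, and not a combinator
```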
And that definition, again, was a pure function with no free variables. Remember the compose function? This seems like a good candidate for a combinator. We see that compose has a bunch of variables inside of its body. It has x, f, and g, three variables, and none of them are defined inside of the compose function. f and g are passed in as the composed functions, so compose returns a new function built from f and g.
Even the x variable will be defined outside of the compose function and then passed in to the composed function. We pass x in from wherever we use the returned function, so wherever we use the composed function. Let's extend our definition to say that combinators are pure functions. They have no free variables. All variables inside combinators are passed through arguments.
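As a sketch, here's the two-function compose from the previous section. Every variable in its body (f, g, and x) arrives through an argument, which is what makes it a combinator. The inc and double helpers are hypothetical examples for illustration:

```javascript
// compose is a combinator: f, g, and x are all passed in as arguments.
const compose = (f, g) => x => f(g(x));

const inc = n => n + 1;       // example function
const double = n => n * 2;    // example function

const incThenDouble = compose(double, inc);
incThenDouble(3); // 8 — double(inc(3))
```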
In a typical imperative implementation of map, we're defining a variable, rv, for the result, as well as the iterator variable i. And we're also making many assignments to these variables. Map should be able to be a combinator, though. Some very smart people have said so. So let's see if we can prove them right, at least by our working definition. Maybe we can't write map using imperative loops without free variables, but we can try it another way.
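For reference, this is a sketch of the imperative map being questioned here, with the rv and i variables the transcript mentions declared and repeatedly assigned inside the body:

```javascript
// An imperative map: `rv` and `i` are declared in the body and
// assigned on every iteration — this is the style being questioned.
const map = (fn, list) => {
  let rv = [];
  for (let i = 0; i < list.length; i++) {
    rv[i] = fn(list[i]);
  }
  return rv;
};

const tens = map(n => n * 10, [1, 2, 3]); // [10, 20, 30]
```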
We could try to implement map using recursive functions, or making map a recursive function. And we covered recursion very briefly in the first volume, but we didn't go too deep into it. Recursive functions are functions that call themselves. So we could write a map function without any free variables by using recursion. So a map function that calls itself. Our recursive map function could take a function and a list just like any other map function would, standard stuff.
We have the same argument signature. The first thing the map function would do would be to check if the list that was passed in has any items in it. So if there's items, then we want to map over them. We want to take the first one out of the list and apply our function over it, so our mapping function, whatever it may be. That will map the first item of list into the first index of the final returned array.
The rest of the array is another call to our map function, this is what makes it recursive, with the remaining items on the list. So that's what we put into it. So it's our list minus the item we just mapped over. That's why we use the dot, dot, dot rest notation to grab up the rest of the list into y. And we use dot, dot, dot spread to spread the final results that come back from calling map again with that same function but with a smaller list.
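Putting those pieces together, a sketch of the recursive map described above might look like this, assuming the destructuring style the transcript implies for x and y:

```javascript
// A recursive map: map the head of the list, then recurse on the rest.
const map = (fn, list) => {
  if (list.length === 0) return []; // base case: nothing left to map
  const [x, ...y] = list;           // head into x, rest into y
  return [fn(x), ...map(fn, y)];    // spread the recursive result
};

map(n => n + 1, [3, 4, 5]); // [4, 5, 6]
```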
So each call to map passes in a list that's smaller by one element. This is how we know that eventually we'll call it with an empty array, and that'll wrap up the recursion. If we pass map a list that's empty, the function doesn't have any items to transform. So it's complete. And we can just return that empty list, or we can return a new empty list. To make this more clear, we can call map with a function kaplow and a list with the values three, four, and five.
This call will check that the list has length. And since it does, it will return an array whose initial value is the result of applying kaplow to three, spread together with the result of calling map with kaplow and the rest of the list, four and five. So what's the result of that inner call? Well, let's see. It's an array whose first value is kaplow applied to four, and the rest of the array elements come from yet another call to map with kaplow.
And the remaining list items, which is only the number five. So the result of that call is an array whose first value is kaplow applied to five, spread together with the result of calling map with kaplow and the remaining list items, which are none. So it's just an empty list. Calling map with kaplow and an empty list returns another empty list, and now the recursion will begin to wrap up.
This map call returns and so the previous call to map can return, and then the call prior to that can return. And now here is the final result. All those spread operators are the same as concatenating arrays. They flatten out. So in the end we have one array with the results of calling kaplow over all the items in the list. It's just map. But now it fits our definition of a combinator a little better, although we're declaring variables.
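The walkthrough above can be written out as an expansion. The transcript never defines kaplow, so assume it's some unary function, here hypothetically doubling its input; the slice-based map matches the variation mentioned later in this section:

```javascript
const kaplow = n => n * 2; // a stand-in mapping function (assumed)

const map = (fn, list) =>
  list.length ? [fn(list[0]), ...map(fn, list.slice(1))] : [];

// map(kaplow, [3, 4, 5]) expands step by step as:
// [kaplow(3), ...map(kaplow, [4, 5])]
// [kaplow(3), ...[kaplow(4), ...map(kaplow, [5])]]
// [kaplow(3), ...[kaplow(4), ...[kaplow(5), ...map(kaplow, [])]]]
// [kaplow(3), ...[kaplow(4), ...[kaplow(5), ...[]]]]  // base case hit
// The spreads flatten, just like concatenating arrays:
map(kaplow, [3, 4, 5]); // [6, 8, 10]
```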
Do those count as free variables? And if so, we could change that variable declaration into plain old array access for the call to fn, and use slice to get our list with one less item for the recursive map. So it almost doesn't matter. Regardless of whether we have free variables or not, we are performing the same actions and getting the same result. We could create combinators to get the first list item and the tail of our list.
Those are indisputable combinators. They have no free variables. They do no real work besides returning the remaining items of an array for tail and the first item of an array for head. Both of these functions are available in Ramda, by the way, tail and head. If we have these two combinators, then we might as well use them in our map function, right? Only now, map technically isn't a combinator anymore.
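A sketch of those two combinators, and of the map that uses them, might look like this (head and tail match the Ramda functions named above; the map shown is this section's recursive version, not Ramda's):

```javascript
// head and tail are pure and have no free variables — combinators.
const head = ([x]) => x;        // first item of an array
const tail = ([, ...xs]) => xs; // everything after the first item

// Using them keeps map DRY, but map now references identifiers
// that are not passed in as arguments — so, strictly speaking,
// this map is no longer a combinator.
const map = (fn, list) =>
  list.length ? [fn(head(list)), ...map(fn, tail(list))] : [];

map(n => n + 1, [1, 2]); // [2, 3]
```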
At least, our implementation isn't a strict combinator anymore because it now has dependencies that are not passed into it as arguments. So is map a combinator or not? Do we even care if map is a combinator or not? And who holds the master list of combinators? I never really think of map as a combinator, because 90% of the implementations of map that I use aren't combinators.
They all either define variables in their bodies or use external utilities. Obviously, map can be a combinator, though. We just defined a version of map that met our definition of a combinator, that definition being, a combinator is a pure function that doesn't have any free variables. Free variables are simply variables in the context of a function that aren't passed in. And our amendment is that all dependencies are passed by a parameter. If map isn't a combinator, does it make a difference? No, of course not.
We don't care as long as it does its job and it works well. The thing about combinators that makes them so great is that they fit super well with everything we've been learning about and everything that's great about functional programming. They're predictable, so they're always gonna do the same job. They're unbreakable. They have no dependencies, so there's nothing to break. And they can be as simple or complex as we want. No matter what, since all the arguments are passed in and there are no dependencies, we can move a combinator to another program and it's not gonna break.
But if we wrote a library that implemented map, and we also implemented a tail and a head function, would we be horrible people for using them in our map function? I don't think so. It keeps the code DRY. And if these things are in a library that's packaged together, then they're always gonna be available. So we sort of need to use common sense there. Combinators are a means to an end in a lot of ways. We'll see in the next few videos that some simple combinators, a lot simpler than map, actually, can do a lot of good things for us.
- Basic techniques for functional programming
- Working with composition as a solution for your tasks
- Using the laws of compositionality
- Using combinators
- Encapsulating I/O using generic containers
- Lazy evaluation using generic containers
- Building a functional data store like Redux
- Creating a history of changed state to rewind or fast-forward an app
- Linking events in an app to actions on the data store