What is functional programming and what are its origins? What are its main principles?
- [Narrator] In this video, we're going to define functional programming. In particular, we're going to talk about the origins of functional programming, the main principles of this programming style, and one of those principles: not using traditional loops. So we're going to see an example of how to program without traditional loops. The history of functional programming can be traced back to the Lambda Calculus.
This is a mathematical language invented in the 1930s by Alonzo Church, a mathematician and logician who is considered one of the founders of computer science. In some sense, the Lambda Calculus is the first programming language. On the other hand, the syntax of the Lambda Calculus is very different and far removed from current programming languages. For instance, consider the simplest possible function, the successor function x + 1.
Now, to express this very simple function in the Lambda Calculus, you would have to write the rather strange expression λwyx.y(wyx). We're not going to explain its meaning here; the point is just that the Lambda Calculus is a very abstract mathematical language. On the other hand, some principles of the Lambda Calculus can be found in modern functional languages, and even in Java.
So the first principle is that the main objects in this kind of language are functions, and functions in the mathematical sense of the word: something that takes some input, returns some output, and always returns the same output given the same input. This property goes by the name of the function being stateless. In other words, the function has no memory of previous inputs, no memory of anything that happened in the past.
This property is so important that it's been given different names, such as purity, so you can talk about pure functions. It's also been called absence of side effects, and referential transparency. Obviously, you can find people who will discuss at length the subtle differences between these definitions, but for our purposes, they can be considered synonyms. They can all be taken to mean absence of memory.
So: having the same output for the same inputs. A consequence of this property is that there are no variables, because variables are used to preserve memory of past events. Therefore, there are also no assignments and no traditional loops, because loops are based on some variable repeatedly changing its value. Now, how can we program with no loops? Let's see a simple example.
Okay, consider these two Java fragments, these two methods. On the left-hand side, there is a method called arraySum. It takes an array of integers as a parameter and very simply sums all the values in the array and returns the sum. This is what every Java programmer would write in a second. How can we do the same thing with no loops? Well, we can use recursion.
Recursion means that the function, or the method, is going to call itself with slightly different parameters. So check out the method on the right-hand side, arraySum2. To achieve the same effect, we're going to need an extra parameter called start. arraySum2 will return the sum of the elements of the array whose index is greater than or equal to start. To do so, it's just based on a simple if-then-else.
If start is too large, that is, beyond the length of the array, it will return zero. Otherwise, it will return the sum of the first element, the element with index start, plus the result of calling the same method with an increased value of start. If you pause and think about it for a second, you will see that this obtains the same effect as the more natural method on the left-hand side, and it does so without any variables.
So as you can see, there are no variables here, just the two parameters, array and start, and we do not modify them. To stress this, I added the final keyword to these parameters. On the other hand, just because you can replace iteration with recursion doesn't mean that you should, and in particular, you should not do it in Java. That's because, like most imperative languages, Java is not optimized for recursion. The example on the left-hand side requires only constant space to perform the sum, and this constant space is taken by the two variables, sum and i.
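The two methods described above are not shown in the transcript, so here is a sketch of what they might look like; the names arraySum and arraySum2 follow the narration, and the rest is a reconstruction:

```java
public class ArraySum {
    // Iterative version: a loop with two mutable variables, sum and i.
    // Uses only constant extra space besides the array itself.
    static int arraySum(final int[] array) {
        int sum = 0;
        for (int i = 0; i < array.length; i++) {
            sum += array[i];
        }
        return sum;
    }

    // Recursive version: no loops, no assignments. Returns the sum of
    // the elements whose index is >= start. Each call adds a stack
    // frame, so the extra space is linear in the array's length.
    static int arraySum2(final int[] array, final int start) {
        if (start >= array.length) {
            return 0;
        } else {
            return array[start] + arraySum2(array, start + 1);
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(arraySum(a));     // prints 10
        System.out.println(arraySum2(a, 0)); // prints 10
    }
}
```

Note that the parameters are declared final, so nothing in arraySum2 is ever reassigned.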
Obviously, I'm talking about constant additional space, besides the size of the array itself. On the other hand, the recursive method on the right-hand side requires linear extra space: an amount of memory proportional to the size of the input array. This memory is used by the recursion, because every recursive call to the method takes some space on the stack.
This means that if the input array is very large, the iterative version on the left-hand side will work perfectly fine, whereas the recursive version on the right-hand side might run into a stack overflow. Back to our slides. Let's see some modern functional languages derived originally from the Lambda Calculus. The first of them was Lisp, invented in 1958.
It was actually the second programming language ever, after Fortran, and it has some modern derivatives, namely Scheme and Clojure. In particular, Clojure, which is about 10 years old, runs on the Java Virtual Machine. Another functional language is Haskell, and another big family of functional languages is called ML, for Meta Language. This branch was born around 1973, and one of the most recent languages in the family is F#, which is part of the Microsoft .NET framework. It's a multi-paradigm language, meaning it's not just functional but has regular imperative features as well.
So why do we care so much about functional programming? Because it has some very good benefits related to parallelism, that is, to the parallel execution of code. These benefits mostly come from the fact that functional programs are based on composition of functions instead of communication, such as communication between different threads. In a functional language, the whole program is a composition of functions, such as f applied to g of something, h of something else, and perhaps g again of something else.
Now, if g and h are stateless, also known as pure, then they can be evaluated in any order and the result will still be the same. In particular, they can be evaluated in parallel, at the same time. Hence, there are no race conditions, and we don't need mutual exclusion, synchronization, et cetera. Much simpler than multi-threading. That's all for this video.
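The composition idea can be sketched in Java with java.util.function.Function; the particular functions g and h and the sample data here are illustrative, not from the video:

```java
import java.util.List;
import java.util.function.Function;

public class PureComposition {
    public static void main(String[] args) {
        // Two pure functions: the same input always yields the same output.
        Function<Integer, Integer> g = x -> x * 2;
        Function<Integer, Integer> h = x -> x + 3;

        // Composition: f(x) = h(g(x)).
        Function<Integer, Integer> f = g.andThen(h);
        System.out.println(f.apply(5)); // prints 13

        // Because g has no side effects, applying it across a parallel
        // stream is safe: the elements may be processed in any order,
        // with no race conditions and no synchronization needed.
        int sum = List.of(1, 2, 3, 4).parallelStream()
                .mapToInt(g::apply)
                .sum();
        System.out.println(sum); // prints 20
    }
}
```

If g kept internal state, the parallel version could produce different results on different runs; purity is exactly what makes the reordering safe.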
Let's move on to the next video where we talk about new features of Java interfaces.
Note: To get the best results from this course, you should be familiar with basic Java programming concepts, including lists, maps, and sets, and have a suitable IDE, such as Eclipse, NetBeans, or IntelliJ.
This course was created and produced by Packt Publishing. We are honored to host this training in our library.
- What is functional programming?
- What are functional interfaces?
- Writing lambda expressions
- Creating functional interfaces
- Composing functions
- Sequential data processing with streams
- Using parallel streams