- [Douglas] It's time now for Asynchronicity. So there are two kinds of functions in the world. There are Synchronous functions, and Asynchronous functions. So let's first look at Synchronous functions. A Synchronous function is a function that does not return until the work is completed or has failed. So all of the functions that we wrote over the last couple of days have been Synchronous functions because that's how they work, and that's a very useful thing to have in a function because it means it's easy to reason about its behavior over time.
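To make the synchronous shape concrete, here is a minimal sketch (not from the talk; the `sum` name is a hypothetical example): the caller is suspended on the call and resumes only when the return value is ready.

```javascript
// A synchronous function: it does not return until the work is
// completed or has failed (by throwing). The caller is suspended
// for the whole duration of the call.
function sum(array) {
    let total = 0;
    for (const value of array) {
        total += value;
    }
    return total; // only here does the caller resume
}

const result = sum([1, 2, 3]); // execution waits on this line
console.log(result); // 6
```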
Now when a synchronous function calls another synchronous function, the caller is suspended in time, and nothing advances until the callee returns. If the caller is looking at a clock at the moment that they make the call, their experience will be that the hands jump forward quickly, but otherwise they are not aware that this stuff has happened, except that the thing that they asked for has magically been completed. And that makes it easy for us to reason about things, unless we need to make multiple things happen at the same time.
You can't make multiple things happen at the same time if you are suspended in time. So the way that's often mitigated is by the use of threads. Threads allow us to have multiple threads of execution happening through a memory space at the same time, so that lots of things can happen at the same time. Unfortunately, threads come with some problems, including races, deadlocks, and other reliability and performance problems, and we'll look more at these. So the threading model, like all models, comes with pros and cons, and the first pro is a really important, really significant one: no rethinking is necessary.
You can take any existing piece of code, put it in a thread, and it'll just work, so you don't have to make any changes to it in order to introduce it to a threaded environment. Now that doesn't necessarily mean that you'll never need to change it, but that starting-up phase is really easy. The next pro is that blocking programs are okay. It's okay for programs to block, and in fact that is what threads are for. Threads exist so that things can stop and have other things happening while they're stopped.
So execution continues as long as any thread is not blocked. But there are some cons. The first con is that stack memory is allocated per thread. This used to be a significant problem. It's not anymore; Moore's Law continued to ramp up memory capacity, and so thread stacks are now in the noise; we don't care about them anymore. A more important con is that if two threads use the same memory at the same time, a race may occur, and in fact this is a big problem.
But that's not the race I'm worried about; the race I'm worried about is this one. Another possible outcome of this program is an array containing only 'a' or an array containing only 'b', and this is not due to anything except what's happening in these two statements, and you can look at these for a long time and try to figure out, where did half of my data go? How did this fail? It's really hard to reason about programs that race in threads. So let's zoom in on this and look at what's going on. So that's the first statement.
So one possible ordering could be that both capture the length at the same time, and both will be using that to store into the array, and both will be using it to change the length of the array, and as the result, whichever one runs second is probably going to win. So that's where half of our data went, and this stuff is really hard to reason about. For one thing, you can't see it in the original code. But it's worse than this, because each of these statements could expand into multiple machine language statements, and you don't know how those are going to interleave, and each of those could expand at the low level into micro instructions, and you can't control how those will interleave, and it may be that the real-time behavior of this threaded code changes according to load.
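To see how that ordering loses data, here is a hypothetical sketch (not the slide's code) that plays out one racy interleaving by hand: both "threads" capture the array's length before either one stores, so the second store overwrites the first.

```javascript
// Simulating one possible interleaving of two threads, each trying
// to append to a shared array with: array[array.length] = value;
const array = [];

// Step 1: both threads capture the length at the same time.
const lengthSeenByA = array.length; // 0
const lengthSeenByB = array.length; // 0 -- the same value!

// Step 2: thread A stores at "its" index and bumps the length.
array[lengthSeenByA] = 'a';
array.length = lengthSeenByA + 1;

// Step 3: thread B does the same, using its stale length.
array[lengthSeenByB] = 'b';        // overwrites 'a'
array.length = lengthSeenByB + 1;

console.log(array); // [ 'b' ] -- half of the data is gone
```

Whichever thread stores second wins, and nothing in the original one-line statements hints that this interleaving is possible.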
So it might be working fine during development but then fail when we put it in production. Or it's working fine most of the year, but it fails at Christmas. You know, as things change, things that wouldn't appear to affect the behavior of the program can actually radically affect the behavior of the program. So it's impossible to have application integrity when we are subject to race conditions, so we mitigate that with mutual exclusion. Mutual exclusion allows only one thread to be running in a critical section at a time, and there's a long history of this stuff, starting with Dijkstra's semaphores, then Hoare's monitors, the Ada rendezvous, and now we call it synchronization, but these are all transforms on the same idea.
This used to be operating system stuff. It used to be that only operating systems were concerned with mutual exclusion and running multiple things at the same time, but this has all leaked into applications because of networking and because of the multi-core problem. The concern with networking is that we want to be able to have stuff happening while slow things are happening, and one of the slowest things we can do is go out to the network, because that has large latencies, and in some cases unknowable latencies, and so you can be stopped for a long time. Generally you can't afford to have systems be suspended for that long, so you need threads in order to allow them to continue.
Then there's the multi-core problem. CPU designers have lost the ability to make CPUs go faster, so instead they are giving us more of them, and we don't know how to use them. If you have a problem that is embarrassingly parallel, then you can take advantage of them, but most of what we do is embarrassingly serial, and we don't know how to take multiple cores, put them in our applications, and get a significant benefit from it. So that's where we are. With mutual exclusion, only one thread can be executing in a critical section at a time, and all other threads waiting to execute in the critical section will be blocked.
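One way to sketch mutual exclusion in JavaScript is a spinlock over shared memory. This is a minimal sketch, not the talk's code, and it assumes an environment with `SharedArrayBuffer` and `Atomics` (e.g. Node with worker threads); the `acquire` and `release` names are hypothetical.

```javascript
// A minimal spinlock using Atomics over shared memory.
// lock[0] === 0 means the lock is free, 1 means some thread holds it.
const lock = new Int32Array(new SharedArrayBuffer(4));

function acquire(lock) {
    // Atomically flip 0 -> 1. If another thread got there first,
    // compareExchange returns 1 and we spin until it is released.
    while (Atomics.compareExchange(lock, 0, 0, 1) !== 0) {
        // blocked: waiting to enter the critical section
    }
}

function release(lock) {
    Atomics.store(lock, 0, 0); // let the next waiting thread in
}

acquire(lock);
// ... critical section: only one thread runs here at a time ...
release(lock);
```

Every thread waiting in `acquire` is exactly the "blocked" state described above: it cannot make progress until the thread inside the critical section calls `release`.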
If the threads don't interact, then the programs can run at full speed across all the cores, but if they do interact, then races will occur unless mutual exclusion is employed. Unfortunately, mutual exclusion comes with its own dark side, and that is deadlock. Here we have two threads, Alphonse and Gaston. They are both programmed to wait for the other to stand up before they can stand up; this is deadlock. It turns out computer systems do this all the time. Here's another example; this is a real-world example from São Paulo.
Apparently they do this all the time. (audience laughing) You can think of each of these cars as being a thread which is ready to run; it's just waiting for the thread that's blocking it to get out of the way. So deadlock, that's a serious thing.
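The Alphonse-and-Gaston pattern can be simulated deterministically in a single thread. This hypothetical sketch (the `locks` object and `tryAcquire` helper are assumptions, not the talk's code) shows two "threads" each holding one lock while needing the other's, so neither can ever proceed.

```javascript
// A single-threaded simulation of deadlock: each "thread" holds
// one lock and waits forever for the lock the other one holds.
const locks = { a: null, b: null }; // null = free, otherwise the holder's name

function tryAcquire(lock, who) {
    if (locks[lock] === null) {
        locks[lock] = who;
        return true;
    }
    return false; // in a real thread, this is where it would block
}

// Thread 1 takes lock a; thread 2 takes lock b.
tryAcquire('a', 'thread1');
tryAcquire('b', 'thread2');

// Now each one needs the lock the other is holding.
const thread1Blocked = !tryAcquire('b', 'thread1');
const thread2Blocked = !tryAcquire('a', 'thread2');
console.log(thread1Blocked && thread2Blocked); // true -- deadlock
```

The classic mitigation is to make every thread acquire locks in the same fixed order, so a cycle of waiters can never form.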
This course was created by Frontend Masters. It was originally released on 6/20/2016. We're pleased to host this training in our library.