Learn about the benefits of serverless architectures, as well as some limitations.
- [Instructor] In traditional application architectures, there's a lot that goes into deploying and operating a service. You need to provision and configure machines, manage uptime, and install patches and security updates. It's much easier with a serverless app. As the name indicates, serverless allows you to build and run applications and services without thinking about servers. It's also known as FaaS, F-a-a-S, for functions as a service. Essentially, you write your core business logic, upload it to your cloud provider, and they handle the rest. There are several benefits to serverless architectures. They're pay as you go: you only pay when your code is running, typically metered on the order of hundreds of milliseconds. This can end up saving substantial amounts of money for many applications. They're scalable: the cloud provider will seamlessly handle scaling your code from one execution to thousands of simultaneous executions, and then back down again, depending on demand. They bring a lot of productivity and leverage. It's hard to overstate this: allowing developers to focus on their core value, and leave the undifferentiated heavy lifting to the provider, is a huge boost to productivity. And security: we no longer have to worry about patching hosts or locking down configurations. Most cloud providers have some version of serverless or cloud functions. For this course, we'll focus on AWS. Their serverless product is called AWS Lambda. AWS often tells their customers to go serverless where possible, and they've seen tremendous success with their customers deploying serverless architectures. So when is serverless not possible? Well, to enable the benefits we've discussed, there are some trade-offs. The first is that our functions themselves need to be treated as transient. Since they may be scaled up or down depending on load, or for operational reasons, we can't count on keeping any local state.
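To make "write your core business logic and upload it" concrete, here's a minimal sketch of an AWS Lambda handler in Python. The function name `handler`, the greeting logic, and the API Gateway-style response shape are illustrative assumptions; Lambda itself only requires a callable that accepts an event and a context.

```python
import json


def handler(event, context):
    # Lambda invokes this function with the triggering event (a dict for
    # JSON payloads). We return an API Gateway-style response: a status
    # code plus a JSON-encoded body.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You'd package this file, upload it as the function's code, and the provider handles provisioning, scaling, and billing per invocation; nothing here manages a server.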
We can, of course, keep state in external services like a database, a key-value store, a memcached server, et cetera. This is a departure from legacy systems that keep state on local disk or in local memory, but I'll happily argue that it leads to better-designed systems. Even with traditional data centers, counting on any one, or seven, machines always being up is unrealistic. Failure is always an option; by making it the new normal, you get a more resilient system. Next, most serverless systems have limits on maximum function run time. At the time of this recording, in early 2019, it's 15 minutes or less. Since our code is not always running, we can't hold long-running state or perform long-running tasks. There's also a limit on the maximum amount of memory we can use. We can pay more to increase it, up to a point, but there's a limit. There are also other limitations on the maximum payload size, the maximum deployment size, how much local disk we can use, et cetera. If we can't fit within these constraints, serverless may be a problem.
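The "keep state in external services" point can be sketched like this. The `FakeStore` class below is a hypothetical in-memory stand-in for a real external store such as DynamoDB or memcached; the design point it illustrates is that the handler reads and writes its state through that store rather than trusting the container's local memory, which may be recycled between invocations.

```python
class FakeStore:
    """Hypothetical stand-in for an external key-value store
    (e.g., DynamoDB or memcached). Only the interface matters here."""

    def __init__(self):
        self._data = {}

    def get(self, key, default=0):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value


store = FakeStore()


def handler(event, context):
    # Don't keep the count in module-level memory and trust it to
    # survive: this execution environment is transient. Instead,
    # read-modify-write against the external store on every call.
    count = store.get("invocations") + 1
    store.put("invocations", count)
    return {"invocations": count}
```

Because every invocation round-trips through the store, the count survives even if the provider tears down one container and spins up another, which is exactly the failure-as-normal posture described above.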
- Serverless components for REST services
- Creating your first Chalice app
- Routing requests
- Customizing responses
- Implementing basic authentication
- Integrating Cognito
- Setting up custom policies
- Splitting up an app
- Writing and running tests
- Creating a CD pipeline with CodePipeline