Learn how to work with AWS Lambda. Explore configuration parameters for Lambda, such as memory size, and monitor functions using CloudWatch logs, X-Ray traces, and service maps.
- The next serverless service we're going to look at is for compute, and it's AWS Lambda. The concept here is that you have the ability to run a function, and you're only billed when that function is called. So you can think of it as a method, or even a verb. This has really revolutionized the industry, and created an entire design pattern called serverless. In point of fact, I think S3 file storage was the first serverless service. I tend to have a data perspective rather than a compute perspective because of my background, but the two together are what we typically see.
The easiest way to see how this works is to actually create a function. I'm going to select a runtime, Python 3.6, and I'm going to select the hello-world template. I'm going to call this hello-python. I'm going to need an IAM role, so I'm going to create a role from the template, and I'm going to call this python-lambda-demo.
Then I can choose a policy template. Lambdas are little units of compute that have integrations into other Amazon services. This is similar to what we saw with Kinesis Firehose, where it has integrations into data storage such as S3. You can see that you can choose a template here if you want to work with another service, and let's just actually pick S3 since we've been looking at that. Now we're going to look at the Lambda function code. And you can see, really it doesn't matter the language.
So if you're not familiar with Python, don't worry, this isn't really a Python course. The idea here is that the handler is the entry point: we have an event that occurs, and we have a context, so an invocation. We're going to print out some values, and if the Lambda can't fire, then we're going to get an exception message. We're going to scroll down and click Create function. Now we have our function. We are not dealing with a virtual machine; although there is one, Amazon's managing it for us.
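The handler pattern looks roughly like this. This is a minimal sketch in the spirit of the hello-world blueprint, not the exact generated code; the event keys are illustrative:

```python
import json


def lambda_handler(event, context):
    """Entry point: Lambda invokes this with the triggering event and a runtime context."""
    # Print the incoming event so it shows up in the CloudWatch log stream.
    print("Received event: " + json.dumps(event))
    try:
        # Echo a value back from the event; a normal return marks the invocation as succeeded.
        return event["key1"]
    except KeyError:
        # Raising an exception marks the invocation as failed.
        raise Exception("key1 not found in event")
```

You can exercise this locally by calling `lambda_handler` with a plain dict, which is exactly what the console's test-event feature does for you.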
We're not dealing with a Docker container; there is one, but Amazon's managing it for us. We're simply dealing with function invocation. In order to test this, we're going to configure a test event. We'll call this HelloPythonTest. We'll just put in a value so we know it's real: I'm at LinkedIn Learning today, and it is hot outside. We'll click Create, and then we'll click Test.
We can see that it succeeded. In fact, I'm going to click it a few times. Every time I click this, my account is being charged. Now my account is being charged a very minuscule amount. In fact, I've had customers who have run on the free tier of Lambda, they've run production applications with thousands of users, even tens of thousands of users. The reason is, Amazon is time-slicing its own infrastructure. So they're getting the economies of scale, and they're passing along the cost savings to you.
In terms of availability and scalability, this is also a wonderful solution, and it's really taking the industry by storm, because there are no servers to manage. You simply run your code, and when you need more invocations Amazon handles all the underlying infrastructure. So there are a couple of things that you do need to be aware of in terms of setup of Lambda. Let's look at Triggers. You can see that we don't have any triggers associated with this.
We're just invoking the function manually. You typically will use this as what we like to call glue code between services and when events occur. So, what kind of events? Well, a simple example is dropping a file into an S3 bucket, and then some sort of message or notification goes out along your data stream. Let's actually add a trigger, and we'll add S3. You can see there's tons of integrations. We want to set bucket as the event source.
So let's select our demo-simple bucket, and then we're just going to say, when we have an object removed, when we delete something. Then we can filter that down with prefixes or suffixes, and here we're setting up permissions automatically. We're going to click Submit. Okay, so here we go. This trigger is on our demo-simple bucket, so let's remove something. Let's go to S3, and demo-simple.
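When the trigger fires, the event the handler receives carries the bucket and key of the object that changed. A sketch of pulling those out, following the standard S3 notification record shape:

```python
def handle_s3_event(event, context=None):
    """Extract the event name, bucket, and object key from each S3 notification record."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append({
            "event": record["eventName"],     # e.g. "ObjectRemoved:Delete"
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
        })
    return results
```

This is the glue-code pattern: the handler inspects the event and then calls out to whatever service comes next in your pipeline.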
Oh, and we have to put something in there, so let's put something in there. We'll add this file, click Next, wait for it to upload. Now we'll select it and delete it. Now we'll switch back to our Lambda, go to the Monitoring tab, and click Refresh.
Now we'll see that we've had eight invocations. So as with the other serverless services that we've been looking at, that are charged by the invocation or usage, Lambda's the same. You're actually billed by the size of the Lambda, which we'll look at in a minute, and how many times it was invoked, how long it ran, and then we have just some other information here for scaling: throttling, iterator age, so on and so forth. There's a couple of different ways to monitor it. The first way is in CloudWatch.
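To make that billing model concrete, here's a rough back-of-the-envelope sketch. The per-request and per-GB-second rates below are illustrative placeholders, not quoted prices, and this ignores the free tier and the rounding-up of billed duration; check current AWS pricing:

```python
def lambda_cost(invocations, avg_duration_ms, memory_mb,
                price_per_gb_second=0.0000166667,  # illustrative rate
                price_per_request=0.0000002):      # illustrative rate
    """Estimate Lambda cost: duration is billed in GB-seconds, plus a per-request fee."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    return gb_seconds * price_per_gb_second + invocations * price_per_request
```

At these example rates, a million invocations of a 128 MB function running 100 ms each comes out to well under a dollar, which is why small production workloads can sit inside the free tier.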
If I open this up, this is our log, and you can see there are our invocations. Although this is useful if you need this level of detail, there are other ways to monitor that provide great information for scaling. Let's go back to the Lambda console and go to Configuration. In Configuration, if we scroll down we see our source code. Then we see environment variables, where we can pass in values. We see Tags; we have one tag that was auto-populated by Amazon, and we've been using tags throughout this course to tag all of our resources as a best practice.
We then have the security context, the execution role. We have the network, and by default Lambdas run outside of a virtual private cloud, although they can be associated with one if you have a security need for that. It's very important when you're designing Lambdas to understand how basic settings impact performance, scalability, and cost. By default a Lambda is allocated 128 MB of memory and a three-second timeout.
If you were to drag this slider, or to rescale your function to allocate more memory, then the per invocation price would go up. As with some of the other serverless services, it's important to do some load testing to understand how to properly scale your Lambda. Now to that end, we have this section down here, Debugging and error handling. There's a new capability, called a dead letter queue, which helps you to understand what happens on failure, and another new capability called active tracing.
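As a sketch of how that memory slider is constrained: around the time of this recording, memory was configurable from 128 MB up to 3008 MB in 64 MB increments (AWS has since changed these limits), and the per-invocation duration charge scales linearly with the setting:

```python
def valid_memory_size(memory_mb, minimum=128, maximum=3008, step=64):
    """Check a Lambda memory setting against the configured range and step.
    The defaults reflect limits typical at recording time; current limits differ."""
    return minimum <= memory_mb <= maximum and (memory_mb - minimum) % step == 0
```

So doubling the memory roughly doubles the per-invocation duration price, which is why load testing before picking a size matters.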
If I turn this on, then I will have to set permissions for a new service called X-Ray. X-Ray allows you to see the invocation path across multiple Lambdas. So I'll have to adjust the permissions, and then I can see the execution path for this Lambda. Then if I had Lambdas that were connected to one another as a chain, I could see the execution path across the X-Ray service. Now I'm going to open IAM, find this role, add these permissions, and then click Save to enable tracing.
I'm opening up IAM. I'm selecting roles. I'm selecting the python-lambda-demo. And I'm attaching a Policy for the X-Ray tracing service. For ease of demo, I'm just going to give X-Ray full access, and click Attach policy.
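For a production role you'd scope this down instead of granting full access. A sketch of a least-privilege policy document with just the X-Ray write actions a traced Lambda needs:

```python
import json

# Minimal X-Ray write permissions for a Lambda execution role,
# instead of attaching full X-Ray access as in the demo.
XRAY_WRITE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "xray:PutTraceSegments",
                "xray:PutTelemetryRecords",
            ],
            "Resource": "*",
        }
    ],
}

policy_json = json.dumps(XRAY_WRITE_POLICY, indent=2)
```

This document could be attached as an inline policy on the python-lambda-demo role in place of the full-access managed policy.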
Now I'm going to return to the Lambda console, scroll up, click Save. Now I'm going to click Test a few times. I'm also going to go over to my bucket, upload that same file, and then delete it because that's the trigger that we've set up.
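The same setting can also be applied programmatically. This is a sketch of the arguments you'd pass to boto3's update_function_configuration; the call itself is commented out because it requires AWS credentials, and hello-python is just our demo function name:

```python
def active_tracing_args(function_name):
    """Build the arguments that enable X-Ray active tracing on a Lambda function."""
    return {
        "FunctionName": function_name,
        "TracingConfig": {"Mode": "Active"},  # "PassThrough" is the default mode
    }

# With AWS credentials configured, you would apply it like this:
# import boto3
# boto3.client("lambda").update_function_configuration(**active_tracing_args("hello-python"))
```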
Go back to the Lambda console, go to Monitoring, refresh, you can see now we've had over 15 invocations. View the traces in X-Ray. In X-Ray, we have a list of all of our traces. I'll just start with the top one. Here we can get detailed information about the invocation time, which can really help us with properly sizing our Lambdas for scaling.
The last thing I want to show you about X-Ray is the service map. You can see that I have some throttling going on. If I pull up the Map legend, I can actually focus in on those errors, and I can look at the service details. Tools like X-Ray are really critical in helping you understand how to properly configure serverless services like Lambda for best design patterns.
This course is also an exam preparation resource, as it covers topics that map to the AWS Certified Solutions Architect – Associate exam.
- AWS design concepts
- Serverless services
- Server-based services
- Code tools for implementation
- Design trade-offs for AWS applications