Join Arthur Ulfeldt for an in-depth discussion in this video, Configure service autoscaling, part of Deploying Docker to AWS (2017).
- [Instructor] Now we're going to add some automatic scaling to the service we've been working on. Let's start by creating the CloudWatch alarms that will drive these events. Go to Services, find CloudWatch. Okay, Alarms, Create Alarm. And here, we're going to search by cluster name and service name and find the MemoryUtilization metric for date service with ALB.
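(For reference, the same metric can be found from the AWS CLI. This is just a sketch; AWS/ECS and MemoryUtilization are the standard ECS namespace and metric, and you'd pick out your own cluster and service dimensions from the output.)

    # List ECS MemoryUtilization metrics and their ClusterName/ServiceName dimensions
    aws cloudwatch list-metrics \
        --namespace AWS/ECS \
        --metric-name MemoryUtilization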
Check that. Next, I'm going to call this date service high memory. Okay. And we would like this to go off whenever the memory use is above 80% for one consecutive period. Under Actions, I don't actually need any actions to be taken by the alarm itself; all the actions will be taken by the service in response to the alarm.
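(If you'd rather script this alarm than click through the console, a minimal CLI sketch follows. The cluster name my-cluster and the hyphenated service name date-service-with-alb are assumptions standing in for the names used in the video; the 80% threshold and single evaluation period match the video, while the 300-second period is just the console default.)

    # High-memory alarm: fires when average memory use exceeds 80% for one period
    aws cloudwatch put-metric-alarm \
        --alarm-name date-service-high-memory \
        --namespace AWS/ECS \
        --metric-name MemoryUtilization \
        --dimensions Name=ClusterName,Value=my-cluster Name=ServiceName,Value=date-service-with-alb \
        --statistic Average \
        --period 300 \
        --evaluation-periods 1 \
        --threshold 80 \
        --comparison-operator GreaterThanThreshold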
So, let's go ahead and create that one. And while we're here, we're going to create the same alarm for low memory usage. Find MemoryUtilization for the date service again and click Next. We'll call this one date service low memory, and again, I don't need any actions to be taken by the alarm itself. Create the alarm.
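(The low-memory alarm is the mirror image. The video doesn't state the low threshold, so the 20% here is only a placeholder; the names are the same assumptions as above.)

    # Low-memory alarm: fires when average memory use drops below the placeholder threshold
    aws cloudwatch put-metric-alarm \
        --alarm-name date-service-low-memory \
        --namespace AWS/ECS \
        --metric-name MemoryUtilization \
        --dimensions Name=ClusterName,Value=my-cluster Name=ServiceName,Value=date-service-with-alb \
        --statistic Average \
        --period 300 \
        --evaluation-periods 1 \
        --threshold 20 \
        --comparison-operator LessThanThreshold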
So now I have an alarm that will go off when memory use gets too high, and a separate alarm that will go off when memory use gets too low. Now let's configure our service to respond to these. Okay, back to the EC2 Container Service. Choose my cluster, then date service with ALB, and let's update this service. Okay: Configure Service Auto Scaling, enable auto scaling.
And here we need to choose some limits. We're going to choose a minimum number of one; it should never scale all the way down to nothing. We'd like it to start off at one, which is the desired number. And then let's say a maximum of five; we don't want to pay for more than five regardless of what the load is. And I'll use the default role. Now, let's add some policies to respond to our alarms. Policy name: high-memory.
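(Under the hood, the console is registering the service's DesiredCount as a scalable target with Application Auto Scaling. A rough CLI equivalent, with the same assumed names, is sketched below; older CLI versions may also need a --role-arn for the ECS auto scaling role the console picks by default.)

    # Register the service's desired count as a scalable target: min 1, max 5
    # (the starting desired count of 1 is set on the service itself)
    aws application-autoscaling register-scalable-target \
        --service-namespace ecs \
        --resource-id service/my-cluster/date-service-with-alb \
        --scalable-dimension ecs:service:DesiredCount \
        --min-capacity 1 \
        --max-capacity 5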
I'm going to use the date-service-high-memory alarm to add one task when the service is over 80%. The cool-down period here is very important: it keeps the system from auto scaling in response to its own scaling changes, which would cause a kind of resonance. So, in general, use a good long cool-down period between actions to prevent unwanted scaling changes. Okay.
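(A sketch of the same scale-out policy from the CLI: a step scaling policy that adds one task, with a cool-down. The 300-second cool-down is only an example value, not one given in the video. The policy ARN this returns would then be attached to the high-memory alarm as an alarm action, which is the wiring the console does for you.)

    # Scale-out policy: add one task when the high-memory alarm fires
    aws application-autoscaling put-scaling-policy \
        --policy-name high-memory \
        --service-namespace ecs \
        --resource-id service/my-cluster/date-service-with-alb \
        --scalable-dimension ecs:service:DesiredCount \
        --policy-type StepScaling \
        --step-scaling-policy-configuration '{"AdjustmentType":"ChangeInCapacity","Cooldown":300,"StepAdjustments":[{"MetricIntervalLowerBound":0,"ScalingAdjustment":1}]}'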
We're going to add another scaling policy for the low-memory case, and we'll choose the low-memory alarm. This time we want to remove one task when the memory use is too low, again with a cool-down period. Okay. Now let's save. We can see our new policies appear under the Auto Scaling section, so let's click Update Service to save our changes.
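(And the matching scale-in policy, again a sketch with an assumed cool-down, removing one task when the low-memory alarm fires.)

    # Scale-in policy: remove one task when the low-memory alarm fires
    aws application-autoscaling put-scaling-policy \
        --policy-name low-memory \
        --service-namespace ecs \
        --resource-id service/my-cluster/date-service-with-alb \
        --scalable-dimension ecs:service:DesiredCount \
        --policy-type StepScaling \
        --step-scaling-policy-configuration '{"AdjustmentType":"ChangeInCapacity","Cooldown":300,"StepAdjustments":[{"MetricIntervalUpperBound":0,"ScalingAdjustment":-1}]}'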
Alright. Now that that's been applied, our service will scale according to its load.
Released 8/9/2017
- Adding Docker to EC2 instances
- Creating ECS instances and clusters
- Building tasks
- Creating tasks through the CLI
- Creating a service from a task
- Autoscaling services
- Deploying an ECS CloudFormation stack