Kubernetes provides several high-level abstractions to manage and access containerized applications: deployments and services. The aim of this video is to demonstrate how these abstractions can be declared and used in Kubernetes to run a small Go microservice.
- [Instructor] So this video is all about implementing the deployment and service descriptors so we can run our Go microservice within Kubernetes. So in this video we're going to take a look at how to write a simple YAML descriptor for a deployment. I'm going to show you how you can assign CPU and RAM resources to a container. I'm going to show you how to write a simple YAML descriptor for a Kubernetes service. We're going to add liveness and readiness probes to our container, and then finally, we're going to connect to our service through a node port.
So this is a lot of ground to cover, so we better get started. So as usual, we need a console and an integrated development environment. So let's create a YAML file for our deployment. Now this is what the header looks like, OK? So you need to make sure that the API version currently is extensions/v1beta1 and the kind is Deployment. So this is the basic header. Next is the spec, OK? So this specifies what the deployment is.
So, for example, we always want two replicas of our pod to be running. So next, we're going to specify the template. We specify some metadata, we specify some labels, maybe app and tier. Remember, labels are arbitrary key-value pairs you can assign, and in here, we have another spec that specifies the containers we want to run.
So name, cloud-native-go, as usual, and we specify the image we want to use. We need to specify ports, in this case, container port 8080, and if you like, we can specify an environment variable like this. So, the first deployment YAML is done. So let's go back here. So this is our working directory. kubectl get deployments, pods, and replica sets.
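Putting those pieces together, a deployment descriptor along the lines of what's described here might look like this. This is a minimal sketch: the image tag, the tier label value, and the environment variable name are assumptions for illustration, and on current Kubernetes clusters the API version would be apps/v1 rather than extensions/v1beta1.

```yaml
# k8s-deployment.yml -- sketch of the deployment descriptor described above
apiVersion: extensions/v1beta1   # apps/v1 on current Kubernetes versions
kind: Deployment
metadata:
  name: cloud-native-go
spec:
  replicas: 2                    # always keep two pod replicas running
  template:
    metadata:
      labels:                    # arbitrary key-value pairs; the service selector will match these
        app: cloud-native-go
        tier: service
    spec:
      containers:
      - name: cloud-native-go
        image: cloud-native-go:1.0.0   # assumed image name and tag
        ports:
        - containerPort: 8080
        env:
        - name: PORT                   # assumed environment variable
          value: "8080"
```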
If I issue this, I shouldn't find anything, because we have an empty cluster, right? No resources found. So I use the kubectl create command and pass it the YAML file that we just created with -f, OK? So let's see what happened in the Kubernetes cluster. So we issue the get command again for deployments, pods, and replica sets, and what you see now here is interesting, OK? So first of all, we have a deployment created.
You see here that I already have two pods running, because I said I always wanted to have two replicas running, and down here, you have the replica set, which basically makes sure that all those pods are running. So with this simple deployment, I managed to create the pods, the deployment, and a replica set, OK? Right. Now, let's change this a little and add some resources to our spec, OK? So you see this here.
We specify the resources, and there are two basic kinds of resources I can specify: requests and limits. Requests are the memory and CPU resources that, well, Kubernetes basically uses to schedule our pods onto the nodes, OK? So if you request more than the nodes offer, your pods will never be scheduled. And limits, here, well, basically tell Kubernetes that once those limits are exceeded, it will start killing and restarting your pods, because it assumes they are misbehaving.
So be careful not to specify your limits too low. Otherwise, Kubernetes will constantly kill and restart your pods, all right? So we save that, we go back here, and we say kubectl apply -f. Now we apply all the changes we just made. So you see here the deployment has been configured. So what we could do is kubectl describe deployment.
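Inside the container entry of the deployment, the resource constraints described above could look like the following sketch. The specific quantities here are illustrative assumptions, not values from the video:

```yaml
# Fragment of the container spec in the deployment descriptor.
# Indented to sit under spec.template.spec.containers[].
resources:
  requests:            # used by the scheduler to place the pod on a node
    memory: "64Mi"
    cpu: "125m"        # 125 millicores = 1/8 of a CPU core
  limits:              # exceeding these makes Kubernetes kill and restart the container
    memory: "128Mi"
    cpu: "250m"
```

Note that memory limits in particular should leave headroom above the application's real working set, since exceeding a memory limit terminates the container.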
Describe the deployment cloud-native-go. This is what was done to it, OK? So we specified some resource constraints for our pods. So next up, services. So remember, we need services to interact with our pods, because while the pods come and go, the service always stays.
Now this is what a service looks like. So we have API version, v1. Kind, Service. Again, we specify some metadata, give it a name, and a few labels. Now here, the interesting part is the spec of the service. You give it a type, NodePort. We'll come back to this one a little later. We tell it the port the service should listen on. So in this case, we use 8080, and we specify a selector, and basically, what you need to do here is make sure the selector matches the labels of our pods, OK? So let's go back here.
You see here that those are the labels of the pods, they have app cloud-native-go, and in the service, the selector matches them, OK? So let's save that one. Again we use kubectl create -f and the YAML file we just created. So you see the service has been created. And if we display the service, we see here the service we just created.
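A service descriptor matching the narration above might be sketched like this; the metadata labels are assumptions, and the essential part is that spec.selector matches the pod labels from the deployment:

```yaml
# k8s-service.yml -- sketch of the service descriptor described above
apiVersion: v1
kind: Service
metadata:
  name: cloud-native-go
  labels:
    app: cloud-native-go
spec:
  type: NodePort           # also opens a dynamic port on every cluster node
  ports:
  - port: 8080             # port the service listens on inside the cluster
  selector:
    app: cloud-native-go   # must match the labels on the deployment's pods
```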
And you see here the port 8080, and there is an additional port, and this is because of the type NodePort we specified. So basically we told Kubernetes to open an additional dynamic port on the node, which we can now use to access the service, OK? So let's see if that works. So we open the browser, and use that port, and hey. Now we have accessed our pods running behind this service, OK? And the service takes care of load balancing all incoming requests and forwarding them to the pods that are running.
So, one final thing. How does Kubernetes know that our pods are healthy, and how does the service know it can forward requests to the pods? Now let's go back to our deployment and add a little more to it here. Now there are two types of probes you need to know. First of all, there's the readiness probe, right? So the readiness probe is basically, in our case, an HTTP GET request.
Kubernetes will issue it against our pods, and only pods that respond with status code 200 will be considered ready and will be given traffic by the service. The second type is the so-called liveness probe. So again, here, the liveness probe is a GET request, which is issued against those pods, and if Kubernetes finds that a status code other than HTTP 200 is returned, our pod will be considered unhealthy and restarted by Kubernetes automatically, OK? So this is the means by which you can have, well, high availability of your pods: if something becomes unhealthy, it will be killed and restarted by Kubernetes automatically.
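The two probes described above could be added to the container entry of the deployment roughly like this. The health endpoint path and the timing values are illustrative assumptions; the actual path depends on what the Go service exposes:

```yaml
# Fragment of the container spec in the deployment descriptor.
# Indented to sit under spec.template.spec.containers[].
readinessProbe:            # only pods passing this receive traffic from the service
  httpGet:
    path: /ping            # assumed health endpoint of the Go microservice
    port: 8080
  initialDelaySeconds: 5   # give the container time to start before probing
  timeoutSeconds: 5
livenessProbe:             # pods failing this are killed and restarted
  httpGet:
    path: /ping
    port: 8080
  initialDelaySeconds: 5
  timeoutSeconds: 5
```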
So again, we perform the apply command with our modified Kubernetes deployment, OK? So the deployment has been configured and we're up and running again. So we issue a describe command again, and we get the pods. So there's one being terminated, and here we have two running.
So, OK, that was it for this video, and in the next video, I'm going to show you how you can scale those deployments horizontally and how you can perform rolling updates on your deployments. I hope to see you then. Bye-bye.
This course was created and produced by Packt Publishing. We are honored to host this training in our library.
- Implementing Go HTTP Server
- JSON marshalling and unmarshalling of Go structs
- Implementing a simple REST API
- Using Docker workflows and commands
- Building a naïve Docker image
- Running a containerized Go microservice
- Kubernetes architecture and concepts
- Deploying a Go microservice to Kubernetes
- Implementing service descriptors
- Performing rolling updates