Cloud native applications require an orchestrator to run and operate the individual microservices. Learn how to deploy and run a Go microservice with Kubernetes.
- [Instructor] Hi, welcome back to Advanced Cloud Native Go. My name's Mario-Leander Reimer, and I'm happy to be your host. This video is about microservice orchestration with Kubernetes. We're going to take a look at how we can orchestrate the previously implemented Advanced Go microservice using Kubernetes. In the first step, we're going to describe and create a so-called Kubernetes deployment; in the second step, we're going to describe and create a Kubernetes service; and in the final step, we're going to deploy and run the deployment and the service within our locally running Kubernetes cluster.
Before we start coding, let me quickly recap some of the important Kubernetes concepts you need to know. Right in the center is the so-called pod. The pod is the smallest deployable unit of computing within Kubernetes. The pod contains our containers, and the pod can be described using labels. The next thing you need to know is the so-called deployment. It allows for declarative updates of pods. The service is an abstraction for a logical collection of pods.
The service is also discoverable within the Kubernetes cluster using a DNS name. Also important is the so-called ingress. It allows external access from the outside world into defined services inside the Kubernetes cluster. So let's get going and implement these concepts. Let's open our IDE. We see here a basic YAML file, and the kind is Deployment; I've already prepared a few things. Here, for example, you see replicas set to two. This is the desired number of pod instances that Kubernetes will keep running.
So let's define the containers of this deployment. Here you define the containers of the pods: you give each container a name and you specify the image. Remember, in the previous section we used Docker Compose to build this Docker image, gin-web, version 1.0.1. We define the container ports, in this case port 9090, and we pass in the PORT environment variable, which we set to 9090 as well.
So this defines the one container. What you can also do is define resources, for example like this. You can define the resources the container usually requests, and you can define limits; if a container hits those limits, Kubernetes will automatically restart your greedy containers. You can also define so-called readiness and liveness probes. Here I define the readiness probe: type HTTP GET, calling the ping endpoint of our microservice on port 9090.
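Putting these pieces together, a deployment manifest along these lines captures what is described here. This is a sketch: the image name, ports, and the ping endpoint follow the transcript, while the label names, resource values, and probe timings are illustrative assumptions (the course predates the apps/v1 API, so your apiVersion may differ).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gin-web
  labels:
    app: gin-web        # assumed label; matched by the service selector later
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gin-web
  template:
    metadata:
      labels:
        app: gin-web
    spec:
      containers:
      - name: gin-web
        image: gin-web:v1.0.1   # built via Docker Compose in the previous section
        ports:
        - containerPort: 9090
        env:
        - name: PORT
          value: "9090"
        resources:              # example values, not from the transcript
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        readinessProbe:         # is the pod ready to receive traffic?
          httpGet:
            path: /ping
            port: 9090
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:          # is the pod still healthy? recreated if not
          httpGet:
            path: /ping
            port: 9090
          initialDelaySeconds: 10
          periodSeconds: 5
```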
The readiness probe determines whether our microservice is ready to receive traffic from Kubernetes. The liveness probe is similar, only that it is called regularly by Kubernetes to check whether our pod is healthy; if it's unhealthy, the pod will be recreated automatically by Kubernetes. So that's our deployment done. Let's go for the service. Again, it's a simple YAML file, this time of kind Service.
You see that here. The service is used to access our pods within the cluster. I specify type NodePort, port 9090, and the selector app: gin-web. This selects all pods that have app: gin-web as a label. If you go back to the deployment, you see that in the metadata of the deployment there is the label app: gin-web.
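The service just described might look roughly like this; the port, type, and selector come from the transcript, while targetPort and the metadata name are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gin-web
spec:
  type: NodePort          # exposes the service on a port of each cluster node
  ports:
  - port: 9090
    targetPort: 9090      # the containerPort defined in the deployment
  selector:
    app: gin-web          # matches the pod label from the deployment
```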
So this service will match any pods created by this deployment. Let's go to our console. Like I told you before, we're using Minikube for local development; I'm using Minikube version 0.19 at the moment. If I run minikube ip, I get the local IP Kubernetes is running on on my machine.
If I run kubectl cluster-info, you see that the Kubernetes master is running at this IP. What I can do now is run minikube dashboard. This opens my default browser with the Kubernetes dashboard. You see here the deployments and the workloads; nothing's deployed yet.
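As a quick sketch, the local inspection steps so far look roughly like this (output and exact flags vary between Minikube versions):

```
minikube version        # the video uses v0.19
minikube ip             # prints the local cluster IP
kubectl cluster-info    # shows the Kubernetes master at that IP
minikube dashboard      # opens the dashboard in the default browser
```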
What I can do now is run kubectl apply -f and give it the directory where we created the YAML files before. You see we created a deployment, an ingress, and a service. Then, kubectl get deployments.
You see, because I specified three replicas, I have gin-web with all three replicas available. If I run kubectl get pods, you see that I have three running pods. Using the kubectl logs command, I can display the logs of any of these pods.
Here again is the console output of our Advanced Go microservice, and you see that the ping endpoint is called regularly. That's because Kubernetes pings our microservice regularly to check whether it's alive and healthy. If we run kubectl get services, you see this gin-web service on port 9090 with node port 32225.
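The deploy-and-inspect sequence above can be summarized like this; the manifest directory name and the pod name are placeholders for whatever your setup uses:

```
kubectl apply -f ./kubernetes/   # creates the deployment, ingress, and service
kubectl get deployments          # desired vs. available replicas
kubectl get pods                 # one line per running pod
kubectl logs <pod-name>          # console output; shows the regular /ping calls
kubectl get services             # shows the service port and assigned NodePort
```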
So let's access that one. I can run minikube service gin-web; this opens the Kubernetes service in our browser. We see here that I can access the index and, as usual, use the microservice.
Hello. We can access the ping endpoint, and /api/books. You can also see all of this on the web console. We have a deployment called gin-web, a so-called replica set, and the three pods running. You can have a look at the pods and describe them.
You can also have a look at the logs. So this is the web console. As you remember, you can also use the kubectl scale command, for example kubectl scale deployment gin-web --replicas=8. This will now scale our deployment to eight replicas.
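The scaling step is just the one command, followed by watching the new pods come up (the -w flag is a convenience assumption for watching changes):

```
kubectl scale deployment gin-web --replicas=8
kubectl get pods -w    # five new pods appear and become Ready
```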
You see here, those were the three running before, and Kubernetes has now automatically created five additional pods, which should become ready in just a few seconds. And off you go, we scaled our microservice to eight replicas. So that was about it: running our Advanced Go microservice within Kubernetes locally. I hope you enjoyed this video. In the next section, we're going to look at service discovery and configuration, both important concepts in any cloud native application.
So I hope to see you there, bye-bye.
This course was created and produced by Packt Publishing. We are honored to host this training in our library.
- Cloud native application platforms
- Go frameworks and libraries for microservices
- Using Docker for containerization
- Using Kubernetes for orchestration
- Using Consul for microservice discovery and configuration
- Registration and lookup
- Implementing service discovery using Kubernetes
- Microservice communication patterns: Sync and async
- Using circuit breakers for resilient communication
- Implementing message queuing with RabbitMQ
- Using Apache Kafka for publish/subscribe