Join Lynn Langit for an in-depth discussion in this video Use Container Engine: GKE, part of Google Cloud Platform Essential Training (2017).
- [Instructor] So we just looked at Google's virtual machines, and they're great, and I use them frequently. However, there's a new game in town, and that's application virtualization: containers, or Docker. And it's easiest to understand by looking at a picture. So, if you see on the left, we have three virtual machines. Each virtual machine is running applications, which is great, but each has to have a full copy of the operating system, in our case Linux, for example, if we were running Eclipse Che three times.
And that can get expensive in terms of storage space, compute, and maintenance. So, a more efficient method, for some applications, is to use application virtualization, which you see reflected on the right. The hypervisor that's used for virtualization is replaced by the Docker engine, and you'll see there's no need for a guest OS to be put in each container. This can result in tremendous savings. I find for my customers it can be a 10X, a 50X, or even 100X savings if you have huge-scale applications.
So, there's a lot of interest in it. It is of note that the technology behind containers was in part developed by Google, so they do have an offering around this. Their product is called Container Engine, and it is their managed service for running Docker containers, although Docker is a separate company, so Google just calls them containers, for application virtualization. Now, in addition to setting up these containers, you have to manage them, and Container Engine uses Kubernetes, the open-source container orchestration system that Google originated, for container management.
All these things are beta as of this recording, but I expect they're going to go to GA pretty shortly. Let's head over to the console and take a look at these. So, in the console, you want to click on the Menu, and then you want to click on Container Engine. Now we can Create a container cluster or Take the quickstart. And I think the quickstart is really a great way to learn the basics of containers in GCP, so let's do that. Now, there is an advanced tutorial that we're going to go through in a subsequent movie, but what I want to start with first is a more basic tutorial.
So, I'm actually going to cancel this one. And then I'm going to go to the tutorials by clicking this drop-down menu. And I'm going to go here and click on Try Container Engine. So what we're going to be doing here is we're going to use the inline Google Cloud Shell to deploy a prebuilt Docker container image with a simple Node app on it. So, we'll be starting a cluster. We'll be deploying the app. We'll be using the Kubernetes configuration. And then we'll test the app, and we can remove it.
So we're going to go ahead and click Continue. Notice it's pointing to our first project here, and we're going to click Continue. Now what's happening at this point is that source code is being cloned into a Google Cloud Source Repository, and this source code is the basis of our application. So that's what we're going to work with next, because we've got our source code. That was the easiest possible app I ever coded, by the way; kind of kidding around. And over here we've got a container, and you can see that this container is part of a cluster called cluster-1.
Now, the underlying technology is a virtual machine, but do you remember the difference between containers and virtual machines? For containers, you only need one operating system for n number of containers, so long as you have enough resources on the machine. But, don't be confused by this. Even though you're using containers, they are sitting on top of virtual machines. It's just you no longer are dealing with the hypervisor to control the different virtualized instances. You're now dealing with the container manager. So, if I scroll down, you can see that we're going to have three containers here.
And we can monitor this through Stackdriver. So, I'm going to go ahead and click Create. And then I'm going to click Continue. Now, this is going to take up to a minute or so, maybe a bit longer the first time, to set up. Containers use caching, and that's a great aspect of them; it allows applications to scale up more quickly, and that's another reason people use them. So, while we're waiting, we're going to look at the source code. We're going to click on Menu. And go over here.
And click on Development. And the reason we're doing this is because we want to understand the required configuration files. So in this case it's a Node app. So we're going to click on server.js. And basically this is just the functionality. It's kind of a Hello World, Hello Kubernetes. And then we're going to go back up, and then we're going to look at the Dockerfile. So if you've never seen a Dockerfile before, it's just a really simple text file where you put the configuration for your container.
And again, as we're getting started, this is the most basic of all containers. You basically say FROM node, EXPOSE a certain port, COPY the source file, and then run node to start the server. That's really all there is to it. The great thing about containers, in addition to scalability, portability, and more effective use of your virtual machine resources, is you have the setup, the configuration, as code that you can check in and check out, so you can replicate the environments. In other words, it's easier to replicate an application than an entire VM.
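A minimal Dockerfile along those lines might look like this; the base-image tag, port, and file name here are assumptions based on the description above, not the tutorial's exact file:

```dockerfile
# Start from an official Node.js base image (a specific tag is safer in practice).
FROM node:6
# The port the sample app listens on.
EXPOSE 8080
# Copy the app source into the image.
COPY server.js .
# Start the server when the container runs.
CMD ["node", "server.js"]
```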
So, this has uses across different industries. One I've been working with is bioinformatics, where you have researchers on their desktops doing some sort of research with a certain configuration, and as part of the papers they're publishing, they're publishing a Docker container so that their research can be replicated. It's really exciting technology in many ways. And I'm going to click Continue. All right, and now we're going to go over to the cluster again to see if it's done. So, we're going to go over to Container Engine. It's still creating, so we need to wait for this to complete before we can deploy it and view it in the Cloud Shell.
So now you can see by the green check mark over here that the cluster is available. So, we're going to work with gcloud, and we're going to do it with the included shell. And we do that by clicking this button right here. Now, this is a great tool that's available throughout GCP. The idea is, and you might remember from a previous movie, you can have the gcloud command-line tool installed on your local machine, but what you're looking at here is a new virtual machine that Google has spun up, with the gcloud tool and the Cloud SDK installed on it.
So what you can do is just work with this, and you don't have to worry about authenticating, because you're automatically authenticated with your login. It's just faster, so I tend to use this. But some people do prefer the installed client, and we covered that in a previous movie, so you could do it either way. So in this case, we've got the Cloud Shell open. And now what we're going to do is clone that sample code. So we're going to just copy this, then paste it into the Cloud Shell, and that's just going to copy the sample code. And then we're going to clone it into our own repository.
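The clone step in Cloud Shell looks something like this; the repository URL and directory name here are placeholders, since the tutorial pane supplies the exact command to copy:

```shell
# Clone the sample Node.js app (URL is an assumption, not the tutorial's exact repo),
# then move into the tutorial directory.
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples.git
cd kubernetes-engine-samples
```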
We basically got the code out of a Google source, so we need to put it in our repository so we can deploy it. Then we're going to switch to the tutorial directory. And now we're going to get the gcloud credentials for the cluster. So you notice before we were running basically shell commands, other than the git command here. Now we're going to run a gcloud command. So you can see it's gcloud container clusters get-credentials for our particular location. Another aspect of working in this shell that's great is it automatically picks up your defaults.
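The credentials command is along these lines; the cluster name and zone are assumptions based on the quickstart defaults:

```shell
# Fetch credentials for the cluster so kubectl can talk to it.
# cluster-1 and us-central1-a are assumed defaults; substitute your own.
gcloud container clusters get-credentials cluster-1 --zone us-central1-a
```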
So your default project, your default region, and so on and so forth, you don't have to set all those things. I find it to be faster. So, now we're going to use a Docker command. So we're going to use Docker to build the application image. So we're going to go ahead and run this Docker command. And what this is doing, on the machine where it's running, is going out and bringing down the base image. Now, the image is going to be a whole lot smaller, 'cause it's just Node in this case, than if you were, for example, installing a Linux OS on each machine.
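The build command tags the image for Google Container Registry; PROJECT_ID and the hello-node name are placeholders for your own project and the tutorial's image name:

```shell
# Build the container image from the Dockerfile in the current directory
# and tag it for Google Container Registry (gcr.io).
docker build -t gcr.io/PROJECT_ID/hello-node:v1 .
```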
And this is part of why people use containers: because it's faster. So, now that's built, and now we need to push it up to our registry so the cluster can pull it. So notice the command: gcloud docker push. So that's a built image, and now we're pushing it up, and that's doing our application virtualization. So now we're running Kubernetes, which is the container manager. So kubectl, basically Kubernetes control: run this app, our hello-node, from the image, on port 8080. Now we need to expose our container to the public, because by default containers are not exposed; otherwise we couldn't hit the website, of course. And now we need to list the services and look for the hello-node service, because we need to get the external IP address so that we can hit our website. So this is going to take a minute, because we've asked to expose it to the public internet, and so again, there's some sophisticated service integration going on here. You can see that hello-node right now has a cluster IP but does not have an external IP.
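Put together, the push, run, and expose steps might look like this; the image tag, deployment name, and port are assumptions based on the tutorial, and PROJECT_ID is a placeholder:

```shell
# Push the built image to Google Container Registry.
gcloud docker -- push gcr.io/PROJECT_ID/hello-node:v1

# Run the image on the cluster as a deployment named hello-node.
kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080

# Expose the deployment behind a public load balancer.
kubectl expose deployment hello-node --type=LoadBalancer --port=8080

# List the service and watch for the EXTERNAL-IP column to fill in.
kubectl get services hello-node
```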
When you run the Kubernetes command to expose the deployment, that asks GCP to assign an external IP address to this container node so that we can look at our service. And this takes up to a minute or so for the external IP address to get assigned to our container. So now we have our external IP address, and that's going to be on port 8080. So that's going to be 104.155.151.8. And it's as simple as that.
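Once the external IP shows up, you can hit the service directly; the IP below is the one from this recording, so yours will differ:

```shell
# Request the app over the public internet on the exposed port.
curl http://104.155.151.8:8080
```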
So, we have our application up and running on the web, containerized, and so now, we can scale this thing. This is the beauty of containers. So right now we have a single container, and we want to scale this up. So let's use the Kubernetes command here. We have to break out of this. And then we're scaling the deployment up to four. And now we want to see if that scaling worked. So let's run the Kubernetes command to get the values for our current deployment and look at that.
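The scale-and-verify step might look like this, assuming the deployment is named hello-node as in the tutorial:

```shell
# Scale the deployment from one replica to four.
kubectl scale deployment hello-node --replicas=4

# Verify the new replica count for the deployment.
kubectl get deployment hello-node
```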
Four. Wow, that was fast. Yep. That's why you use containers rather than virtual machines for applications that need to scale quickly. And there we have the containers creating. And then we can modify and update our application as well. But I think that you get the idea of how this works, and in a subsequent movie, we're going to actually work with a more sophisticated application that has front-end web-serving containers and a back-end database, so that you can start to understand the power of, and the reasons for using, Google Container Engine and its Kubernetes management service.
Released 3/20/2017