From the course: Cisco DevNet Associate (200-901) Cert Prep 4: Application Deployment and Security

How to deploy applications in the public cloud


- [Presenter] If we write application code that is intended to be used by other people, we need to deploy that application in a location that is accessible to others. In this section, we will focus on how to deploy our code in the public cloud. Let's discuss the evolution of infrastructure, as it has a direct correlation to how we deploy and manage our applications. Infrastructure has certainly been evolving at breakneck speed. As such, we should understand and consider security, scalability, maintainability, monitoring, and testing in each of these different scenarios. In the beginning, we had to build everything ourselves. We typically had to designate a location in our office or building for our infrastructure equipment. This could be called the server room, the IT closet, or the main or intermediate distribution frame. We needed to build the physical facility, allocate appropriate power, design and build proper cooling, buy the racks, design the network, install operating systems, cable everything up, and then, finally, we could deploy our software. In short, we did everything. Then some company came up with the idea of, hey, we could probably combine everybody's need for facility space, power, networking, and cooling, and rent out portions of the facility to other companies. This allows the customer to co-locate their equipment in rented space. The end customer loses a bit of control, but gains significant cost savings and reduced management overhead with the use of co-location. In this case, customers do not have to worry about the facility infrastructure, but they still own the servers, and they can still deploy software on the server hardware that they own. We could think of the two models as the equivalent of either building your own house or renting a place. Right around 2005, CPU vendors started to add hardware virtualization support to x86 systems.
When we couple that with the maturing of virtualization software such as Xen and VMware, we can further divide the computer hardware resources into smaller, shareable chunks without impacting performance. Xen and VMware are examples of hypervisors. According to Wikipedia, a hypervisor is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called the host machine, and each virtual machine is called a guest machine. This sharing of resources can be further expanded by container technology. We will learn about containers later in the course. For now, we can think of containers as a way to further isolate and share computer resources. Infrastructure virtualization allows a whole new technology paradigm, called cloud computing, to take place. When we talk about cloud computing, we're really talking about aggregating hardware resources, such as servers, networking, and storage, and partitioning them into smaller chunks to be shared among users. These resources are generally provided to users on demand. The users, in this case, could be developers who need to deploy their code. They can make their requests automatically via a web portal, CLI, or other tools. Upon resource allocation, they can then deploy their code and their application. Note that, in this case, the keys for the developers are on-demand access and no management overhead. One of the dominant cloud computing models is the public cloud. When we talk about public clouds, we're referring to companies such as Amazon AWS, Microsoft Azure, or Google Cloud that allow us to provision services over the internet on demand. The request could be made via a management portal, command-line interface, or software development kit over the public internet. We generally pay only for the time when we're using these resources, and we do not need to worry about managing the infrastructure.
We can access these resources in multiple formats. If it is just a software endpoint, we refer to it as Software as a Service, or SaaS. If we're using a platform to deploy our code without concerning ourselves with virtualized resources, we call it Platform as a Service, or PaaS. If the resources resemble virtualized infrastructure, such as virtual machines or virtual networks, we refer to it as Infrastructure as a Service, or IaaS. Here's an example from Microsoft Azure of the different public cloud models that we have talked about. For deploying our applications, we can safely disregard Software as a Service, because it is mainly for end users. As far as deploying code, we would use either Platform as a Service or Infrastructure as a Service. Between the two, Infrastructure as a Service is by far the more popular, because it offers the right balance between flexibility and ease of use. Personally, Infrastructure as a Service in the public cloud is my primary way to deploy applications. One of the leading public cloud providers is Amazon AWS. Here's an example of the AWS management web portal. We can pick a service to deploy, such as virtual machines, storage, or networking. Besides this web portal, we can also provision and manage AWS resources via the command line or a software development kit. If we think about our code deployment process, it generally consists of four stages: development, testing, staging, and production. Once we finish developing the software locally, we run tests to try to catch unforeseen errors; then we deploy the code to a staging environment, where it can be tested by beta users. Once we're happy with how the code is running in the staging environment, we can push the code into the final, customer-facing production environment. In the case of the public cloud, in reference to these deployment stages, development is generally done locally by the developer.
Testing is done in a hybrid mode: both locally, using tools such as unit tests, and by reserving cloud resources for integration testing. Both the staging and production deployments are done entirely by reserving resources in a public cloud. Generally, the staging environment runs at a smaller scale than the production environment, because it does not handle as much load. Since public cloud resources can be allocated on demand, this offers great flexibility to the code deployment process.
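As a minimal sketch of the local testing stage described above, here is what a unit test might look like using Python's built-in unittest module. The application function and test names here are hypothetical, chosen only to illustrate the workflow.

```python
import unittest

def make_greeting(name):
    """A hypothetical application function we want to verify before deployment."""
    return f"Hello, {name}!"

class TestMakeGreeting(unittest.TestCase):
    """Local unit tests: the first quality gate in the deployment pipeline."""

    def test_basic_greeting(self):
        self.assertEqual(make_greeting("DevNet"), "Hello, DevNet!")

    def test_greeting_is_string(self):
        self.assertIsInstance(make_greeting("Cisco"), str)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run.
    unittest.main(exit=False)
```

Tests like these run locally in seconds; only after they pass would we spend cloud resources on integration testing in staging.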
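To illustrate the software development kit route for provisioning AWS resources mentioned earlier, here is a sketch that assembles the parameters for launching an EC2 virtual machine with the boto3 SDK. The AMI ID is a placeholder, and the actual boto3 call is shown commented out because it requires an AWS account and configured credentials.

```python
# Sketch: provisioning a virtual machine on AWS via the SDK (boto3).
# The AMI ID below is a placeholder, not a real image.
def build_run_instances_params(image_id, instance_type="t2.micro", count=1):
    """Assemble the keyword arguments for boto3's ec2.run_instances call."""
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

params = build_run_instances_params("ami-0abcdef1234567890")

# With boto3 installed and AWS credentials configured, we would then run:
#   import boto3
#   ec2 = boto3.client("ec2")
#   response = ec2.run_instances(**params)
print(params)
```

Because the same request can also be made through the web portal or the command line, teams often script the SDK path like this so that staging and production environments are reserved on demand, repeatably.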
