This video introduces the concept and anatomy of Ops components and discusses the decomposition trade-offs associated with microservices.
- [Instructor] Hi, and welcome back to Decomposition with Microservices. In this video, we're going to look at what decomposition using microservices really means: it means components all along the software lifecycle. We'll talk about the anatomy of something called Ops components, and we'll cover some of the microservice decomposition trade-offs. So let's get started. Microservices are really components all along the software lifecycle, and the usual software lifecycle is that you design, build, and run your applications.
So, when we design our applications, we have something called design components, and we've been doing this for quite a long time now. Those design components are complexity units, data integrity units, feature units, and decoupled units. The same goes for the build phase and our development components: usually those are planning units, knowledge units, development units, and integration units. But now there's something new, and those new types of components are called Ops components.
Ops components are individual release units, individual deployment units, runtime units, and scaling units, and this is something unique to microservices and to cloud-native applications. So, I've introduced the term Ops component; what exactly is an Ops component? Well, an Ops component is an application that is packaged inside a container and that has several interfaces. It has inbound and outbound interfaces that usually speak some internet protocol like HTTP.
It has a starting interface, and it has a diagnosis interface. All of this makes up an Ops component. But there are some technology-driven constraints, of course: applications should not use kernel space, they should not listen on random ports, they should not require any exotic operating system, and their endpoints must be configurable. So, what are the microservice decomposition trade-offs then? Well, if you look at the path from dev components to Ops components, there are several levels, right? The level we all know is where one huge dev component, the system, maps to a single Ops component: the classic monolith.
Subsystems map to macroservices, components to microservices, and services to nanoservices. The further you go down this pyramid, the more flexibly you can scale. You get better runtime isolation, independent releases and deployments, and usually higher resource utilization, but at a cost. The further you go down this pyramid, the more latency you have, and you definitely have increased infrastructure complexity.
You have increased integration complexity and, of course, increased troubleshooting complexity if things go wrong. So the interesting question now is: how can we handle the complexity that comes with this decomposition? That is what we will cover in the next video, when we introduce the cloud-native stack. Hope to see you there.
This course was created and produced by Packt Publishing. We are honored to host this training in our library.
- Implementing Go HTTP Server
- JSON marshalling and unmarshalling of Go structs
- Implementing a simple REST API
- Using Docker workflows and commands
- Building a naïve Docker image
- Running a containerized Go microservice
- Kubernetes architecture and concepts
- Deploying a Go microservice to Kubernetes
- Implementing service descriptors
- Performing rolling updates