CI/CD are hot concepts in software development. In this video, learn what the terms mean and what the difference is between the two.
- [Instructor] Let's talk about CI, also known as continuous integration. I like to think of this concept as being just what the name implies. Integration, in this case, is the integration of changes into your code base. It's continuous because it's being done by an automated system on demand. That might sound like a circular definition, so it might be easier to understand if we think about what the alternative to CI is.

In CI, ideally, you merge right after changes are complete. The alternative is periodically having some kind of merge event at a predetermined time, when some poor developer is tasked with combining all of the changes that were created by the other developers for that sprint.

In CI, you need to QA every change as soon as it's ready to merge. That requirement means manual QA before merge is pretty much out of the question; the continuous part of CI requires automated tests. Because each change is tested on its own, it's pretty easy to figure out which code change triggered a bug. You can even require that tests pass before code can be merged.

In non-continuous integration, QA happens once all the changes have been merged. That way, you can test how everything interacts and find any bugs. If you think about it, this is more efficient in some ways, especially if QA involves a lot of manual work. The problem is that any bugs are much harder to track down because of how many code changes have happened; it's hard to know exactly where the bug is.

In CI workflows, there's no inherent deadline to the process; it's much more driven by when the work is ready. Obviously, deadlines still exist. But in a CI workflow, the changes that happen right before the deadline are often minor, last-minute bug fixes and small cleanup changes. The major changes have been merged over the course of the sprint as the features were developed. Finally, the non-continuous way of doing things is heavily calendar- and deadline-driven.
All code changes need to be ready to merge by a certain date, and because the change in code is so big, it's pretty much scheduling a crisis for every release.

If you've ever seen a documentary about archeology, there's a good chance you saw them sifting through dirt on an archeological dig. That sifting process is a good metaphor for automated testing: each layer of testing is like a progressively finer sieve that catches smaller particles. They shake the dirt through a big sieve first to pull out larger things, like rocks to throw away and large pieces of pottery to save. Then the medium-sized holes catch the next level of artifacts. Eventually, the entire team can gather around the fine mesh to catch things like individual beads or tiny fragments of pottery and let all that unwanted dirt and dust pass through. The main point of this metaphor is that you wouldn't start sifting with the fine mesh: it would take forever, the mesh would probably be damaged by the big rocks, and you can't have the entire team picking through the big stuff. It's the same with testing. You want your most expensive tests, in terms of time or computing power, to run last. Catch the big, easy stuff first and then move on.

I like to break testing down into three or four categories. This isn't a course in automated testing, so I'll be pretty general here. First is syntax testing, which is just making sure that your code is actually valid in the language you're using. Linting is similar; it's using a tool designed for the language you're writing in to enforce a particular style. I call those one level because they're functionally similar: both test the text of your code. You could also include static code analysis with a tool like SonarQube at this level. The next level is unit testing. I also lump integration testing in here, but you could make that a separate level below. Unit testing focuses on individual units of code.
For example, testing a function with various valid and invalid arguments and comparing the results to the expected output. Integration tests focus on a larger scope: does this feature work as expected, does the API call return the expected output, and so on.

Finally, acceptance testing is the fine-mesh sieve of testing. These tests might be similar or even identical to your integration tests. The important difference is that they run in an environment that is as similar to production as possible, simulating real user behavior. Acceptance testing is all about simulating the real-world scenarios your code might encounter.

The reason you want these separate levels is that tests should happen in a pipeline. If there's a syntax error in your code, it would be nice to find out within a few seconds of submitting your merge request instead of waiting several minutes for an entire suite of tests to run. Your tests should fail early and fail often. They should fail early in that they quit as soon as something goes wrong, so you can quickly fix the issue and retest. They should fail often in that it's often more efficient to just run your tests than to meticulously look over your code for bugs before submitting a merge request. This also means you need to put time and effort into writing comprehensive tests, but that investment will pay for itself many times over when it lets you develop more quickly and trust that the code is good. I'd also add that writing code with a good, solid CI system and tests is just more fun. It takes away a lot of the worry that some change will break things, which is very freeing.
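The "fail early" pipeline described above might look something like this in a `.gitlab-ci.yml`. The stage and job names, the lint tool, and the test commands are all illustrative assumptions, not something from the video; the key point is that GitLab runs later stages only if every job in the earlier stages passed, so the cheap checks act as the big sieve.

```yaml
# Illustrative sketch of a staged pipeline; job names and
# commands are assumptions, adapt them to your project.
stages:
  - lint        # cheap syntax/style checks run first
  - test        # unit and integration tests
  - acceptance  # slowest, most production-like tests run last

lint:
  stage: lint
  script:
    - flake8 .              # fails in seconds on syntax/style errors

unit-tests:
  stage: test
  script:
    - pytest tests/unit     # runs only if lint passed

acceptance-tests:
  stage: acceptance
  script:
    - pytest tests/acceptance   # runs only if all earlier stages passed
```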
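To make the unit-testing idea concrete, here's a minimal sketch in Python. The `divide` function and its test are purely illustrative (they're not from the course): the test exercises the function with valid arguments and compares the result to the expected output, then checks that an invalid argument fails in a predictable way.

```python
def divide(a, b):
    """Return a / b, raising ValueError for a zero divisor."""
    if b == 0:
        raise ValueError("divisor must be nonzero")
    return a / b

def test_divide():
    # Valid arguments: compare actual output to expected output.
    assert divide(10, 2) == 5
    assert divide(-9, 3) == -3

    # Invalid argument: a zero divisor should raise ValueError.
    try:
        divide(1, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for zero divisor")

test_divide()
```

A test runner such as pytest would discover and run `test_divide` automatically; the explicit call at the end just makes the sketch runnable on its own.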