docker, kubernetes, ci/cd in the lifecycle of APIs



2 Answers

Anonymous

Originally, computers ran software directly: if you wanted to run a certain program, you needed the right operating system, and you had to install its dependencies (other software that your software needed to run) on a physical computer.
Later, someone realized it would be easier to use a virtual machine: basically emulating a computer on your computer (for example, simulating a Linux machine on your Windows machine). Servers with enough power could handle several of these virtual machines, so you could run a couple of these computers on one physical machine and didn’t need separate hardware for each of them.
If your software is small enough, however, the virtual machine added so much overhead that it didn’t really make sense to emulate a whole operating system just to run it. **Docker** (or containers more generally) is a way to create the minimal package for running your software. You deliver, for example, the smallest possible Linux distribution, bundle it with all the needed software (if you need, say, a database and tools to manage it, that software is included in the bundle), and add everything required to run your application, all in one neat package. It is everything you need, but nothing more.
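That “neat package” is usually described in a Dockerfile. A minimal sketch, assuming a small Python app (the base image, file names, and command here are illustrative, not from the answer above):

```dockerfile
# Start from a small Linux base image (Alpine is a common minimal choice)
FROM python:3.12-alpine

# Copy in only what the application needs
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The command the container runs when it starts
CMD ["python", "app.py"]
```

Building this with `docker build` produces an image: the self-contained bundle that runs the same way anywhere Docker is installed.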

This works great if you have a small piece of software, but for larger, more complex applications it might not be enough. Maybe you need multiple instances of your software running, maybe one of your containers has failed somewhere, maybe there is a surge in incoming traffic, and so on. To avoid handling all of that manually, you need an orchestration tool like **Kubernetes**. If you see a Docker container as a small bundle that can run anywhere Docker is installed, Kubernetes is the overseer for these little containers in a larger, multi-system network of interdependent containers. While each container knows “I am x, my task is y”, Kubernetes knows how many copies of x it needs, how x works together with z, and what to do if you suddenly need 10x the amount of y performed.
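Kubernetes learns “how many copies of x it needs” from declarative manifests. A hedged sketch of a Deployment manifest, with made-up names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api                  # hypothetical name
spec:
  replicas: 3                   # "keep 3 copies of x running"
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: my-registry/my-api:1.0   # a container image
          ports:
            - containerPort: 8080
```

If a container crashes, Kubernetes notices only two replicas exist and starts a third; if traffic surges, you change `replicas` (or let an autoscaler do it) rather than starting machines by hand.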

**CI/CD** (continuous integration/continuous delivery) isn’t a technology in that sense, but more of a working style. In the past (and often still today) software had big releases: there might be a new version every couple of months or so. With CI/CD, you are constantly integrating and deploying the software, delivering changes all the time instead of only at certain release dates.
This has the advantage that you can respond more quickly, but it also has drawbacks: the number of updates increases, which means more administrative work, less time for testing each change, and a greater chance of shipping something that shouldn’t be shipped.
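As one concrete (hypothetical) illustration, a GitHub Actions-style pipeline that tests, builds, and deploys on every push; the branch name, build commands, and deploy script are all assumptions:

```yaml
# Runs on every push to main: test, build, then deploy if tests pass
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the test suite
        run: make test                              # hypothetical build target
      - name: Build the Docker image
        run: docker build -t my-api:${{ github.sha }} .
      - name: Deploy
        run: ./deploy.sh                            # hypothetical deploy script
```

Because each step runs automatically on every change, releases stop being rare events and become a routine side effect of merging code.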

Anonymous

I’ll try to really simplify this.

**Docker:** let’s say you have a program and it runs great on your computer at home. Your neighbour, though, has a different computer running a different operating system and different versions of little things, and your program won’t work on their computer for some reason. Wouldn’t it be great if you could virtually run an identical copy of a specific computer setup on theirs so the program works as expected?

**Kubernetes/K8s:** let’s say you want a whole bunch of these artificial computer images to work together in a specific way. Maybe you’re making a web app that has a bunch of separate services that talk to each other. You *could* set them up one at a time manually. Or, with K8s, you could write out a blueprint and say, “Orchestrator, please make an environment that looks like my blueprint.” And it does. Even better, it can detect when something stops matching (like a missing or deleted item) and restore it to match the blueprint. If you want to add or remove things, just publish a new blueprint.

**CI/CD** is about a couple of key ideas: getting code changes into production as quickly and frequently as possible, and doing it in an automated way. The reasoning behind the former is a very large topic with entire books dedicated to it, but a simple version is that small, frequent changes have better outcomes than big bulk changes. You automate because a machine is faster, safer, and more reliably consistent than a human; doing it by hand is also a waste of human potential.