I’ll start at the beginning: hardware wasn’t always the same from computer to computer. Each major vendor had entirely different hardware and very different ways of doing things. Unix was a set of programs that did the typical stuff every computer needed to do: changing directories, listing files, and so on. But each vendor still had to have its own kernel to talk to its specific hardware. I mean… they all had their own processors and everything! Maintaining all that custom software became cost prohibitive, so vendors converged on a standard set of tools. I’m hinting at “open source” here…
Enter a university student in Helsinki named Linus Torvalds. He wrote a kernel, Unix-compatible (POSIX, really), for general-purpose IBM PC clone hardware and called it Linux. Pretty cute. That hardware family went on to dominate the market. Linus hasn’t stood still; he’s kept adapting his kernel ever since, and he’ll go down as one of the most influential software developers of all time. He also wrote Git, because he was peeved at the source control systems available for kernel work… that’s a particular breed of person, guys like him, Stallman, etc. I’m talking hallowed software guys here. True pioneers. These guys would write a tool because they thought the exercise would be fun and because it might save them time and money later, then publish it as “open source”. Anyway…
So what does a kernel do? There’s a short list of technical jobs: managing memory, booting, talking to hardware through drivers and firmware, scheduling processes on the processor, prioritizing jobs, handling interrupts, and so on. It does NOT do much you’ve ever dealt with directly. It’s very low level. Almost everything you’ve ever actually seen lives in the tools on top of the kernel.
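To make that split concrete, here’s a minimal sketch in C (my own illustration, not code from any particular Unix). The program itself is an ordinary user-space tool in the spirit of “list the files”, but every line that actually touches the disk or the screen ends up as a system call that the kernel services on its behalf:

```c
/* A tiny "list the files" tool. The program is plain user-space code;
 * the kernel's role is hidden inside the library calls: opendir() and
 * readdir() ultimately issue system calls that the kernel services by
 * talking to the disk driver and the filesystem for us. */
#include <stdio.h>
#include <dirent.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : ".";

    DIR *dir = opendir(path);        /* under the hood: a syscall to open the directory */
    if (dir == NULL) {
        perror("opendir");           /* the kernel reported an error, e.g. no such directory */
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {  /* the kernel hands back directory entries */
        printf("%s\n", entry->d_name);        /* printf bottoms out in a write() syscall */
    }

    closedir(dir);
    return 0;
}
```

The tool is the part you see; the memory management, disk access, and scheduling that make it work are the kernel’s job.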
I digressed a bit into open source because the OG Unix machines cost hundreds of thousands of dollars. Open source democratized computing, Linux democratized server hardware, and together they really helped birth the Information Age. Kinda cool.