What even is a mainframe? I’ve been involved in software and infrastructure for 20+ years, I understand data centres, servers, services, microservices, databases, HA/SPOF, clusters and all the cloud equivalents, but never came across a mainframe. It’s almost a legend – are mainframes a real thing? What do/did they do? What’s happening to them? Where are they?
Mainframe computers were essentially giant servers that users connected to using a thin client / terminal. Computers were expensive, so you would buy one large one and give users a monitor and keyboard (later a mouse) to use it. In the late '80s and early '90s the price of computers dropped, so you started seeing more computers and fewer mainframes. Server clusters started replacing mainframes because they were easier to manage.
Before the cloud, which is millions of small servers running distributed applications, there were large multi-CPU systems that were essentially one platform running all storage/compute with multiuser access. These were mainframes.
It’s worth noting that these were often used as computing bureau services for corporates to run their applications, because it was too expensive for every company to have its own mainframe. AWS and the other cloud providers are in many ways reinventions of these kinds of services. This is where ideas like virtualisation, multi-user access, and networking came from.
It’s a computer with:
* High redundancy
* High stability
* High throughput
* Ability to hot-swap any component (i.e., change any component while the computer is still running).
Typically, any server or other information-processing system with 99%+ uptime requirements uses mainframe architecture.
P.S.: OK, maybe not every high-uptime server these days. There have been a whole load of innovations in distributed computing which make it easy to just… use more machines. But they used to.
Mainframes are a very specific type of machine, from a somewhat different time. Before the modern status quo of using a swarm of (comparatively) small servers, we had the era of one massive, individual server. And I do mean massive. IBM’s current [z16 mainframes](https://www.ibm.com/products/z16) can take up to 40TB of RAM.
Given you’re aware of the issues with single points of failure and high availability, it should be fairly obvious that having just one single server with everything on it is dangerous as hell: you’re one bad crash away from losing everything. Mainframes are designed around mitigating that problem: everything is redundant and hot swappable. Never mind hot-swapping discs the way you do now, or even RAM. On these things you can drain all load from one CPU to another so you can replace the CPU while the whole machine is still running.
Mainframes are basically servers that prioritise high availability over anything else. They support hot-swapping components without having to shut down the system. Mainframes also prioritise throughput over raw compute performance, so they are often used to handle things like batch runs, transactions and the like.
A good overview: [https://www.youtube.com/watch?v=ouAG4vXFORc&t=1228s](https://www.youtube.com/watch?v=ouAG4vXFORc&t=1228s)
Mainframes as a concept come from the era of computing that preceded the rise of personal computers (PCs) in the late 80s/early 90s. Back then, computers were something you’d often only encounter inside institutions such as universities or major companies, and such places might only have a single one. They weren’t devices you’d use for everyday tasks such as word processing, but rather data processors designed to do calculations on huge sets of data and not much else. With perhaps just one computer available in an institution full of people wanting to use it, having just one workstation was also inconvenient, so there’d often be multiple terminals spread around the place, none of which were much more than a screen and keyboard connected to the mainframe, with no computing ability of their own.
In that sense, a mainframe might sound like a server, but where modern servers often just run generalised server operating systems (such as some flavor of Unix or Windows) that could in theory execute any program you could think of, like a desktop PC, mainframes were and still are much more specialised and optimised for data processing specifically, and often more powerful than regular servers (though not quite at the level of a supercomputer).
There are still a lot of mainframes in use today, frequently because the tasks they need to carry out (such as processing all the daily financial transactions for a bank) haven’t changed significantly in decades, and it’s riskier to try to replace them with something entirely new (and less well-tested) than to just keep maintaining the mainframe. In general, modern mainframes are engineered with an eye toward redundancy and reliability of the kind you want when it is important that the tasks they carry out do not fail, ever.
Have you used…containers, VMware, distributed storage, multi CPU systems, etc?
Thank a mainframe and Unix: they were doing all of those things in the 80s. It isn’t surprising you never encountered one; they are hilariously expensive, but they were also designed to run for 25 years without shutting off. You can’t run a Dell R series for a year without having to reboot it. They are engineered for maximum uptime and flexibility. You can literally replace processors while the machine is running.
In a modern mainframe, and IBM does still sell them, you can essentially run an entire data centre, including storage, in one or a couple of racks. They have some innovations that make them much more efficient at high-volume compute; the processor and storage buses are far more efficient. So if you are NASA and you are running a model of the observable universe, your crappy Java application on Red Hat plus an HP blade isn’t going to do anything for you.
So, I was raised on an IBM mainframe running z/OS with TSO/ISPF, mostly running SAS for an insurance company.
Others have described the technical side, but working on it was actually kinda cool. You could do great word processing with IBM/DCF and print your documents on a laser printer capable of printing proportional fonts. Man, that looked cool at a time years before private dot-matrix printers were even a thing.
And you could include your graphs from SAS/Graph too. Absolutely not WYSIWYG, as your monitor was an 80×24 monochrome [3278 terminal](https://en.wikipedia.org/wiki/File:MNACTEC_keyboards_(31123571395).jpg). So you’d print and then wait for the internal post system to deliver it to your desk. Later I was promoted to a 3179G terminal.
Most people will agree that the 3278 keyboard has never been matched. Each key individually weighted. Solid and delightful.
Another typical pause in your work day was if your interactive job addressed a file or dataset on tape. Then you’d wait for the tape guy to mount the tape and (re)load the data to disk. Later that was done by robots, which sped up the process.
And you could send messages to other users with the TSO SEND command, years ahead of email and text messages. Especially fun if you accidentally messaged every user logged on.
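From memory, sending someone a message looked roughly like this (a sketch only; the user ID JDOE is made up, and exact operands varied between installations):

```
SEND 'MEETING AT 3 IN ROOM 12B' USER(JDOE)
```

If they were logged on, the text popped up on their terminal more or less straight away.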