What a mainframe computer is and how it works


Anonymous

Mainframes are both an important part of the history of computers in general and a specific, rapidly disappearing type of modern hardware. Back when computers were first invented, they were huge, power-hungry, and very expensive. There was no way everybody was getting one on their desk, so organizations would buy a small number of computers, put them in dedicated rooms or facilities, and have their employees sign up for time slots to run their programs.

Eventually computers got sophisticated enough that multiple people could use them at once, and everybody got tired of carrying around stacks of punch cards, so terminals were invented. There were many different types of terminals, but all of them were essentially some variation on a way to display text plus a keyboard, wired to a computer in a different room but still fairly nearby. Now you have a mainframe.

As computers advanced and hardware became cheaper, more powerful, and smaller, development reached a crossroads. You could use these advances either to cram more and more processing power into a centralized computer, or to make a ton of smaller computers. Personal computers spawned from the latter path, but mainframe development never really died out.

With the market for personal computers becoming so much larger than the one for mainframes, economies of scale kicked in, and it very quickly became cheaper to get the equivalent amount of computing power out of a lot of smaller computers than out of one big one. The servers that power almost every site you visit aren't fundamentally different from the computers you're used to using; they just have a lot of redundancy and come in a form factor that's more conducive to sitting in a datacenter with tens of thousands of other computers.

However, mainframes still have their applications. Imagine you’re running a stock exchange. It’s really, really important that trades that interact with each other are processed in the correct order. This is fairly simple to achieve if you just have one server. However, what happens when you have more traffic than a single server can handle? If you were doing something like streaming video, you’d just add another server, but if you try to add another server to your exchange, what happens if two conflicting trades get sent to different servers?

Certainly, you can make the servers talk to each other and sync up, but that's both a lot of work and very slow. What would really be great is one big server, more powerful than any personal computer and capable of handling insane volumes of real-time data all in one place. Well, that's where a modern mainframe comes in. They're used for tasks such as airline reservation systems and credit card transaction processing.
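To make the ordering problem concrete, here's a minimal sketch in Python (with made-up trades and a toy matching loop; not how any real exchange is built) of the single-sequencer idea: as long as every trade flows through one queue on one machine, "which trade came first" is simply arrival order.

```python
import queue
import threading

# One in-memory queue acts as the sequencer: the machine that owns this
# queue defines the single authoritative order of trades.
trade_queue = queue.Queue()

def matching_engine():
    """Process trades strictly in arrival order, one at a time."""
    sequence_number = 0
    while True:
        trade = trade_queue.get()
        if trade is None:  # shutdown signal
            break
        sequence_number += 1
        print(f"#{sequence_number}: {trade}")
        trade_queue.task_done()

engine = threading.Thread(target=matching_engine)
engine.start()

# Many front ends can submit concurrently; the queue serializes them,
# so there is never any ambiguity about ordering.
for trade in ({"side": "buy", "qty": 100, "price": 10.00},
              {"side": "sell", "qty": 100, "price": 10.00}):
    trade_queue.put(trade)

trade_queue.join()
trade_queue.put(None)  # stop the engine
engine.join()
```

The moment that queue is split across two machines, "which came first" stops being a local fact and turns into a distributed consensus problem, which is exactly the slow, fiddly synchronization described above.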

Why are they dying? Because regular servers are always getting faster, and the software industry is always getting better at making them work together. Servers based on PC hardware will always be a lot cheaper than mainframes, so if something can be done on 100 servers rather than 1 mainframe, eventually it’ll be converted.

Anonymous

When technology was new, a company could only afford one actual computer, called a mainframe. Today it would just be called a server, but back at the beginning it took up an entire room and was expensive.

Every employee used a "dumb terminal" like a Digital Equipment Corp VT102, which had literally just enough logic inside to provide a text interface "on" the mainframe over a serial line (dial-up or a hard-wired "null modem"). So instead of network cables there were serial cables running to every desk. When actual desktop computers became a thing, everyone connected to the mainframe with a terminal emulator app over the same old serial lines.
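For a sense of how little a dumb terminal actually does, here's a rough Python sketch of that loop (assuming the third-party pyserial library and a hypothetical port name; real terminals were fixed-function hardware, and this Unix-only toy skips details like raw keyboard mode):

```python
import select
import sys
import serial  # third-party: pip install pyserial

# Hypothetical port and speed; serial terminal lines commonly ran at 9600 baud.
port = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=0)

while True:
    # Bytes arriving from the mainframe go straight to the screen...
    data = port.read(256)
    if data:
        sys.stdout.write(data.decode("ascii", errors="replace"))
        sys.stdout.flush()
    # ...and any keystroke goes straight back up the serial line.
    ready, _, _ = select.select([sys.stdin], [], [], 0.05)
    if ready:
        port.write(sys.stdin.read(1).encode("ascii"))
```

That's essentially the whole device: no files, no programs, no state of its own. Everything interesting happens on the mainframe at the other end of the wire.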

So it was very similar to current "cloud computing," where everything actually happens on a server somewhere else and cheap, low-powered "terminals" provide a "screen" but do no heavy processing themselves. They don't need much RAM or a fast, complicated CPU of their own, so they can be very inexpensive and simple to replace (the user environment lives completely on the server; if the access device breaks you simply swap in another clone, and no personal data needs to be transferred or backed up). The only real difference is that Remote Desktop provides a full GUI, while mainframe terminals were text-only.