Imagine you wanted to list every word in a book and count how many times it appears.
A regular computer you have at home would be like 4 people sat together splitting the workload: 3 of them take a third of the book each, and one person organises the whole activity (the equivalent of the fourth core in the CPU being needed to run Windows).
With a supercomputer, it’s more like 10,000 people. Each person individually isn’t much faster than the 4 you have at home (in fact, they might be a little slower), but because each person only has a paragraph or so to count up, the sheer number of them means they get through The Lord of the Rings in 10 minutes rather than the 6 hours it takes the 4 at home.
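If you like seeing it in code, here’s a rough Python sketch of that same idea (the text, number of workers, and function names are all just made up for illustration): each worker counts the words in its own slice of the book, and one merging step at the end plays the role of the organiser.

```python
# A minimal sketch of "split the book among workers", using Python's multiprocessing.
from collections import Counter
from multiprocessing import Pool

def count_chunk(chunk):
    """Each 'person' counts the words in their own slice of the book."""
    return Counter(chunk.split())

if __name__ == "__main__":
    book = "the road goes ever on and on " * 1000  # stand-in for the book text
    words = book.split()

    # Split the work among 4 "people" (processes), like the 4 cores at home.
    # For simplicity this ignores any leftover words at the end.
    n_workers = 4
    chunk_size = len(words) // n_workers
    chunks = [" ".join(words[i * chunk_size:(i + 1) * chunk_size])
              for i in range(n_workers)]

    with Pool(n_workers) as pool:
        partial_counts = pool.map(count_chunk, chunks)

    # One "organiser" merges everyone's tallies back together.
    total = Counter()
    for c in partial_counts:
        total += c
    print(total.most_common(3))
```

A supercomputer does the same thing, just with thousands of workers instead of four.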
The catch is that this is only truly effective when you have tasks that can be easily split up among lots of people, like counting words in a book. What if the task is harder to spread across thousands of people?
For example, imagine the task isn’t to count words, but to count how many times the narrative theme of “loss of innocence” is referenced in the book. You can’t easily split that among 10,000 people, since one paragraph per person isn’t enough context to give a proper answer. It will probably be split by chapter instead, at which point you have 50 people doing work and 9,950 people sat around doing nothing.
This is why supercomputers are actually sort of bad at running the kind of jobs you run on your computer, like playing games. Not every problem the game has to solve, such as running an AI routine for an enemy, can be easily broken into 10,000 separate chunks to calculate. Most of the supercomputer will sit idle, and so it won’t be any faster than your PC or PS5.
Your devices already have multiple CPU cores; a supercomputer just takes that idea to the extreme.
The same is true for CPU vs GPU: the CPU is designed as one buff processor that does big, multi-role calculations, while the GPU is hundreds or thousands of very small processors designed to do only a few types of calculations. One way to think about it is division and multiplication: you can do them the clever way, with techniques like long division, or you can just keep adding or subtracting until you reach the number you want. Both get you to the same answer, but one requires more brainpower up front, while the other requires a lot less brainpower spread over more steps (see the toy sketch below).
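Here’s that “clever vs simple-but-repetitive” trade-off as a made-up toy example (it’s not how a GPU actually works internally, just an illustration of the two approaches reaching the same answer):

```python
def multiply_directly(a, b):
    # One "big" calculation, like a beefy CPU core doing the clever method.
    return a * b

def multiply_by_adding(a, b):
    # Many tiny, simple calculations, like lots of small GPU-style workers.
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply_directly(7, 6))   # 42
print(multiply_by_adding(7, 6))  # 42
```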
You can technically do anything a supercomputer does on a normal PC/laptop/phone (crypto mining is one example), but the reason a supercomputer exists instead is primarily its ability to communicate between its separate compute nodes (think cores, in a PC) fast enough that calculations which rely on each other’s results can be done almost at the same time.
For situations where you don’t need the calculations to be done within a short timespan of each other (think the opposite of the weather node example u/sighthoundman used), it’s often more efficient to use cloud computing. This is the same concept as a supercomputer, but because the connection is limited to the internet, it’s slower (both in latency and raw throughput), less secure, or less reliable (more prone to errors). Some of these workloads still run on effectively supercomputer-type hardware, but split up as more of a timeshare, and/or distributed to places where energy costs make it more economical.