Computers technically don’t “need” a GPU. In fact, early home computers didn’t have them. The American NES and the Japanese Famicom (the same console under two names) were among the first to have a dedicated graphics chip, which at the time was called the PPU (Picture Processing Unit).
The main advantages of having a dedicated graphics chip are…
1.) The GPU can process graphics information while the CPU handles other tasks, speeding things up by acting as a digital carpool lane. In systems without this, programmers had to decide how much of a program’s processing time went to running the game logic, how much went to graphics, and how much went to sound. Time spent on any one of these was essentially taken away from the other two.
A common side effect of this is that older programs (particularly games) would frequently only use part of the screen, since fewer pixels or tiles meant less time spent on processing. Some early games, on the Atari for example, would even blank out strips along the left and right edges of the screen to further reduce the graphics workload.
2.) Because it only handles graphical data, the GPU can be optimized in ways a general-purpose CPU can’t, allowing it to perform those calculations much faster than, say, simply adding a second CPU would. Graphics work mostly means doing the same simple operation on millions of pixels or vertices, so the GPU is built to run huge numbers of those little operations in parallel (there’s a rough sketch of this idea below). The trade-off is that it’s inefficient at, or even unable to perform, most other kinds of calculations.
I remember reading a blog post, maybe ten or fifteen years ago, where a 3D render was about 30× faster on the GPU than on the CPU.
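To make that a bit more concrete, here’s a rough sketch in CUDA. It’s purely illustrative (made up for this answer, not taken from any real game or from that blog post): brightening an image means doing the same tiny calculation on every pixel, and where a CPU walks through the pixels one at a time in a loop, the GPU launches one thread per pixel and runs enormous batches of them at once.

```cuda
#include <cuda_runtime.h>

// Hypothetical example: brighten an image by a fixed amount.

// CPU version: one core, one pixel after another.
void brighten_cpu(unsigned char* pixels, int count, int amount) {
    for (int i = 0; i < count; ++i) {
        int v = pixels[i] + amount;
        pixels[i] = v > 255 ? 255 : v;   // clamp to the maximum brightness
    }
}

// GPU version: each thread handles exactly one pixel.
__global__ void brighten_gpu(unsigned char* pixels, int count, int amount) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) {
        int v = pixels[i] + amount;
        pixels[i] = v > 255 ? 255 : v;
    }
}

int main() {
    const int count = 1920 * 1080 * 3;       // one Full HD RGB frame
    unsigned char* d_pixels;
    cudaMalloc(&d_pixels, count);
    cudaMemset(d_pixels, 100, count);        // dummy image data

    // Launch enough 256-thread blocks to cover every pixel.
    int blocks = (count + 255) / 256;
    brighten_gpu<<<blocks, 256>>>(d_pixels, count, 40);
    cudaDeviceSynchronize();

    cudaFree(d_pixels);
    return 0;
}
```

The GPU version only wins because every pixel can be processed independently; for work that can’t be split up like that, the CPU is still the better tool, which is exactly the trade-off described in point 2.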
TL;DR? A GPU allows more data to be processed at once, and it’s specifically optimized for the heavy, repetitive math that graphics calculations require.