Software Rendering

How exactly does it work, and why do people say it’s “less resource intensive”?

5 Answers

Anonymous 0 Comments

It means the CPU is doing the work that the GPU could be doing. It’s not less resource intensive unless you’re talking about some specific resource that GPU rendering uses. Software rendering is usually included as a fallback option in the event there are compatibility issues with the GPU or driver that prevent the hardware acceleration from working properly.

It’s kind of like saying a car is less resource intensive than a train. There may be some tasks where that’s true, but it’s not the case for the tasks trains get used for.
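To make “the CPU doing the work” concrete, here is a toy software-rendering sketch in Python: the CPU loops over pixels and fills in a framebuffer (just a 2D array) itself, which is the per-pixel work a GPU would normally take off its hands. Every name and number here is illustrative, not from any real engine.

```python
# A toy software renderer: the CPU computes every pixel itself.
# All names and sizes here are illustrative.

WIDTH, HEIGHT = 80, 40

def edge(ax, ay, bx, by, px, py):
    # Signed-area test: which side of the edge (a -> b) is point p on?
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def draw_triangle(framebuffer, a, b, c, colour):
    # Walk every pixel in the triangle's bounding box and keep the ones
    # inside all three edges -- the same rasterization work a GPU does
    # for millions of triangles per frame.
    min_x, max_x = min(a[0], b[0], c[0]), max(a[0], b[0], c[0])
    min_y, max_y = min(a[1], b[1], c[1]), max(a[1], b[1], c[1])
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            w0 = edge(*b, *c, x, y)
            w1 = edge(*c, *a, x, y)
            w2 = edge(*a, *b, x, y)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                framebuffer[y][x] = colour

framebuffer = [[" "] * WIDTH for _ in range(HEIGHT)]
draw_triangle(framebuffer, (10, 5), (70, 12), (35, 35), "#")
print("\n".join("".join(row) for row in framebuffer))
```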

Anonymous 0 Comments

GPUs are highly optimized for doing specific types of math. If you build a game’s graphics around that type of math, the game will perform better on the GPU. However, taking full advantage of that performance boost means knowing which GPUs have which optimizations and tweaking your game’s graphics to the specific hardware it’s running on. Without those optimizations, software rendering on the CPU may sometimes perform better.

Think of your CPU as a crossover SUV, and your GPU as a customized drag racing car. The crossover is street legal, can drive to the grocery store, and can also drive in circles on a race track or straight ahead on a drag strip. It’s not going to win a drag race or a stock car race, but it can complete them, and it will probably do better than a dragster in the stock car race.

The dragster, your GPU, can go amazingly fast in a straight line and will blow away the crossover if all you’re asking it to do is go in a straight line. If you try to take it to the grocery store or drive it on a circular track, however, the crossover will probably win.

Anonymous 0 Comments

We invented GPUs to be really, really good at the things that – up until then – only the computer processor was doing.

Basically, doing the same thing en masse to a lot of data at the same time, e.g. matrix transformations, etc.

For a CPU, this was basically done one element at a time, so doing it to a lot of elements (e.g. points in 3D space) would take a long time.

The GPU was invented – along with things like MMX and other SIMD instructions in more modern processors – so that an operation like “rotate all these 3D points 30° to the left” could be applied to 10,000 points at once and finish much more quickly. The GPU was basically designed to do lots of simple calculations, but to MANY, MANY bits of data all at the same time. The CPU was designed to do ANYTHING, including complex calculations, but to only a few bits of data at a time.
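A rough sketch of that difference, using Python with NumPy as a stand-in for “do the same operation to many points at once” (a real GPU spreads the batched version across thousands of hardware threads; the point here is just the batched style versus the one-at-a-time loop):

```python
import math
import numpy as np

# 10,000 points in 3D space, rotated 30 degrees about the vertical (y) axis.
points = np.random.rand(10_000, 3)
angle = math.radians(30)
cos_a, sin_a = math.cos(angle), math.sin(angle)

# "CPU style": visit one point at a time and rotate it.
rotated_one_by_one = []
for x, y, z in points:
    rotated_one_by_one.append((x * cos_a + z * sin_a, y, -x * sin_a + z * cos_a))

# "GPU style": express the rotation once as a matrix and apply it to
# every point in a single batched operation.
rotation = np.array([[ cos_a, 0.0, sin_a],
                     [ 0.0,   1.0, 0.0 ],
                     [-sin_a, 0.0, cos_a]])
rotated_all_at_once = points @ rotation.T

# Both give the same answer; the batched form is what parallel hardware eats up.
assert np.allclose(rotated_one_by_one, rotated_all_at_once)
```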

For most things you want to do, the CPU is fine. For 3D graphics (especially nowadays because it’s so incredibly complex, and 3D scenes can have MILLIONS of 3D points that have to be manipulated, etc.) you want a GPU.

But, of course, GPUs move on and programs get very complex, so sometimes the GPU you have just isn’t powerful enough, or doesn’t have the features you need, to render the newest graphics.

Then you can “fall back” to making the CPU do the same calculations. It’s slower and takes CPU resources away from everything else you want to do, but it’ll work, and it may even manage things your old GPU isn’t capable of. Normally your CPU would be busy with other things (in a game, it will be handling the AI, the sound, reading input, processing the game’s data, etc.). When you make it do the graphics too, everything slows down because it has too much to do.
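As a toy illustration of that frame-budget squeeze (every number and name below is made up for illustration, not a measurement):

```python
# Toy frame-budget illustration -- all numbers here are invented.
FRAME_BUDGET_MS = 16.7  # roughly 60 frames per second

def simulate_frame(render_cost_ms):
    game_logic_ms = 4.0  # AI, input, physics, audio, etc. on the CPU
    total = game_logic_ms + render_cost_ms
    fps = 1000.0 / total
    status = "over budget" if total > FRAME_BUDGET_MS else "within budget"
    print(f"logic {game_logic_ms:.1f} ms + rendering {render_cost_ms:.1f} ms "
          f"-> {fps:.0f} FPS ({status})")

simulate_frame(render_cost_ms=2.0)   # GPU renders; the CPU only submits the work
simulate_frame(render_cost_ms=40.0)  # software rendering: the CPU fills the pixels too
```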

GPUs used to be called 3D accelerators for a reason. They are specifically designed to speed up what the CPU could do itself – nowadays they are basically essential and even your basic computer has a GPU of some kind in it (we call it integrated graphics in that case) – and they handle things like 3D graphics, video decoding and physics calculations. All things that you can treat in the same kind of “do one simple thing, but do the same thing to ten-million data points at the same time” way, which a processor would struggle with.

And while the CPU can do ANYTHING, including the GPU’s job, it will never be as fast at it as a proper GPU.

Anonymous 0 Comments

Central processing units (CPUs) are very generalized. And by CPU I mean the actual regular processing cores that do all your general computing. They can process pretty much anything you throw at them — graphics, language modeling, word processing, real-time monitoring, game logic, whatever. There’s really nothing they can’t do. However, being so general purpose, they’re not necessarily all that fast at everything. So doing graphics on the CPU — software rendering — is going to be slow and energy intensive.

Graphics processing units (GPUs) aren’t generalized (although what they can do has expanded over the years). They’re only suited for certain tasks, like graphics and AI, but they can do it faster and with less energy than the CPU. But they can’t do a lot of stuff a regular CPU can.

Let’s try an extreme example like trying to crack an encryption key. That’s extremely intensive work for a CPU or GPU just to test one possible key, maybe millions of cycles used for each. But what if we build a custom chip that can only do one thing — possible key comes in and result comes out. All the logic circuits needed to test a key (all the cycles the CPU ran to implement the testing logic) are hardcoded into the chip, so it can test a key in one or two cycles, making it insanely faster. But that’s all it can do, nothing else. Something like this was actually made in the 1990s.
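A toy sketch of the software version of that key test, using a trivial XOR “cipher” purely for illustration (a real cipher needs far more work per key, which is exactly why baking the test into dedicated hardware pays off):

```python
# Toy brute-force key search -- everything here is illustrative.
# A real key test involves millions of operations; this XOR "cipher"
# just shows the shape of the work a dedicated chip would hard-wire.

def xor_cipher(data: bytes, key: int) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ key for b in data)

known_plaintext = b"attack at dawn"
ciphertext = xor_cipher(known_plaintext, key=0x5A)  # the "secret" key

# In software, the CPU steps through this test logic for each candidate.
# A purpose-built chip hard-wires the same logic so each candidate key
# takes only a cycle or two.
for candidate in range(256):
    if xor_cipher(ciphertext, candidate) == known_plaintext:
        print(f"key found: {candidate:#04x}")
        break
```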

In fact, many modern processors have built-in encryption modules, so the heavy encryption work is taken off the rest of the CPU. With everything on an iPhone being encrypted, making the general-purpose cores do that work would slow everything down. Many mobile CPUs also have a dedicated block for video decoding. That decoder can’t do anything but decode, but it does it far faster, and using less power, than the normal compute cores in the chip. That’s because all the logic necessary to decode video is baked in ahead of time.

In general, the more specialized you get, the faster you get, but that specialization narrows what you’re capable of doing.

Anonymous 0 Comments

Software Rendering (the CPU doing all the work) is to Hardware Rendering (the GPU doing almost all the work) as a single person manufacturing an item by hand from scratch is to an assembly line cranking out that item.

Sure, that single person represents fewer resources than an entire assembly line, but they are far less efficient and will take a lot longer to produce the same result. But if you don’t care how quickly it gets done, you can always fall back on them to deliver the result you want.

That single person (the CPU) can do this because their entire job is to take a list of instructions on how to do something by hand from scratch, and they can produce any result that can be described by such instructions (this is basically what a program is).

An assembly line (the graphics hardware), on the other hand, can only produce the limited set of results it was built to produce, usually based on partially finished work handed to it by the aforementioned person. But it’s really good at that, because each step of the line does one particular thing and does it very fast.