ELI5 – How are the graphics in sandbox games generated?

This is probably really dumb so I’ll try to explain what I mean.

If I’m playing BOTW, for example, and I walk past a tree so, from my camera angle, it is no longer visible, what does the computer do graphically to the data/information about that tree? I’m always just amazed walking round these incredibly detailed worlds and being able to move/change directions/change camera angles and the whole thing is usually seamless.

4 Answers

Anonymous 0 Comments

Graphically, the computer doesn’t bother to render objects that aren’t within the camera’s “frustum”, a pyramid-shaped volume projected out from the camera. Objects that fall outside the frustum are skipped, or “culled”, which is why the technique is called “frustum culling”.

The computer is still keeping track of all the objects outside the frustum; they just aren’t being rendered, which saves all the calculation that drawing them would take. Most of a game’s processing time per frame goes to the rendering steps.
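
Here’s a minimal sketch of what a frustum test can look like, in Python. It assumes every object has a bounding sphere, and all the names (`Plane`, `sphere_in_frustum`, etc.) are made up for illustration rather than taken from any real engine:

```python
# Minimal frustum-culling sketch. A frustum is six planes; an object's
# bounding sphere is tested against each one. All names are illustrative.
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def dot(a: Vec3, b: Vec3) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

@dataclass
class Plane:
    normal: Vec3  # unit normal pointing toward the inside of the frustum
    d: float      # offset so that dot(normal, p) + d >= 0 means "inside"

def sphere_in_frustum(center: Vec3, radius: float, planes: list[Plane]) -> bool:
    """False as soon as the sphere is entirely outside any one plane."""
    for plane in planes:
        if dot(plane.normal, center) + plane.d < -radius:
            return False  # fully outside this plane -> cull the object
    return True           # inside or straddling all planes -> render it

# Per frame the renderer keeps only what survives the test:
# visible = [o for o in scene if sphere_in_frustum(o.center, o.radius, planes)]
```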

Anonymous 0 Comments

> what does the computer do graphically to the data/information about that tree?

Literally nothing. The game takes the camera’s location, its angle, and the tree’s location, does a bit of math to see that the tree isn’t in the camera’s field of view, and then it just… doesn’t draw the tree.
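
If you’re curious, that “bit of math” can be as simple as a dot product. Here’s a toy Python version checking whether the tree lies within the camera’s field of view; the function name and the cone-shaped test are just for illustration (real engines test against the full frustum):

```python
import math

def in_field_of_view(camera_pos, camera_forward, tree_pos, fov_degrees=90.0):
    """Toy check: is tree_pos within fov_degrees of where the camera points?
    camera_forward is assumed to be a unit-length direction vector."""
    # Direction from the camera to the tree, normalized.
    to_tree = [t - c for t, c in zip(tree_pos, camera_pos)]
    dist = math.sqrt(sum(x * x for x in to_tree))
    if dist == 0.0:
        return True  # camera is exactly at the tree; just draw it
    to_tree = [x / dist for x in to_tree]

    # Dot product gives cos(angle) between the two directions.
    cos_angle = sum(f * t for f, t in zip(camera_forward, to_tree))
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))

# in_field_of_view((0, 0, 0), (0, 0, 1), (3, 0, 10))  -> True  (draw it)
# in_field_of_view((0, 0, 0), (0, 0, 1), (0, 0, -10)) -> False (skip it)
```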

Anonymous 0 Comments

Usually, information about what is where in an environment is loaded in when the player is close by. Then the game calculates whether a given object is in the player’s camera view and, if it is, renders it. Rendering is by far the most computationally expensive part, so doing other math first to make sure you’re rendering only what the player needs to see is well worth it.

[This is what that looks like in Horizon Zero Dawn](https://media.giphy.com/media/xUPGcgiYkD2EQ8jc5O/source.gif), for instance.
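
Here’s a rough Python sketch of the “loaded in when the player is close by” part, assuming the world is split into square chunks the way Minecraft does it; the chunk size, radius, and function names are all invented for illustration:

```python
CHUNK_SIZE = 64.0  # world units per chunk (made-up number)
LOAD_RADIUS = 3    # keep a 7x7 grid of chunks around the player

def chunk_coords(pos):
    """Map a world-space (x, z) position to integer chunk coordinates."""
    return (int(pos[0] // CHUNK_SIZE), int(pos[1] // CHUNK_SIZE))

def update_loaded_chunks(player_pos, loaded, load_chunk, unload_chunk):
    """Load chunks the player got close to; unload the ones left behind."""
    px, pz = chunk_coords(player_pos)
    wanted = {(px + dx, pz + dz)
              for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
              for dz in range(-LOAD_RADIUS, LOAD_RADIUS + 1)}
    for coords in wanted - loaded:
        load_chunk(coords)    # read this chunk's objects in from disk
    for coords in loaded - wanted:
        unload_chunk(coords)  # too far away now, free the memory
    return wanted             # the new set of loaded chunks
```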

Anonymous 0 Comments

I’ve tried building my own rendering system before, and have also worked with optimization mods for Minecraft, so that’s my only source, but here’s what I know:

So basically, the most primitive form of rendering takes an object and puts all of its points through a math equation. Each point has 3 coordinate values, and the equation maps it onto the 2D screen. In simpler terms, it takes in 3 point coordinates, 3 camera position numbers, and 2 camera angles (you only need two angles for an entire sphere of rotation) and gives out 2 screen coordinates, which correspond to pixels on your screen.
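
Here’s a bare-bones Python version of that equation: 3 point coordinates, 3 camera position numbers, and 2 camera angles in, 2 screen coordinates out. It’s the textbook pinhole projection; the screen size and focal length are made-up parameters, and the sign conventions are just one possible choice:

```python
import math

def project(point, cam_pos, yaw, pitch, screen_w=1920, screen_h=1080, focal=1.0):
    """3 point coords + 3 camera coords + 2 angles in, 2 screen coords out."""
    # 1. Shift the world so the camera sits at the origin.
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]

    # 2. Rotate by yaw (turning left/right), then pitch (looking up/down).
    cy, sy = math.cos(yaw), math.sin(yaw)
    x, z = cy * x - sy * z, sy * x + cy * z
    cp, sp = math.cos(pitch), math.sin(pitch)
    y, z = cp * y - sp * z, sp * y + cp * z

    # 3. Points behind the camera have no place on the screen.
    if z <= 0:
        return None

    # 4. Perspective divide: farther points bunch up toward the center.
    sx = (focal * x / z) * screen_h + screen_w / 2
    sy2 = (-focal * y / z) * screen_h + screen_h / 2  # screens count y downward
    return (sx, sy2)
```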

I actually know nothing about how complex models are stored in files, because my only experience is with Minecraft, which just uses simple voxel-style models. But the rendering program can also draw triangles between these projected points and fill them in on screen: if it knows two 2D points it can easily draw a line between them, and repeating that and filling in the result gives a triangle. It can also texture-map and color the triangle, which is a whole Pandora’s box of problems and solutions.
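
For the filling-in part, here’s a sketch of the standard edge-function approach (the same idea GPUs use, though they do it massively in parallel); `set_pixel` is a stand-in for whatever actually writes a pixel to the screen:

```python
def edge(a, b, p):
    """Signed area: positive when p is to the left of the edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def fill_triangle(v0, v1, v2, set_pixel):
    # Only scan the triangle's bounding box, not the whole screen.
    xs = [v[0] for v in (v0, v1, v2)]
    ys = [v[1] for v in (v0, v1, v2)]
    min_x, max_x = int(min(xs)), int(max(xs))
    min_y, max_y = int(min(ys)), int(max(ys))

    if edge(v0, v1, v2) == 0:
        return  # zero-area triangle, nothing to fill

    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            p = (x + 0.5, y + 0.5)  # sample at the pixel's center
            w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
            # Inside if all three tests agree with the triangle's winding.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                set_pixel(x, y)
```

Those same three `w` values, divided by the triangle’s total area, are the weights used to interpolate texture coordinates and colors across the triangle, which is where the texture-mapping Pandora’s box starts.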

But imagine how slow it would be to do this for EVERY single object. So optimizations are made to tell whether certain objects are covered up by others (occlusion culling) or outside the camera’s FoV (frustum culling). Games also implement a render distance for the same reason.

As far as what the game does with the data: it keeps it in the files containing every object, and to draw to the screen it just reads from that data.
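
To tie it together, here’s a made-up skeleton of the per-frame loop this thread keeps describing: read the stored object data, skip what can’t be seen, draw the rest. The helpers are passed in as parameters because every engine implements them differently, and none of these names come from a real engine:

```python
def render_frame(objects, is_visible, project_vertex, draw_triangle):
    """Skeleton frame loop: cull first, then project and draw the survivors."""
    for obj in objects:                  # the object data read from the files
        if not is_visible(obj):          # frustum / render-distance / occlusion
            continue                     # culled: skip all the expensive work
        for tri in obj["triangles"]:     # assume each object lists its triangles
            pts = [project_vertex(v) for v in tri]
            if all(p is not None for p in pts):  # None = behind the camera
                draw_triangle(pts)
```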