what is raytracing? I see this term a lot relating to video games and 3D animations

Thanks for the replies everyone, I think I get it now

Anonymous 0 Comments

Historically, games and other 3D animations used a technique called rasterisation to draw objects. Rasterisation takes the polygons that make up a 3D object and works out how they look from the camera’s perspective, and how they fit into the pixel grid your screen uses to display images. It can then colour each pixel in based on whatever texture is applied to that part of the polygon.
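
That projection step is just similar-triangles geometry. Here’s a toy sketch in Python; the screen size, focal length, and triangle are all made up for illustration:

```python
# A toy perspective projection: map a 3D point in camera space
# onto a pixel grid. Focal length and screen size are made up.
WIDTH, HEIGHT = 800, 600
FOCAL = 500.0  # controls the field of view

def project(x, y, z):
    """Project a camera-space point (z > 0, in front of the camera) to pixel coords."""
    # Similar triangles: farther points land closer to the screen centre.
    sx = (x / z) * FOCAL + WIDTH / 2
    sy = (-y / z) * FOCAL + HEIGHT / 2  # screen y grows downward
    return sx, sy

# A triangle 5 units in front of the camera becomes three 2D points,
# which the rasteriser then fills in pixel by pixel.
triangle = [(-1, 0, 5), (1, 0, 5), (0, 1, 5)]
print([project(*v) for v in triangle])
```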

What it struggles with, however, is lighting. Lighting in game engines has historically been a series of elaborate “cheats”. Some of them are fairly straightforward: you can work out how well-lit a surface should be from its colour, its angle relative to the light, its distance from the various light sources around it, and how bright and what colour those lights are. Feed all of that into an algorithm and it spits out a colour for that pixel.
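
The best-known of those straightforward cheats is Lambertian (diffuse) shading. A minimal sketch, with the function name and all the numbers made up for illustration:

```python
import math

def lambert_shade(surface_colour, normal, light_pos, point, light_colour, light_power):
    """Classic Lambertian diffuse shading: one of the simple 'cheats'.
    Brightness falls off with the angle to the light and with distance squared."""
    # Vector from the surface point to the light
    lx, ly, lz = (light_pos[i] - point[i] for i in range(3))
    dist = math.sqrt(lx*lx + ly*ly + lz*lz)
    lx, ly, lz = lx/dist, ly/dist, lz/dist
    # Cosine of the angle between the surface normal and the light direction
    cos_theta = max(0.0, normal[0]*lx + normal[1]*ly + normal[2]*lz)
    falloff = light_power / (dist * dist)  # inverse-square law
    return tuple(sc * lc * cos_theta * falloff
                 for sc, lc in zip(surface_colour, light_colour))

# A white light one unit above a red floor tile:
print(lambert_shade((1, 0, 0), (0, 1, 0), (0, 1, 0), (0, 0, 0), (1, 1, 1), 1.0))
```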

Other effects are trickier, though. Shadows, for example, are pretty hard, which is why old games didn’t have real shadows, just a dark blob underneath the character. One classic method (known as shadow volumes) is to take each light source in the area and project lines from it past the edges of each object, creating a 3D volume; anything inside that volume the computer knows to render in shadow. One of the first games to use this technique was Doom 3, and it was severely limited in the number of light sources it could use in each area. Modern computers are powerful enough that shadows are no longer a major performance bottleneck, and 3D artists have put all that extra power to good use. Doom 3’s shadows were pretty much solid black with sharp edges, but a lot of work has gone into modern shadows: softer edges, contact hardening (where the shadow gets softer the further it falls from the object casting it), and more realistic interaction with the other light sources in the scene.
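
To make the volume idea concrete, here’s a heavily simplified 2D sketch: a line-segment blocker and a point light define a wedge of shadow, and we test whether a point falls inside it. Real engines extrude silhouette edges in 3D and use the stencil buffer; these helper names are made up.

```python
def cross(ox, oy, ax, ay, bx, by):
    """2D cross product: which side of the line o->a does b lie on?"""
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def in_shadow(light, a, b, p):
    """True if p lies inside the wedge the blocker a-b casts from the light."""
    # p must sit between the two rays light->a and light->b...
    between = cross(*light, *a, *p) * cross(*light, *b, *p) <= 0
    # ...and on the far side of the blocker from the light.
    behind = cross(*a, *b, *p) * cross(*a, *b, *light) < 0
    return between and behind

# A lamp at the origin, a wall segment above it: the point directly
# beyond the wall is shadowed, a point off to the side is not.
print(in_shadow((0, 0), (-1, 1), (1, 1), (0, 3)))   # True
print(in_shadow((0, 0), (-1, 1), (1, 1), (3, 0)))   # False
```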

Reflections are also a major issue. The typical way to render reflections is just to duplicate the entire room you’re in on the other side of the mirror. This is very expensive and, unlike shadows, hasn’t become much easier over time. That’s why you’re more likely to see proper mirrors in old games, where the rooms are simple boxes and rendering a duplicate is cheap, than in modern games, where rooms are complicated and detailed and rendering an entire duplicate just to fill in a mirror would be impractical. It’s also why you sometimes find “mirrors” in modern games that don’t have proper reflections at all, just a blurry, indistinct image that *implies* reflection more than actually depicting it.
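
The duplicate-room trick boils down to mirroring every vertex across the mirror’s plane. A minimal sketch, with the plane and points made up:

```python
# Every vertex is mirrored across the mirror's plane, and the engine
# renders that mirrored copy behind the glass. The plane is given by
# a point on it and its unit normal.

def mirror_point(p, plane_point, n):
    """Reflect point p across the plane through plane_point with unit normal n."""
    # Signed distance from the plane, then step back twice that distance.
    d = sum((p[i] - plane_point[i]) * n[i] for i in range(3))
    return tuple(p[i] - 2 * d * n[i] for i in range(3))

# A chair one unit in front of a mirror on the x=0 wall appears
# one unit behind it:
print(mirror_point((1, 0, 0), (0, 0, 0), (1, 0, 0)))  # -> (-1, 0, 0)
```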

There’s also ambient occlusion, which is where nooks and crannies on an object or in an environment get darker, because direct light is less able to reach them and what light they do get comes mostly from weak diffuse reflections off nearby objects. That’s also essentially a cheat: the surface is simply made darker based on how close other surfaces are to it.
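
A toy version of that cheat (real engines do something similar in screen space, as SSAO): probe random points near the surface and darken it by the fraction that land inside other geometry. The helper names and the test scene here are made up.

```python
import random

def ambient_occlusion(point, normal, is_occupied, samples=64, radius=0.5):
    """Darken a surface point by how much nearby geometry hems it in.
    is_occupied(p) stands in for the engine's scene or depth-buffer lookup."""
    blocked = 0
    for _ in range(samples):
        offset = [random.uniform(-radius, radius) for _ in range(3)]
        # Flip the offset into the hemisphere above the surface
        if sum(o * n for o, n in zip(offset, normal)) < 0:
            offset = [-o for o in offset]
        probe = tuple(point[i] + offset[i] for i in range(3))
        if is_occupied(probe):
            blocked += 1
    return 1.0 - blocked / samples  # 1.0 = fully open, lower = darker

# In a corner where two walls meet, most probes land inside a wall,
# so the corner comes back dark:
in_wall = lambda p: p[0] < 0 or p[2] < 0
print(ambient_occlusion((0, 0, 0), (0, 1, 0), in_wall))
```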

Ray-tracing is an approach that circumvents all of these issues by directly simulating the path of light itself for every pixel on the screen. The rays can pick up the colour information of surfaces they bounce off or light sources they encounter, get dimmer with each bounce off darker surfaces, and reflect perfectly off shiny surfaces. It’s extremely expensive computationally, but the benefit is that once you have it, you get proper realistic lighting with minimal extra effort, no cheats required. Shadows? If a ray bounces off a surface and then hits an object instead of the light behind it, that surface will just naturally look darker. Reflections? A ray bouncing off a mirror will hit an object in the room, so the object will appear behind the mirror, just like a real reflection, and there’s no need to build a duplicate room for a single mirror. Ambient occlusion? Any ray that enters a nook on an object will have to bounce around a few times to get out, so that part of the object will look darker. Ray-tracing produces more natural-feeling lighting than all the fancy tricks of the rasterisation era combined.
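
Here’s a self-contained sketch of that idea: a tiny path tracer over a couple of spheres. The scene, the bounce limit, and every number are invented for illustration; real renderers average many rays per pixel, but one recursive function already shows how shadows, bounce lighting, and occlusion emerge from the same rule.

```python
import math, random

def hit_sphere(origin, direction, centre, radius):
    """Distance along the ray to the sphere's surface, or None on a miss."""
    oc = [origin[i] - centre[i] for i in range(3)]
    b = 2 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so the 'a' term is 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-4 else None

def trace(origin, direction, spheres, depth=0):
    """Follow one ray through a scene of (centre, radius, colour, emitted) spheres."""
    best_t, best = None, None
    for sphere in spheres:
        t = hit_sphere(origin, direction, sphere[0], sphere[1])
        if t is not None and (best_t is None or t < best_t):
            best_t, best = t, sphere
    if best is None:
        return (0.0, 0.0, 0.0)           # ray flew off into the dark
    centre, radius, colour, emitted = best
    if emitted:                          # ray reached a light source: done
        return emitted
    if depth >= 4:                       # stop runaway bounces
        return (0.0, 0.0, 0.0)
    point = tuple(origin[i] + best_t * direction[i] for i in range(3))
    normal = tuple((point[i] - centre[i]) / radius for i in range(3))
    # Diffuse bounce: pick a random direction away from the surface.
    d = [random.gauss(0, 1) for _ in range(3)]
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    if sum(d[i] * normal[i] for i in range(3)) < 0:
        d = [-x for x in d]
    bounced = trace(point, tuple(d), spheres, depth + 1)
    # Darker surfaces absorb more: multiply in the surface colour per bounce.
    return tuple(colour[i] * bounced[i] for i in range(3))

# One big light overhead and a grey "floor" sphere; fire a ray straight down.
scene = [((0, 5, 0), 2.0, None, (5.0, 5.0, 5.0)),
         ((0, -100, 0), 99.5, (0.5, 0.5, 0.5), None)]
print(trace((0, 1, 0), (0, -1, 0), scene))
```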

Historically, this technique has been far too expensive to use for video games (though it’s seen use in pre-rendered animation like movies for a long time), but over the last five years, new graphics cards have been released that are designed to handle ray-tracing in a real-time setting like a video game. High-end games now frequently release with support for ray-tracing, for those with the money to afford the new hardware. The old-fashioned tricks persist, though, for those of us who can’t afford it, for games that don’t benefit artistically from realistic lighting, and for games on platforms that don’t support ray-tracing.

Anonymous 0 Comments

Right now, lighting in most games doesn’t “move” around; scenes are effectively “pre-lit.” What this means is that if a light source is behind your back, you’d expect your shadow to be cast in front of you, but it may instead appear on the side facing the light (which is not possible in real life). Think of light like makeup: it’s just kind of brushed onto certain areas and textures to make it look like light is there.

Ray tracing just means that the model of light being used acts closer to a beam of light or a laser, hence the “ray” in ray tracing. In the old model, if you held a mirror up to a light source, it would not “reflect” that light; the mirror wouldn’t become shinier. The new model makes that mirror shine, because the simulated light travels out from the light source and bounces off the surface of the mirror. In real life, beams of light bounce off everything, carrying some residual energy with each bounce to illuminate other surfaces. Ray tracing mimics this by producing rays of light that bounce off objects in the game world, which gives you “dynamic” lighting.

That means if you move your character around, the light source’s position relative to you changes, so the light beams hit you differently, the shadow adjusts to the appropriate angle, and the residual light bouncing off your character may illuminate other parts of the environment. Once you take this idea, you can apply it to everything in the environment: tree branches move, so the light beams hit them differently and the shadows on the ground change with them. Reflective surfaces are actually reflective: if you take a broken piece of glass and toss it somewhere, the light beams hit an object, get reflected onto the broken glass, and then get reflected into your “character’s eyes”.
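
That mirror bounce is just one line of vector maths. A minimal sketch, with the function name and numbers made up:

```python
# The reflected ray is r = d - 2(d.n)n, where d is the incoming
# direction and n is the surface normal (both unit vectors).

def reflect(d, n):
    """Reflect incoming direction d off a surface with unit normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# Light travelling down-and-right hits a flat floor (normal straight up)
# and bounces up-and-right, like a ball:
print(reflect((1, -1, 0), (0, 1, 0)))  # -> (1, 1, 0)
```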

I hope that makes sense for an ELI5.

Anonymous 0 Comments

**Old way:** Take a bunch of triangles, “project” them onto the 2D screen (similar to how you’d draw a “3D cube” on a piece of paper) and then color in the 2D triangles. There’s tons of smoke and mirrors, and things like surface lighting and shadows are two completely different techniques mushed together to make it look like real lighting. Hacks on top of hacks on top of hacks, but we’ve managed to get things looking pretty realistic over the years.

**Raytracing**: You specify the physical properties of a surface (rough surfaces scatter light, shiny surfaces don’t, glass bends light as it passes through, red surfaces absorb green and blue light, etc.) and then you just shoot light rays out and let them do their thing. All the hacks that we’ve had to develop over the years basically come for free with raytracing, along with things like realtime bounce lighting, reflections, and glass refraction that have been almost impossible to do well before.
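
As one example of those surface properties, the “glass bends light” rule is Snell’s law. A minimal sketch using standard refractive indices for air (~1.0) and glass (~1.5); everything else here is made up:

```python
import math

def refract(d, n, ior_from, ior_to):
    """Bend direction d crossing a surface with unit normal n (both unit vectors).
    Returns None for total internal reflection."""
    ratio = ior_from / ior_to
    cos_in = -sum(di * ni for di, ni in zip(d, n))
    sin2_out = ratio * ratio * (1 - cos_in * cos_in)
    if sin2_out > 1:
        return None  # ray reflects instead of passing through
    cos_out = math.sqrt(1 - sin2_out)
    return tuple(ratio * di + (ratio * cos_in - cos_out) * ni
                 for di, ni in zip(d, n))

# A ray angling down at 45 degrees into a glass surface bends toward the normal:
d = (math.sin(math.radians(45)), -math.cos(math.radians(45)), 0)
print(refract(d, (0, 1, 0), 1.0, 1.5))
```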

It’s the last major step in realistic graphics. Problem is, it requires thousands to millions of rays to be simulated each frame, which as you can imagine requires a *lot* of processing power. Movies can wait days for a single frame to render, but games can’t, so while it’s been used in movies for 20 years, we’re just now becoming capable of running it in real time for video games.

SOURCE: I’m a 3D graphics developer, AMA.

Anonymous 0 Comments

When you sit in a dark room and light a match, photons immediately fill the room, bouncing off objects in ways that allow your brain to make excellent guesses about their size, shape, and position. Your brain is really, really good at this, which is why [this illusion](https://upload.wikimedia.org/wikipedia/commons/thumb/b/be/Checker_shadow_illusion.svg/1920px-Checker_shadow_illusion.svg.png) works so well (the two squares A and B are the same color).

A computer needs to do something similar if your video game avatar lights a match in a dark room, so it can place shadows and illuminate different parts of the room in ways your eyes will consider normal.

A distant wall should be dimmer than a close wall, shadows should fall on the side of objects facing away from the light source, and things like that.

To do this, a computer needs to know how the positions and shapes of the objects in the room interact. One way to do that is to draw many straight lines from the light source out into the room. That’s called ray (straight line) tracing.
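
A toy 2D version of that line-drawing, assuming a little grid “room” where `#` marks a wall; the grid, step size, and all numbers are invented for illustration:

```python
import math

# March a ray outward from the light, marking each cell it passes
# through as lit (dimmer with distance) and stopping at the first wall.
ROOM = ["..........",
        "....#.....",
        "....#.....",
        ".........."]
LIGHT = (1.0, 1.5)  # (x, y) position of the match

def cast_ray(angle, grid, light, max_dist=12.0, step=0.2):
    """Return the lit cells along one ray, each with inverse-square brightness."""
    lit = []
    dist = step
    while dist < max_dist:
        x = light[0] + math.cos(angle) * dist
        y = light[1] + math.sin(angle) * dist
        col, row = int(x), int(y)
        if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
            break
        if grid[row][col] == "#":
            break  # wall blocks the ray: everything behind it stays in shadow
        lit.append(((col, row), 1.0 / (dist * dist)))  # dimmer farther away
        dist += step
    return lit

# Rays pointing right stop at the wall, so cells behind it stay dark:
print(cast_ray(0.0, ROOM, LIGHT)[-1])
```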
