In 3D engines, why do spherical objects get split into polygons instead of being defined as spheres?


I was wondering about this as I read a bit deeper into ray tracing. Spheres are easier to calculate with than other three-dimensional objects, which makes sense: if you want to check a collision with an object that's not a sphere, you have to check collision against every single polygon.

Which made me think: why aren't spheres just defined as spheres? Sure, traditional rendering works differently from ray tracing, but shouldn't it also save processing power to have one sphere instead of 256 polygons approximating one?
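To make the "spheres are easy" part concrete: a ray-sphere test has a closed-form answer. Substituting the ray o + t·d into the sphere equation |p − c|² = r² gives a quadratic in t. A minimal sketch (assuming the ray direction is already normalized):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Analytic ray-sphere test: plug the ray o + t*d into
    |p - c|^2 = r^2 and solve the resulting quadratic in t.
    Returns the nearest positive hit distance t, or None on a miss.
    Assumes `direction` is a unit vector (so a = 1)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # discriminant; negative means no intersection
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None
```

A mesh, by contrast, needs one such test (ray vs. triangle) per polygon, which is exactly the cost the question describes.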

In: Mathematics

It’s inherently easy and fast for a graphics card to draw 3D triangles; this is the basic unit of 3D graphics.

The math for drawing a true sphere is much slower on that hardware.
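For illustration, here is roughly how an engine turns a sphere into the triangles the GPU wants: sample points on latitude/longitude rings and stitch neighbouring rings into triangle pairs (a "UV sphere"). This is a sketch, not any particular engine's code; with 8 stacks and 16 slices it yields exactly the 256 polygons from the question.

```python
import math

def uv_sphere(stacks, slices, radius=1.0):
    """Approximate a sphere with triangles: sample vertices on
    latitude (stacks) / longitude (slices) rings, then connect
    adjacent rings into two triangles per quad.
    Returns (vertices, triangles) with triangles as index triples."""
    verts = []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks              # 0..pi, pole to pole
        for j in range(slices):
            theta = 2.0 * math.pi * j / slices  # around the axis
            verts.append((radius * math.sin(phi) * math.cos(theta),
                          radius * math.cos(phi),
                          radius * math.sin(phi) * math.sin(theta)))
    tris = []
    for i in range(stacks):
        for j in range(slices):
            a = i * slices + j
            b = i * slices + (j + 1) % slices   # wrap around the ring
            c = (i + 1) * slices + j
            d = (i + 1) * slices + (j + 1) % slices
            tris.append((a, b, c))
            tris.append((b, d, c))
    return verts, tris
```

Every vertex lies exactly on the sphere; only the flat triangles between them deviate from it, and more stacks/slices shrink that error at the cost of more triangles.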


Many 3D engines are designed for general models, where exact spheres are rare. While the math for an exact sphere may be nice, other smooth surfaces are much harder to handle analytically. So another possible reason not to include spheres as a primitive is simply that it isn't worth the effort of implementing the software (and possibly hardware) for something that would rarely be used. Besides, engines have tricks to make polygonal surfaces look smooth.
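The best-known such trick is storing a normal per vertex and interpolating it across each triangle (Gouraud/Phong-style shading): lighting then varies smoothly even though the geometry between vertices is flat. A toy sketch of the idea along one triangle edge (names here are illustrative, not from any engine API):

```python
import math

def lerp3(a, b, t):
    """Linear interpolation between two 3D vectors."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def shaded_intensity(n0, n1, t, light):
    """Interpolate the endpoint normals, renormalize, then apply a
    simple Lambert (N . L) term. The interpolated normal rotates
    smoothly from n0 to n1, so the flat edge appears curved."""
    n = normalize(lerp3(n0, n1, t))
    return max(0.0, sum(a * b for a, b in zip(n, light)))
```

For a tessellated sphere this works especially well, because the exact surface normal at each vertex is just the vertex position normalized, so the shading matches the true sphere at every vertex.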