It mainly comes down to money, time, and workflow. Games don’t have the same operating budgets to work with, and the work needed for lifelike graphics is ultimately too costly and time-consuming. Developers are also stuck with the technical limits of whatever game engine they’re using at the time production starts.
CGI in movies is rendered for a set, predetermined camera motion, so it is not generated dynamically on the fly. On top of that, it is done on powerful graphics workstations that usually need hours to process all the geometry, lighting and shadows, particles, and other effects.
In a game, where the viewpoint is generated dynamically and the graphics have to be produced almost instantly, there simply isn’t enough computing power available to reach the level of movie CGI.
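To make “almost instantly” concrete, here is a minimal sketch of the loop a game engine runs every single frame, with a fixed time budget it cannot exceed. The function names are hypothetical placeholders, not any real engine’s API.

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS          # roughly 16.7 ms per frame

def update_world(dt):
    """Placeholder for input handling, physics, AI, animation..."""
    pass

def render_frame():
    """Placeholder for drawing everything currently visible."""
    pass

def game_loop(num_frames=3):
    for _ in range(num_frames):
        start = time.perf_counter()
        update_world(FRAME_BUDGET)
        render_frame()
        elapsed = time.perf_counter() - start
        # A movie renderer could keep working on this one image for hours;
        # a game has to ship whatever it managed to compute in ~16 ms.
        if elapsed > FRAME_BUDGET:
            print(f"frame took {elapsed * 1000:.1f} ms -- over budget, the frame rate drops")
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))

game_loop()
```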
A moving image is actually made up of a lot of still images that change around 30 times every second. For a movie, we can spend pretty much as long as we want generating all the images we need; in fact, some frames in the original Toy Story took up to 24 hours to render. Video games are different, because they need to change according to how we play them. The images that make a video game appear to move have to be generated nearly instantly, so there’s not as much time to generate complicated scenes.
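Putting those numbers side by side (the 24-hour figure is the one quoted above for Toy Story; 30 and 60 fps are just common real-time targets), the gap in time budget is enormous:

```python
# Back-of-the-envelope comparison of per-frame time budgets.
# 24 hours per frame is the Toy Story figure mentioned above;
# 30 and 60 fps are typical real-time targets for games.
OFFLINE_FRAME_SECONDS = 24 * 60 * 60

for fps in (30, 60):
    budget = 1.0 / fps                       # seconds a game gets per frame
    ratio = OFFLINE_FRAME_SECONDS / budget
    print(f"{fps} fps: {budget * 1000:.1f} ms per frame, "
          f"about {ratio:,.0f} times less time than the movie frame")
```

At 60 fps that works out to roughly five million times less time per image.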
Mainly because of lighting. Up until the last generation of graphics cards, you couldn’t simulate realistic lighting on moving objects in real time.
In a movie, since each frame is built specifically to show a single thing, the lighting can be custom-tailored to that exact frame.
Now that RTX-style ray-tracing hardware is in place, and as developers’ skill with the technology grows, video games should start to look much more like movie CGI.
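As a rough illustration of why per-frame lighting is so expensive, here is a tiny sketch of the shadow test a ray tracer performs: for every pixel it shades, it asks each light whether anything in the scene blocks it. The scene, names, and numbers are invented purely for the example.

```python
import math

def shadow_ray_blocked(origin, light_pos, sphere_center, sphere_radius):
    """True if the straight line from `origin` to `light_pos` passes through the sphere."""
    direction = [l - o for l, o in zip(light_pos, origin)]
    dist_to_light = math.sqrt(sum(d * d for d in direction))
    direction = [d / dist_to_light for d in direction]       # unit vector toward the light

    # Standard ray/sphere intersection: solve |origin + t*direction - center|^2 = r^2
    oc = [o - c for o, c in zip(origin, sphere_center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return False                                         # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0                         # nearest intersection distance
    return 1e-4 < t < dist_to_light                          # blocked before reaching the light

# One pixel, one light, one occluding sphere -- a real frame repeats this millions of times.
surface_point = (0.0, 0.0, 0.0)
light = (0.0, 5.0, 0.0)
blocker_center, blocker_radius = (0.0, 2.5, 0.0), 1.0
print("in shadow:", shadow_ray_blocked(surface_point, light, blocker_center, blocker_radius))
```

A single 1080p frame has about two million pixels, so even this one test gets repeated millions of times per frame, which is why dedicated ray-tracing hardware makes such a difference.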