Here I'm mostly asking about PC games, as the full-3D era in console gaming pretty much started with the PS1 launch (December 1994) and the N64 launch (September 1996).
Case in point: two of my favourite games, Star Wars: Dark Forces (February 1995) and Dark Forces 2 (October 1997). They follow pretty much the same formula, but with totally different technical capabilities.
Dark Forces was solidly lumped in with the Doom era of games, being 2.5D: the environment was 3D, enemies were rendered as 2D billboard sprites, and in Doom's case all levels were essentially laid out on a 2D grid, with the appearance of raised ceilings and uneven floors essentially kludged into the engine. Dark Forces slightly expanded on this by somehow adding the ability to have multiple vertical levels (is it only 2 different vertical levels, or more?) and the ability to pan the view up and down ([although this again seems to have been a hotfix for an inherent limitation of raycasting-style engines](https://en.wikipedia.org/wiki/File:Camera_Rotation_vs_Shearing.gif)).
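For anyone curious what that "shearing" trick looks like in practice, here is a rough sketch (illustrative Python only, not anything from the actual engine; the screen height and function name are made up): instead of rotating the view upward, the renderer just slides the horizon line and keeps drawing walls as vertical strips.

```python
# Illustrative only: "y-shearing" as used for look up/down in 2.5D engines.
# Instead of truly pitching the camera, the renderer slides the horizon line
# up or down and keeps projecting every wall as a vertical strip.

SCREEN_H = 200          # made-up screen height in pixels

def wall_column_bounds(dist_to_wall, wall_height, shear):
    """Top/bottom screen rows for one wall column.

    shear is how many pixels the horizon has been pushed down the screen;
    0 = looking straight ahead, positive reads as 'looking up'.
    """
    horizon = SCREEN_H // 2 + shear              # sheared horizon row
    projected = int(wall_height / dist_to_wall)  # crude perspective scale
    return horizon - projected // 2, horizon + projected // 2

# "Looking up" just shifts every column's bounds by the same amount,
# which is why walls stay perfectly vertical instead of tilting.
print(wall_column_bounds(4.0, 640, shear=0))    # (20, 180)
print(wall_column_bounds(4.0, 640, shear=40))   # (60, 220)
```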
So then, a little under three years later, Dark Forces 2 is released by the same publisher, and you can do pretty much everything you can in a normal game engine: look in any direction, move through completely 3D environments, and the graphics still look passable even now.
I get that there are some technical hurdles to cover between 2D games and full 3D, particularly without a dedicated 3D graphics card (consumer 3D accelerators only really arrived from about 1996 onwards) to reduce the performance cost of figuring out and rendering only what is in view (occlusion, I think?). What I don't get is how the technical issues were solved so quickly between 1995 and 1997, and in particular why the 2D grid necessity went away so quickly.
Wait this is 2.5D? I thought 2.5D was strictly games rendered in 3D but with 2D movement like platformers. Wikipedia says it is both, but it seems confusing to me since they’re very different and only one of them is still commonly used.
I always call it original Doom or Wolfenstein 3D type graphics but that is not concise.
Games did indeed go directly from 2D to 3D, but the early 3D games were wireframe or flat-shaded polygons. 2.5D didn't come until many years later. Texturing took a lot of CPU, and the 2.5D technique was a clever compromise that allowed fast drawing of limited textured polygons.
Early 3D games:
[Battlezone (1980)](https://youtu.be/ymrYkbEbnEQ)
[I, Robot (1983)](https://youtu.be/gmvWxG2zvs8)
[Elite (1984)](https://youtu.be/7JCU4Hulgcg)
[Mercenary (1985)](https://youtu.be/LnZFs49dwS8)
[Driller (1987)](https://youtu.be/V1vRXrzmshc)
[Carrier Command (1988)](https://youtu.be/rF9sZUv23yM)
[Starglider 2 (1988)](https://youtu.be/sg3uQM83dl8)
[Stunt Car Racer (1989)](https://youtu.be/q7w_0yP5RwU)
[International 3D Tennis (1990)](https://youtu.be/GW01dGjjgu0)
[Hunter (1991)](https://youtu.be/MzByYlnvaN0)
Most of the answers so far are blaming the lack of 3D rendering hardware at the time. That actually wasn't the main issue.
The issue wasn't how quickly you could render the polygons; the issue was how you determine which polygons you need to render in the first place. And that is not a graphics card problem; that is a CPU problem. In every game with a moving viewpoint, from way back in Battlezone all the way to today's best-looking games, the computer needs to rapidly filter out what it can ignore and not even send to the video card.
Carmack and id Software made the breakthrough of using Binary Space Partitioning to *instantly* know the exact list of walls they had to consider rendering based on where the camera was located. [Ars Technica has a great article going in depth on it, probably at a higher-than-ELI5 level…](https://arstechnica.com/gaming/2019/12/how-much-of-a-genius-level-move-was-using-binary-space-partitioning-in-doom/)
**At an ELI5 level, what is Binary Space Partitioning in a 2.5D game?**
Simply, the level is looked at from the top, so it is processed on the basis of the "floors" of the map. The computer splits the floors into triangles based on which walls can be seen when the camera is inside each triangle. Depending on how varied that list is, a triangle can be cut into further, smaller triangles, so the list is as close to totally accurate as possible. (Imagine one giant rectangular room with 10 hallways coming off of it. Depending on which hallway you are looking down, the visible walls will be vastly different, so the giant room gets cut into many, many triangles rather than just two.) This processing takes a very long time, and the result is saved with the level; it is not done on the player's computer.
Additionally, the use of triangles means the game never needs to calculate "which triangle am I in?", because when the triangles are built, each one also stores which triangle is on the other side of each of its edges. So when you move from one triangle to the next, the game already knows "crossing edge B of Triangle 12 moves you to Triangle 16".
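To make that concrete, here's a toy sketch (Python, with made-up triangle numbers; not code from any real engine) of that "neighbour on the other side of each edge" idea:

```python
# Each triangle stores which triangle lies across each of its edges, so
# tracking the player is a single table lookup, not a search over the map.

# neighbours[tri][edge] -> triangle on the other side (None = solid wall)
neighbours = {
    12: {"A": 11, "B": 16, "C": None},
    16: {"A": 12, "B": 17, "C": 18},
}

def move_across(current_tri, crossed_edge):
    nxt = neighbours[current_tri][crossed_edge]
    if nxt is None:
        return current_tri          # hit a wall, stay put
    return nxt

player_tri = 12
player_tri = move_across(player_tri, "B")   # crossing edge B of 12 -> 16
print(player_tri)                           # 16
```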
**Oh yeah, and additional ELI5: What does Binary Space Partitioning mean?**
Complex word and complex science, but actually very easy conceptually:
* Binary: Means two options, “on or off”, or “yes or no”.
* Space: On the map/level
* Partitioning: dividing
So, Binary Space Partitioning means “dividing up your level so you know if you are in front of or behind each wall”
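If you want to see the "in front of or behind" test and the tree walk in (toy) code, here is a minimal Python sketch. The tree data is invented for illustration, and a real BSP builder would also split any wall that straddles a divider:

```python
# Toy illustration: each tree node stores one dividing line; walking the
# tree from the camera's side first yields walls in near-to-far order.

def side_of(divider, point):
    """+1 if point is on the 'front' side of the divider line, else -1."""
    (ox, oy), (dx, dy) = divider        # a point on the line + its direction
    px, py = point
    cross = dx * (py - oy) - dy * (px - ox)
    return 1 if cross >= 0 else -1

def walk(node, camera, visit):
    """Visit walls nearest-to-farthest relative to the camera."""
    if node is None:
        return
    if side_of(node["divider"], camera) > 0:
        near, far = node["front"], node["back"]
    else:
        near, far = node["back"], node["front"]
    walk(near, camera, visit)
    visit(node["wall"])
    walk(far, camera, visit)

# A tiny made-up tree: one divider at y=0 with a child node on each side.
tree = {
    "divider": ((0, 0), (1, 0)), "wall": "wall-0",
    "front": {"divider": ((0, 2), (1, 0)),  "wall": "wall-front", "front": None, "back": None},
    "back":  {"divider": ((0, -2), (1, 0)), "wall": "wall-back",  "front": None, "back": None},
}

walk(tree, camera=(0, 1), visit=print)   # wall-front, wall-0, wall-back
```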
When the first "3D" engines were created, there was no 3D acceleration, so doing actual 3D in real time wasn't practical.
With the id Tech 1 engine, they basically built a 2D engine and stretched the wall textures into vertical columns to make it look 3D.
There are a bunch of videos that explain how it works and why it was revolutionary. It is thoroughly fascinating.
What I loved were the games that used pre-rendered 2D backgrounds with 3D characters on top to work around the early limitations in processing power. Games like [Final Fantasy VII](https://en.wikipedia.org/wiki/Final_Fantasy_VII) and [Silver](https://en.wikipedia.org/wiki/Silver_(video_game)).
Silver can be bought on GOG and has aged very well in my opinion.
Doing true 3D polygons requires a lot of floating-point operations (the PS1 actually did its geometry with fixed-point integer shortcuts on dedicated hardware rather than true floating point, which is part of why PS1 games have that distinctive warped perspective and shimmering), but in 1995 the target for PC games was still the 486. The fact that the makers of Dark Forces demanded a whole 8MB of RAM, when many people still had only 4MB, drew some interesting complaints at the time.
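If "integer shortcuts" sounds mysterious, here's a rough sketch (in Python for readability; the constants are arbitrary) of the kind of 16.16 fixed-point arithmetic engines of that era leaned on to keep floating point out of the inner loops:

```python
# 16.16 fixed point: store numbers as integers scaled by 65536 and use only
# integer multiply + shift, so no FPU is needed in the hot loop.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS          # 1.0 in 16.16 fixed point

def to_fixed(x):   return int(x * ONE)
def to_float(x):   return x / ONE
def fix_mul(a, b): return (a * b) >> FRAC_BITS   # one int multiply + one shift

# e.g. scaling a wall height of 64 units by a distance factor of 0.75
height = to_fixed(64)
scale  = to_fixed(0.75)
print(to_float(fix_mul(height, scale)))   # 48.0, computed with integers only
```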
Unfortunately the big performance difference between the existing base of 486 users and the new Pentium adopters was floating-point performance. That's why Quake (1996) effectively required a 75MHz Pentium and even on a 100MHz 486 would run at about 10fps (I know because I only had the 486 at home, so it was easy to compare just how big a difference the Pentium's floating-point unit made). So the two advances that allowed for 3D games were processors with strong floating-point performance (Pentiums) and dedicated accelerator cards that could do many floating-point operations in parallel (even better than a Pentium).
By comparison, 2.5D games required essentially no floating-point operations and could get by with small integer-based lookup tables. They are based on very simple ray casting: for each vertical line on the screen, the game projects a ray along the 2D overhead grid until it hits a wall. It then takes the distance to that wall to pick an entry from a scale lookup table and draws an appropriate vertical strip of pixels taken from that wall's texture. Unlike 3D texturing, the pixels aren't sampled from arbitrary points; it's just a simple vertical line through the texture, with the lookup table telling the engine when to duplicate (stretch) or omit pixels so the strip is drawn at the correct scale.
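Here's a toy version of that column-drawing step (Python with invented constants, not any particular engine): a table built once at startup maps distance to slice height, and the texture column is stretched or shrunk by stepping through it at a fixed increment per screen pixel.

```python
# Illustrative only. (Python floats are used for readability; a real engine
# would keep the step in a fixed-point/integer lookup table as described.)

SCREEN_H   = 200
TEX_H      = 64
WALL_SCALE = 12000        # arbitrary constant relating world size to pixels

# Built once at startup: on-screen wall height for each integer distance.
scale_table = [SCREEN_H] + [min(SCREEN_H, WALL_SCALE // d) for d in range(1, 256)]

def draw_column(texture_column, distance):
    """Return the vertical strip of texels for one screen column."""
    slice_height = scale_table[min(int(distance), 255)]
    step = TEX_H / slice_height                  # texels per screen pixel
    return [texture_column[int(i * step) % TEX_H] for i in range(slice_height)]

tex = list(range(TEX_H))                         # stand-in 64-texel column
print(len(draw_column(tex, distance=100)))       # 120-pixel strip
print(len(draw_column(tex, distance=200)))       # farther wall, 60-pixel strip
```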
Well, what it mainly comes down to is that PCs of the time didn't have dedicated 3D hardware the way they do now, or the way the 1994 PlayStation did; it wasn't until the release of the 3dfx Voodoo card in 1996 that this capability really started to become mainstream in gaming PCs. So any 3D in a game had to be rendered entirely on the machine's CPU, and 2.5D engines like the ones used in Doom and Dark Forces were simply a far more efficient way of rendering something that looked 3D, albeit by taking some shortcuts to do it.
I actually remember playing through the whole of the original Tomb Raider (1996) on the PC in software rendering only, and the huge difference it made once I got a 3D card and ran it via that instead.
Marathon (Bungie Software) for the Mac, released at the end of 1994, looked and played like a fully 3D game (its engine was still technically 2.5D, but a noticeably more capable one than Doom's, with free look up and down), as did Marathon 2: Durandal (1995) and Marathon Infinity (1996).
Better graphics, better gameplay, better storyline than pretty much all of the other 3D games of the era. Better than Quake.
Mac only, though, apart from Durandal, and in my opinion the PC port of that wasn't as polished and didn't run nearly as well as it did on a Mac of the same era.
Most people weren’t introduced to Bungie until Microsoft bought them out to make Halo into the Xbox exclusive killer game everyone HAD to have.
That said, if you ever get an old Mac emulator running and want to play some of the best gameplay of the 90’s, Pathways into Darkness and the Marathon series are both really great gaming experiences.
Pathways shows its age, but the story, the puzzles, and figuring it all out are incredible.
There is a "beautiful" algorithm called ray casting at the heart of many 2.5D games. There was a period when ray casting was the most effective way to do real-time 3D with textures: it started becoming popular in 1992 (Wolfenstein 3D) and had become uncommon by around 1998. Per Moore's law, there was roughly an eightfold increase in computing power over that period, along with a massive improvement in floating-point capability. That meant the end of the niche where ray casting was relevant.
If you want to know more about the algorithm: [https://permadi.com/1996/05/ray-casting-tutorial-table-of-contents/](https://permadi.com/1996/05/ray-casting-tutorial-table-of-contents/)
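And if you just want the gist without reading the whole tutorial, here is a deliberately crude sketch (Python; fixed-step marching instead of the tutorial's exact gridline stepping, with an invented map and numbers) of casting one ray per screen column across a tiny grid map:

```python
# One ray per screen column: march along the ray through the 2D grid until
# it lands in a wall cell, then return the distance (which sets wall height).

import math

GRID = [          # 1 = wall, 0 = empty; a tiny hypothetical map
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast_ray(px, py, angle, step=0.01, max_dist=20.0):
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if GRID[int(y)][int(x)] == 1:
            return dist                       # hit: distance decides wall height
        dist += step
    return max_dist

# Fan a few rays across the field of view, roughly facing +x:
for col in range(8):
    ang = -0.4 + col * 0.1
    print(col, round(cast_ray(2.0, 2.0, ang), 2))
```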
There are a few facets to this question.
I will focus on the technical limits and programming techniques used at the time.
Technical:
* The physical limitations of processors
* The physical limitations of memory storage/retrieval

Programming (the human factor):
* Procedural
* Dynamic
* Random
For the physical limitations of hardware (CPU, memory, etc.) we can look to Moore's law for answers:
https://en.m.wikipedia.org/wiki/Moore%27s_law
Basically, the number of transistors in a dense integrated circuit doubles roughly every two years.
This is, in my opinion, secondary in answering the question. The primary driver is human want over human need.
Example: we did not need a dedicated 3D processor to land on the moon. It certainly would have helped.
Back on point. Circa 1958: Tennis for Two.
That first game was 2D, with a static field (world), static sprite/player dimensions, a static player path, and a predictable dynamic sprite path. Amazing to think that this was done with an analog computer and an oscilloscope display.
It would take 14 years to reproduce this digitally. Thank you, Higinbotham.
From Tennis for Two to Pong (1972), 14 years elapsed. From Pong to the first 2.5D game, Interceptor, only 3 years passed.
Side note: Pong also prompted the first patent-infringement suit in what would come to be called the "art of video games".
What made the leap from Pong to Interceptor possible was the jump in CPU capability, or more specifically the number of transistors and the clock frequency (cycles per second) that could be attained, along with the introduction of dedicated support processors.
Take the Atari computer line, since that's what I'm most familiar with: the main CPU is the 6502, and the support processors are POKEY and ANTIC. POKEY handles input via the keyboard, joystick ports, and serial devices such as tape drives, disk drives, or a modem; ANTIC deals with graphics management.
In PCs these jobs are handled by similar components: the motherboard CPU, math co-processor, north bridge, south bridge, and daughter boards (sound card, video card, I/O card, etc.).
Fast forward to the current day and all of those daughter boards and math units have been folded into the motherboard and CPU, with the ability to override them and use dedicated expansion boards instead.
All of these physical aspects track Moore's law. The physical limits keep being pushed because of the want for better, faster, more realistic gaming experiences. The side effect of striving to meet that want is a need for better CPU capabilities, and meeting those demands allows even better CPU designs, thanks to advances in materials science that are themselves driven by the desire for better games. This accelerated Moore's law, until we hit a material limit again.
Now look at programming techniques.
As CPU capabilities climb, programming advances too, thanks to increases in memory storage/retrieval and the tools available to programmers.
Going from static pseudo-3D to actual 3D, the limiting factors were really just available memory and the speed of access to it.
With more memory we can increase the complexity of a process. If we can access this faster we can display data faster. If we can display data faster, we can display more complex data.
Note: every pixel displayed is a piece of data stored in an exact spot in memory.
The more memory the merrier.
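To picture that note, here's a tiny sketch (Python standing in for what would historically have been assembly or C; classic 320x200 "mode 13h" dimensions assumed) of how a framebuffer maps each pixel's (x, y) to one exact spot in memory:

```python
# A framebuffer is just a flat block of memory; (x, y) maps to one index.
WIDTH, HEIGHT = 320, 200                   # classic VGA mode 13h dimensions

framebuffer = bytearray(WIDTH * HEIGHT)    # one byte per pixel (palette index)

def put_pixel(x, y, colour):
    framebuffer[y * WIDTH + x] = colour    # the exact spot in memory for (x, y)

put_pixel(160, 100, 15)                    # light a pixel mid-screen
print(framebuffer[100 * WIDTH + 160])      # 15
```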
More complicated displays of data require faster processing, which is why we still have dedicated expansion cards such as Nvidia and AMD GPUs.
The cards are effectively full computers in their own right, with thousands of processor cores.
They now handle all the heavy lifting of displaying data graphically, including the mathematics involved in manipulating that data.
So now we can build sprites that don't just have a presence in 2D (x, y); we can construct a model with x, y, z coordinates and variables in every aspect of x, y, and z.
This enables one player to form an "okay" sign with their hand and another player to shoot a "bullet", with its own xyz data, through the gap made by the first player's fingers.
A more complicated hit box, enabled by more complex integrated-circuit design, enabled by more precise design and manufacturing processes, enabled by the want of a better, faster game.
Not long from now, 3D games will want to move from VR headsets to a more physical, "holodeck"-style immersive experience.
Thank you, quantum computers, thanks again to Moore's law, and to want over need.
So basically the answer to the question is: available RAM and the time it took to display data, given the lack of processor capacity.
I hope this helps.