Why can’t we make graphics cards with the visual capabilities of 20 years from now, today?

So my question is: while we know what direction the graphics card industry will go in over the next 20 years in terms of more realism, better shadows, textures, etc., why can’t we produce those results today?

When I was about 15 years old (I am 29 now), people used to speculate about what game graphics would look like 10 years down the line, and most of it has turned out to be true. So why do we have to wait through such incremental changes every year?

8 Answers

Anonymous 0 Comments

If I know my plane is going to land in NY 6 hours from now, why can’t I just be in NY now?

Just because we know where we are heading doesn’t mean we are exempt from the process of getting there. Transistors have to get smaller, algorithms have to improve, and power issues need to be dealt with. All these things take time.

Anonymous 0 Comments

We kind of are producing them already; they just aren’t being released to consumers yet because they take a lot of patenting, design, testing, etc. Plus, if manufacturers cranked out GPUs like hotcakes, they wouldn’t get that sweet, sweet premium price.

Anonymous 0 Comments

We do know that games will look closer and closer to real life.

The issue is not making it look like real life. It is having enough frames/“horsepower” for it to run smoothly.

The closer we get to it looking like real life, the more horsepower is needed to run it. Essentially we need a way to emulate all the photons, sound waves, materials, etc. We basically need to emulate life.
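As a rough back-of-the-envelope illustration of why that takes so much horsepower (all the numbers below are illustrative assumptions, not measurements), the work grows with pixels × frame rate × light samples per pixel:

```python
# Rough cost model: work per second ~ pixels * frames per second * light samples per pixel.
# Every figure here is an illustrative assumption, not a benchmark.
width, height = 3840, 2160        # 4K resolution
fps = 60                          # target frame rate
samples_per_pixel = 100           # assumed rays needed for a clean path-traced image

rays_per_second = width * height * fps * samples_per_pixel
print(f"{rays_per_second:,} light samples per second")  # ~49.8 billion
```

Doubling the resolution or the frame rate multiplies that number accordingly, which is why "just make it look real" is really a raw-throughput problem.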

Researchers will keep pushing the boundaries gradually, as they figure out new ways to “cheat the system” and emulate things more convincingly without it being too demanding.

One of the latest efforts is DLSS, which fakes a higher screen resolution by rendering the game at a lower resolution and then upscaling it using machine learning (AI). A rough sketch of the idea follows.
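A minimal conceptual sketch of that render-low-then-upscale idea, in Python. The real DLSS upscaler is a proprietary neural network fed with motion vectors and run on dedicated hardware; here `render_frame` and `upscale` are made-up placeholders, with a plain nearest-neighbour resize standing in for the learned model:

```python
import numpy as np

def render_frame(width, height):
    """Placeholder for the expensive part: rasterizing/ray-tracing a frame.
    Cost grows roughly with the number of pixels, so rendering at half
    resolution touches only about a quarter as many pixels."""
    return np.random.rand(height, width, 3)  # stand-in for a real renderer

def upscale(frame, target_width, target_height):
    """Stand-in for the ML upscaler: a simple nearest-neighbour resize.
    DLSS instead uses a trained network (plus motion vectors) to
    reconstruct detail the low-resolution render never drew."""
    src_h, src_w, _ = frame.shape
    ys = np.arange(target_height) * src_h // target_height
    xs = np.arange(target_width) * src_w // target_width
    return frame[ys][:, xs]

# Render at 1280x720, then present at 2560x1440:
# the GPU only paid for ~25% of the pixels it displays.
low_res = render_frame(1280, 720)
displayed = upscale(low_res, 2560, 1440)
print(low_res.shape, "->", displayed.shape)
```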

Anonymous 0 Comments

There are lots of pieces to the puzzle, and there’s no single thing you can do to make everything magically better. At this point pretty much everything has been invented, and it’s a matter of gradual refinement and improvement.

There’s also the fact that you can’t go too fast, because huge changes are risky. For instance, Intel tried to radically redesign computers with the Itanium, and the idea just flopped. Modern computers are instead an evolution of an ancient design, improved a bit generation by generation. Intel thought they’d do something fresh, while AMD bolted 64 bits onto an old design, and that turned out to be what worked: it didn’t require all software to be rewritten, it worked more than well enough, and so it sold.

There are also many different parts to the system: some people work on better memory, some on better GPUs, some on better buses, etc. Each of those gives some gains, but alone is limited in how much benefit it can offer. It would make little sense to, say, put 32 GB of RAM on a Voodoo card; it just doesn’t have the horsepower to make use of it anyway.

Anonymous 0 Comments

The fallacy is assuming that because it is usually possible, in broad strokes, to identify a direction and purpose for improvements over the long term, this can also be broken down into the ACTUAL STEPS and ACTUAL THINGS that need to be done and what will happen going forward.

I know that you will most likely be 39 years old in 10 years’ time. Broadly speaking, I will most likely be correct in predicting that your income will have increased, along with many other general outcomes for 39-year-olds.

There would be no way to know what happens in specific detail. You will still have to live your life for the next 10 years.

Anonymous 0 Comments

Lack of technology, especially at scale. To build a better graphics card, engineers first have to come up with a design that improves on the performance of current designs. As cards get more and more complex, and circuitry more and more “crowded” on a chip, engineers have to contend with the physical limitations of current tech (e.g., quantum tunneling of electrons where circuitry sits ever closer together) that impact performance and error rates. New processes and/or materials may need to be developed, tested, and incorporated into the chip- and card-making process. This takes time, and while the general performance trend is predictable, the means to achieve it are not.

After the new process/tech is created, it has to be implemented at scale. It is VERY costly to build a new (or update an existing) chip manufacturing plant, especially where novel materials or processes are involved. Upwards of $20 billion for a new chip plant with existing tech. And it takes time – time to design, time to finance, time to build, time to test, etc. And if graphics card design outpaces the development of other components, then at least some of those other components may need to go through the same process (new design, development, build, test, etc.) to realize the increased theoretical performance levels. That adds complexity and the potential for delays.

Also, the chip and card manufacturers are for-profit companies. If building an incrementally better chip year after year yields more profit than making huge leaps every now and then, then we’ll see incrementally better chips year after year. There might be some designs in their labs that will eventually make it into production, but not at the expense of profits. Never at the expense of profits.

Anonymous 0 Comments

My take on this is that it’s because there is no single person or company that knows how to build everything that would be needed to achieve this on their own.

Anonymous 0 Comments

Hardware innovation will continue as it does, but recently we have started to hit some physical limitations of current technologies. Breakthroughs are coming, though, I am sure.

One of those breakthroughs is leveraging software, and machine learning in particular, more heavily in graphics. For example, some companies are actually predicting pixel placement to take some of the processing load off the graphics card. It’s incremental and hidden, but software advancements will help squeeze more out of each graphics card.