What has changed in recent A.I. advancements that made it leap forward so much lately?

With ChatGPT and A.I.-generated art and so on, it seems like A.I. is really getting some traction. What has changed that we couldn’t do this before?

In: Technology

8 Answers

Anonymous 0 Comments

If you’re talking about the last year then not much has changed. It’s just that it went from a niche community to mainstream. If you’re talking about the last decade then the big change has been in computing power and custom hardware for AI acceleration.

Anonymous 0 Comments

Advertising. I see this as a huge ad campaign. Every other post is for/against AI and I feel it’s being oversold.

Anonymous 0 Comments

As u/10133960aaa mentioned, computing power. But more specifically, I *think* it’s the use of GPGPU – general-purpose computing on graphics cards. GPUs are very well suited to SIMD (single instruction, multiple data) workloads: each core can perform only a very limited set of basic computations, which allows the cores to be very small and thus lets many more of them fit on a chip. That makes them ideal for the batch calculations used to train artificial neural networks, which is what ChatGPT, among others, uses.
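To illustrate the kind of workload GPUs accelerate, here’s a minimal NumPy sketch (running on CPU here; the shapes and values are made up for illustration). Applying one neural-network layer to a whole batch of inputs boils down to a single big matrix multiply – the same instruction applied to many data points at once:

```python
import numpy as np

# A neural-network layer applied to a whole batch of inputs is just
# one matrix multiply plus a bias and a nonlinearity - exactly the
# "same instruction, many data points" pattern GPUs excel at.
rng = np.random.default_rng(0)

batch = rng.standard_normal((64, 128))    # 64 inputs, 128 features each
weights = rng.standard_normal((128, 32))  # layer mapping 128 -> 32 features
bias = np.zeros(32)

# One batched operation instead of 64 separate per-input computations:
activations = np.maximum(batch @ weights + bias, 0.0)  # ReLU

print(activations.shape)  # (64, 32)
```

On a GPU, frameworks dispatch that same multiply across thousands of small cores in parallel, which is a big part of why training got so much faster.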

Anonymous 0 Comments

The big breakthrough with ChatGPT specifically is the Transformer architecture which came from a paper written in 2017.
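As a rough sketch of what that 2017 architecture computes, here is the scaled dot-product attention operation at its core, in NumPy (the sequence length and dimensions are made up for illustration): each position’s output is a weighted mix of all the values, with weights set by how well its query matches each key.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: mix the value vectors V using
    query-key similarity scores as the mixing weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of every query to every key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d = 5, 8                       # 5 tokens, 8-dimensional vectors
Q = rng.standard_normal((seq_len, d))
K = rng.standard_normal((seq_len, d))
V = rng.standard_normal((seq_len, d))

out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (5, 8)
```

The key property is that every token can attend to every other token in one shot, which parallelizes well on the GPU hardware mentioned above.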

The reason AI is getting good now is that these models need to be really big, with a lot of neurons, and they need to be trained on a lot of data (in this case, basically the whole internet). Companies didn’t feel confident devoting the necessary resources until recently, when smaller models started to show promise.

Also, there was a debate over whether just making the current models bigger would yield better results or whether we needed new ideas. It turns out that, almost magically, AI does get better on its own if you just make it bigger, to the point we see today.

Anonymous 0 Comments

Two important trends drove a lot of the AI hype in the last 1-2 years:
– New big models have started producing results that are surprisingly good and accessible to lay-people. ChatGPT’s language output was much better than people’s expectations, and the same goes for the AI art models. They generally aren’t good enough to use in any productive capacity, but they are better than many people expected, generating hype.
– Mobile growth has slowed at the same time that interest rates have risen – wiping out the profitability engine for the technology industry. There are a lot of very bright engineers (and very bright marketers) who have their careers in tech and without a clear roadmap to increased profitability, big tech and VC backers have pivoted hard into AI to keep tech stocks rising. Whether this leads to sustainable innovation and value-add to these companies has yet to play out.
#1 will play out quickly as expectations catch up, especially given how poorly these models function in productivity-focused environments (although I hope the art models prove good enough to have a lasting impact).
#2 will take longer – expect a lot of products from ‘ex-google, ex-apple’ that seek to find a new device that can be sold to consumers (to replace lagging smartphone sales).

Anonymous 0 Comments

It’s mostly marketing. “AI” is a buzzword that MBAs use to get money from financial backers or generate hype for their products. Unfortunately it works quite well because people like Elon Musk can sell people on his “self driving cars” that don’t really work, while driving Tesla’s stock price through the roof.

There are some things that have facilitated this hype:

More powerful computers able to handle and process large quantities of data.

Access to more data through various means (phones, watches, internet browsing, etc).

Some successful practical and easily accessible applications (image processing, language processing, etc).

But the effectiveness of AI is overstated. If you come from a math or engineering background you would recognize AI as a combination of system identification, adaptive estimation, statistics, linear algebra, and optimization. It is basically an interesting and in some cases useful application of mathematics from various fields. It isn’t some magical method that mimics human intelligence.

Anonymous 0 Comments

There’s nothing *particularly* novel about this batch of “AI”. It’s still a lot like the things people used to write for Twitter bots for fun a few years ago. The only difference is those were generally trained on a few hundred pages worth of data and the ones we’re looking at now are being trained on a few million pages of data, including and especially copyrighted data they have no legal right to use.

This kind of algorithm is really good at producing facsimiles of things it’s seen, so the more you let it see the better it gets. The old ones couldn’t “answer like my grandma” because it was very very unlikely they had been trained to know what that even meant. The new ones have.

Part of why it’s so hyped is the same reason NFTs were so hyped: someone spent a lot of money on this and wants to make that money back. But since it’s not particularly novel, it isn’t solving any new problems, so they need to create the illusion that it does. It still provides unsatisfying customer service, still makes errors in legal briefs that get lawyers disbarred, and cheerfully advises people to unalive themselves when it’s put in charge of hotlines for people with eating disorders.

Part of why that’s gaining traction: for companies that aren’t afraid of customers leaving because those customers have no choice (healthcare, cable companies, insurance providers, etc.), it costs basically $0 to move your bad customer service to AI, and that’s a lot of money saved versus call centers. And big movie studios are champing at the bit at the idea that they could use a computer to generate an entire movie, from script to filming, and owe nobody royalties. We didn’t really have the infrastructure to train on big data sets before, but now that we do, we can automate the things people hate doing, like singing or playing music or writing stories, so they’ll have more time for the things humans love, such as working in a factory or moving boxes in a warehouse.

Anonymous 0 Comments

> With chatGPT and A.I. generated art and so on

I think it’s important to distinguish the text/art generation systems from serious machine learning research. The former have had vast amounts of money and computational power thrown at them in recent years because they capture people’s attention and seem interesting to the clueless rich people who decide where humanity’s resources are spent, but it’s not really clear that they serve any purpose, and they probably aren’t a significant step towards actual AI.

The serious research focuses on less glamorous tasks like facial recognition and spotting things in medical scans. It has been developing rapidly in recent years, due to a combination of increases in available computational power, various theoretical advances, and increased academic attention. But there hasn’t really been a massive sudden breakthrough like the media would have you believe.

Actual AI – something that could rival the intelligence of a human – still isn’t really on the horizon. People have only the vaguest idea of how it might be achieved.