Eli5: How did we go from zero ai, to one company with an ai that holds a conversation, and then seemingly immediately on to multiple companies with ai that makes music, photos, and movies in such a short period of time?


For real. One day there’s nothing. Then chat gpt shows up. And then the next day ai can create ANYTHING and there are a whole bunch of companies with software.



Anonymous 0 Comments

The same way we went from no planes to using planes in warfare in just 11 years. It’s always easier to improve existing technology or expand on existing products than it is to invent something completely new.

Anonymous 0 Comments

AI research has been going on since the 1950s. There have been a lot of false starts and stuff that didn’t work very well. Large language models needed really fast hardware to process the insane amount of data required to train them. Once powerful GPUs and the parallel-computing tools to use them became available, things moved forward much faster.

Anonymous 0 Comments

We didn’t. People have been working on AI and even chatbots for 70 years. But in 2017 researchers published https://arxiv.org/abs/1706.03762 (“Attention Is All You Need”), which laid out a new architecture for language AI, the transformer, and OpenAI spent about three years iterating on it, slowly releasing more and more advanced GPT models. Here is someone playing around with GPT-2 as a chatbot: https://new.reddit.com/r/artificial/comments/cfgpvh/i_tricked_gpt2_into_working_like_a_chatbot_here/

Around the same time, someone figured out a neural-network architecture called a U-Net. Turns out it is great for creating pictures; you just have to figure out how to tell it what picture to make. And since GPT-style models translate human speech into machine-readable “ideas” (embeddings), you can strap a text encoder onto a U-Net to tell it what to generate.
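
That “text encoder steering a U-Net” combo is roughly what Stable Diffusion and similar image generators do. Here’s a minimal sketch, assuming the Hugging Face diffusers library and a publicly hosted Stable Diffusion checkpoint (the model ID and prompt are just examples, not anything from the thread):

```python
# Minimal sketch: a text encoder turns the prompt into embeddings ("ideas"),
# and a U-Net denoises random noise into an image guided by those embeddings.
# Assumes `pip install diffusers transformers torch` and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at sunset").images[0]
image.save("lighthouse.png")
```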

At that point, all of the techniques and models were publicly available, and it was known that they worked at small scales. So multiple companies dumped huge amounts of resources into developing them at larger scales. And it turns out that worked.

It was kinda overshadowed by ChatGPT, but Google had an internal chatbot (LaMDA) that was about that powerful roughly a year earlier: https://nypost.com/2022/06/24/suspended-google-engineer-claims-sentient-ai-bot-has-hired-a-lawyer/

Now the techniques are STILL publicly available, so anyone who can get the training data and resources can make their own models.
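
To give a sense of how low the barrier is: anyone can download one of the older, openly released models and generate text today. A minimal sketch, assuming the Hugging Face transformers library and the public GPT-2 weights (the prompt is just an example):

```python
# Minimal sketch: load the publicly released GPT-2 weights and continue a prompt.
# Assumes `pip install transformers torch`.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# GPT-2 simply continues the text it is given, which is why people could coax
# it into acting like a crude chatbot years before ChatGPT existed.
prompt = "Q: Why did AI suddenly get so good?\nA:"
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```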

Anonymous 0 Comments

The fundamental technology behind OpenAI’s ChatGPT has been widely published and, after its amazing success, has been widely copied. When someone develops a better mousetrap, many companies will start building similar mousetraps. But it’s also important to realize that a lot of the companies you’re referring to are actually using OpenAI’s models behind the scenes for their products.

Anonymous 0 Comments

The short answer is that OpenAI, the company behind ChatGPT, published much of its underlying research openly, and the core transformer architecture came from a public Google paper. That opened the curtain for many other companies to learn from, and once that happened it all snowballed. Additionally, researchers are using AI tools to help develop better AI, which has accelerated progress in the field.

Anonymous 0 Comments

A few of them were already refining AI models, but once one was released, the others had to release theirs. There are a lot of AI products that are secretly just GPT with a different sticker. Probably the biggest group is just calling any semi-complicated algorithm an AI because it sounds cool.

Anonymous 0 Comments

We needed social media to come first to provide the training data; that and computing power were the main obstacles.

Anonymous 0 Comments

The short answer is that AI is an ambiguous term. It used to mean “whatever computers can’t do yet.” But at some point the semantics changed and it came to mean “anything involving chatbots.”

The underlying technologies, like deep neural networks and even transformers with attention mechanisms, have been in development for a long time.

Anonymous 0 Comments

AI research has been going on since the 50s, but it tends to go in steps where some new technique is discovered and there is rapid progress for a few years, then the limits of that technique are hit and there is [seemingly no progress for a long time](https://en.wikipedia.org/wiki/AI_winter).

A lot of the technology behind the current AI boom was actually thought up years or even decades earlier (deep learning, convolutional neural networks, adversarial networks, etc.). But the techniques for building AIs with them weren’t very good at the time, so they only had limited usefulness.

The big step forward came when people worked out how to apply those ideas to huge datasets and huge neural networks using GPU parallel computing.

Once that happened, people and companies everywhere could experiment with these techniques, and with so many people experimenting there was rapid growth in things like conversational AI and AI-generated content.

Of course, at some point we will hit the limit of this technique too. Currently, conversational AI still tends to get confused, make things up, and straight-up lie. AI-generated content tends to make weird mistakes, like struggling to draw hands. It’s not clear whether the techniques we have will overcome these problems, or whether we’re already close to the limit of what we can do and there will be another period of little progress before a new technique is found.

Anonymous 0 Comments

It’s mainly marketing. AI reached a point where companies felt confident marketing it, but it has been around for a while. Companies have been openly working on it and training models for years; chatbots, back-end AI processing, and other consumer features have existed for a while, but we usually just called them “algorithms.”

Now that AI is the new hotness and is driving the stock market, every company out there is trying to integrate it into the pitch for their products and services.