We didn't. People have been working on AI, and even chatbots, for about 70 years. But in 2017, researchers published "Attention Is All You Need" (https://arxiv.org/abs/1706.03762), which laid out a new architecture for language AI called the transformer, and OpenAI spent three years iterating on it, slowly releasing more and more advanced GPT models. Here is someone playing around with GPT-2 as a chatbot: https://new.reddit.com/r/artificial/comments/cfgpvh/i_tricked_gpt2_into_working_like_a_chatbot_here/
Around the same time, researchers developed another neural-network architecture called the U-Net. It turns out to be great at generating pictures; you just have to figure out how to tell it what picture to make. And since transformer-based language models (the same idea behind GPT) can translate human text into a numerical representation of its meaning, you can strap one onto a U-Net to tell it what to generate.
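To make that concrete, here is a rough sketch of what "strapping a text model to a U-Net" looks like in practice, using the open-source diffusers library. The specific model name, prompt, and GPU assumption are just illustrative choices, not part of the original answer:

```python
# Minimal sketch: a text-to-image pipeline pairs a transformer text encoder
# (turns the prompt into numbers) with a U-Net (turns those numbers into an
# image, one denoising step at a time).
import torch
from diffusers import StableDiffusionPipeline

# Model name is illustrative; any Stable Diffusion checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a GPU is available

# The prompt conditions every denoising step of the U-Net.
image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```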
At this point, all of the techniques and models were publicly available, and it was known that they worked at small scales. So multiple companies dumped huge amounts of resources into developing them at larger scales. And it turns out that worked.
It was kind of overshadowed by ChatGPT, but Google had an internal chatbot (LaMDA) that was roughly that powerful about a year earlier: https://nypost.com/2022/06/24/suspended-google-engineer-claims-sentient-ai-bot-has-hired-a-lawyer/
Now the techniques are STILL publicly available, so anyone who can get the training data and resources can make their own models.