We didn’t, it just looks that way if you don’t know the nuts and bolts of it. The ~*social media algorithms*~ buzzphrase of the 10’s was primarily powered by deep learning neural nets, much like ChatGPT. Of course there is still quite a bit of difference between Facebook/Twitter’s algorithm and ChatGPT, but just because the AI is more immediately obvious doesn’t mean it’s necessarily more (or less) powerful (see: [Facebook and Myanmar](https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/)).
An AI chatbot as powerful as ChatGPT was a pretty shiny new toy to hit the market, so a bunch of work that was previously being handled more cautiously was now being rushed or just left unfinished to get a product out, regardless of the potential consequences. The spike has been more in market activity than in AI development.
The AI rollercoaster we’re on right now could’ve been controlled if governments had responded to Web2.0 properly back in *the 00’s*, but that ship obviously sailed a long time ago.
Companies have very strict compliance requirements post-Enron; everything needs to be retained, or the presumption is that it was deleted deliberately.
These same companies are jumping at AI and CoPilot.
When court-ordered discovery dictates access to this info, scapegoating is not going to be the “sacrifice on the altar of justice” it used to be.
Don’t be scared of the implications of AI; pay attention when they look to restrict it.
Technological advances work that way sometimes.
What happens is that lots of people are working on the same thing for a long time without much progress. They get close and might have different ideas about what the thing could be used for, but they just can’t get it right.
Then someone figures out how to get it right. Once that happens, others who have been working on it find out what they were doing wrong, and soon you have many competing versions as everyone finishes theirs.
AI has been around pretty much since computers were invented (more or less); you just don’t think of early AI as “AI.” But remember Clippy, the paperclip that never went away? That was very rudimentary AI.
Same thing with autocorrect and whatnot: it’s all artificial intelligence that learns from pooled data from everyone and then gives a result.
As others have mentioned, AI research has been going on for years.
That said, the reason we suddenly went from 0 to 100 with AI is basically that a lot of companies had AI projects similar to ChatGPT, but they were kept behind closed doors.
A good example of this is Google’s Bard. It was roughly as good as ChatGPT 3.5 when ChatGPT was first released to the public. The reason Google kept it behind closed doors was that they wanted to go slowly and not cause “harm.”
When ChatGPT 3.5 was released… basically the cat was suddenly out of the bag. So everyone started to release their versions and research.