AI research has been going on since the 1950s, but it tends to go in cycles: a new technique is discovered and there is rapid progress for a few years, then the limits of that technique are hit and there is [seemingly no progress for a long time](https://en.wikipedia.org/wiki/AI_winter).
A lot of the technology behind the current AI boom was actually thought up in the 90s or earlier (deep learning, convolutional neural networks, recurrent neural networks, etc.). But the techniques for training AIs with them weren’t very good at the time, so they had only limited usefulness.
The big step forward came when people worked out how to apply those ideas to huge datasets and huge neural networks using GPU parallel computing.
Once that happened, people and companies everywhere could suddenly experiment with this technique, and with so many people experimenting there was rapid growth in things like conversational AI, AI-generated content, etc.
Of course, at some point we will hit the limits of this technique too. Currently, conversational AI still tends to get confused, make things up, and straight up lie. AI-generated content tends to make weird mistakes, like struggling to draw hands. It’s not clear whether the techniques we have will overcome these problems, or whether we’re already close to the limit of what they can do and there will be another period of little progress before a new technique is found.