What exactly facilitated the breakthroughs and avalanches in AI developments & services that didn’t exist prior?


What suddenly existed that allowed these breakthroughs to occur, when it did not exist before? What made technology conducive to AI breakthroughs that could not have happened years ago? Was it faster processors, smarter programming from the coding community, or something else?

In: Technology

9 Answers

Anonymous 0 Comments

There were theoretical advances in the mathematics and the programming, yes, but the secret sauce of AI is the huge datasets needed to train the models.

The explosion of almost-free, nearly-limitless storage for your photos, writing, and everything else “in the cloud” created enormous nests of data that could be sorted and sold to companies looking to train their robots to do stuff like “draw me a dog riding a motorcycle.”

These datasets are also largely available to research students and others looking to build projects around AI. Some of it is free.

Anonymous 0 Comments

The lack of meaningful regulation of AI and ML. The corporations figured they could scrape the web and use all that data without any repercussions. They were right. Current US legislators are woefully out of touch with tech; always have been, always will be. Add to that lobbyists who effectively write the laws on which legislators vote. Finally, the tech companies have ignored copyright law and intellectual-property ownership, and capitalized on the glacial pace at which law and regulation are applied to them.

All of this is by design, mind you: it began with Google being positioned as the premier search engine, Amazon developing analysis of buying patterns, Facebook/Meta getting people used to the ‘convenience’ of keeping in touch while giving away their rights as individuals, and all the other software and media companies making sure subscription services (i.e., you own a right of access, not the thing itself) became the norm.

Anonymous 0 Comments

What helped our research is that people deliberately went out, grabbed truckloads of data, and aggregated it together free to use. Google “my thingy dataset” and you’ll likely find at least a semi-usable dataset to train AI on.

If you get a nice prototype, the next step is to gather more data yourself or contract people to gather it for you. Given how much “stuff” we’ve collected over the years of the internet (and we keep collecting more), AI will only become better trained with time.

Another important thing is labeling. Image tags like “woman”, “dog”, and “blue” make it possible to download massive numbers of images matching those tags. Labels are essential for training AI to recognize particular characteristics, and many websites now attach tags to all of their data.
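In code, a labeled dataset is nothing more than pairs of data and tags. A minimal sketch (filenames and tags invented purely for illustration) of how tags let you pull together training examples with a shared characteristic:

```python
from collections import Counter

# Toy labeled dataset: each entry pairs an image reference with a tag.
# All filenames and tags here are made up for illustration.
dataset = [
    ("photo_001.jpg", "dog"),
    ("photo_002.jpg", "dog"),
    ("photo_003.jpg", "woman"),
    ("photo_004.jpg", "blue"),
]

# Tags let you gather every example showing a given characteristic...
dogs = [img for img, tag in dataset if tag == "dog"]

# ...and see how many training examples each label contributes.
counts = Counter(tag for _, tag in dataset)
```

A real pipeline would then feed those (image, label) pairs to a model; the point is just that the tag is what turns raw files into usable training data.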

Anonymous 0 Comments

Two factors.

The first and biggest is the publication of the [transformer model](https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)) paper. This paper was published in 2017. The recent AI technology breakthroughs are variants of this model.
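For the curious, the core operation of that paper is scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch, with toy shapes and random values chosen purely for illustration:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted mix of the value vectors

# Three tokens, four-dimensional embeddings (toy numbers).
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = attention(Q, K, V)  # shape (3, 4): one mixed vector per token
```

Every output vector is a learned blend of the inputs, which is what lets these models relate any word in a sequence to any other in a single step.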

Not long after, researchers noticed that transformer models scaled extremely well with training data compared to other models. They also noticed that these models both require and respond well to human feedback, which is needed to steer them into a workable state. It took a few years to build the tooling needed both to feed in internet-scale training data and to set up large teams of humans to provide reinforcement learning from human feedback.

Anonymous 0 Comments

Two things: data, and computing power.

DATA – People have uploaded images for as long as there’s been an internet. But in 2006, an academic called Fei-Fei Li started gathering and labelling them. By 2009, she and her team had a dataset of millions of images (it eventually grew to 14 million), along with digital labels that said what was in each picture. That meant they could start a competition to see who could write the best algorithm for predicting what was in each picture. The competition launched in 2010. And now there were stakes, and bragging rights, and a reason to really delve into those weird algorithms the CS community had ignored (because, they believed, AI would not happen in our lifetime). The dataset is called ImageNet, and it really kicked off the innovations in machine learning.

COMPUTE – It’s all very well to have data, and even algorithms, but they won’t do you any good unless you have a huge, beefy computer: not a desktop, but something much more powerful. Fortunately, in 2006, the online bookshop Amazon.com started renting out its spare computing power to whoever wanted it. You could open a browser, choose how much RAM and computing power you wanted, and get an extremely fast machine ready to execute your program, then stop renting it the second your program finished running. So rather than buying a million-dollar supercomputer and setting it up, you could just rent one for a few dollars. This meant machine learning became available to everyone who understood it, not just people with a lot of money.

As a result of easy access to data and compute, all kinds of breakthroughs started happening starting in 2011.

Anonymous 0 Comments

I’m really not sure there have been extraordinary breakthroughs or avalanches in AI yet.

So far it’s been a steady progression for the most part.

When a real AI breakthrough happens, you’ll know it: it will be a watershed moment that changes the status quo completely. “Strong AI”, or AGI, or something like that… that would be a breakthrough.

Anonymous 0 Comments

Powerful hardware designed for this task (combined with cloud computing for researchers), tons of data to scrape and process, and new and improved algorithms.

Anonymous 0 Comments

Processors did get significantly faster, partly because the cryptocurrency mining industry incentivized chipmakers to create exceedingly powerful chips; that’s how Nvidia went from making gaming GPUs to crypto-mining hardware to AI chips.

Anonymous 0 Comments

The big architecture change from recurrent neural networks to transformers in the paper by Vaswani et al. is a pretty important milestone imo.