Why can’t we use an A.I. to improve current products?


Like, for example, batteries: could you not give a supercomputer sterile physics information, with boundaries meant for making batteries?

Of course a lot of ludicrous and mundane stuff would come out, but also many interesting ideas people haven’t thought of, right? That could spark a revolution in whatever field it’s researching, at least that was my idea.

In: Technology

2 Answers

Anonymous 0 Comments

I think the reason is pretty simple: time.

You could use A.I. to learn new techniques or even invent new things, but, and that’s a big but, current A.I. doesn’t really reason the way we do.

For example, if you set up an A.I. to play a game, it will, after enough sessions, learn proper techniques and even some techniques humans may never have thought of. In fact, this happens all the time and is a serious problem when it comes to A.I. safety. (If an A.I. ever encounters a weakness in the system it’s deployed in that helps it reach its terminal goals, it – will – exploit it. Be it a bug in a game, a flaw in a security system, or even the naive humans who interact with it.)

But that’s not the key part of this discussion; the key part is – after enough sessions.

You see, a game can be simulated millions of times in a short amount of time. You can even run them in parallel on several computer clusters.
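
To give a rough sense of why games are such a convenient training ground, here’s a minimal Python sketch – the toy_game function and its scoring rule are invented purely for illustration – that runs 100,000 completely simulated “sessions” in parallel across CPU cores and just keeps the best score. Real training setups do essentially this, only at a vastly larger scale and with an actual learning algorithm in the loop.

```python
# Minimal sketch: a made-up "game" simulated many times in parallel.
# The game itself (a random walk scored by how close it ends to +10)
# is invented for illustration; the point is that purely simulated
# trials are cheap and trivially parallelisable.
import random
from multiprocessing import Pool

def toy_game(seed: int) -> float:
    """Play one session of the toy game and return its score."""
    rng = random.Random(seed)
    position = 0
    for _ in range(100):                # 100 random moves per session
        position += rng.choice([-1, 1])
    return -abs(position - 10)          # 0 is the best possible score

if __name__ == "__main__":
    with Pool() as pool:                # one worker per CPU core
        scores = pool.map(toy_game, range(100_000))  # 100,000 sessions
    print("best score out of 100,000 sessions:", max(scores))
```

None of that requires touching the physical world, which is exactly what you can’t say about building and testing batteries.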

But how would you do that with physical processes?

Randomly throwing an A.I. at batteries would produce a lot of attempts humans wouldn’t even consider. While that is the whole point – finding new approaches – most of those approaches would simply end in failure.

On top of that, the A.I. would have to be able to alter the machinery it uses as well; otherwise you would have to build a battery-making facility that’s capable of building and testing new designs – fast – at the same time.

So you may think, “can’t we just simulate it?” Well, yes and no. Of course we could simulate it, but the simulation is based on our knowledge of the universe, not the universe itself. Thus the A.I. would, in some sense, be bounded by what is already known to us.

And even then, maybe a completely different approach is necessary instead of a slight deviation. The probability of these major changes would have to be small, though, otherwise the A.I. wouldn’t be better than pure randomness. (Basically, the mutation rate can’t be too high or it wouldn’t find consistent patterns. Just like in evolution: if every generation were dramatically different, lasting structures could never be built up over generations, and nothing would get optimized.)

A lower probability means more time is necessary to eventually reach these “mutations”.
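
Here’s a toy illustration of that trade-off, with everything made up for the sake of the sketch (a “design” that is just a single number, a hidden optimum of 42, the hill-climbing loop): small mutations let the search accumulate progress step by step, while huge mutations mean almost every attempt is wasted, so reaching the same target takes far more generations.

```python
# Toy sketch of the mutation-rate trade-off described above.
# A "design" is just one number and the hidden optimum is 42 –
# both invented – but the behaviour mirrors the argument: small
# changes accumulate steadily, wild changes mostly get thrown away.
import random

def generations_needed(mutation_size: float, tolerance: float = 0.5,
                       limit: int = 1_000_000) -> int:
    """Count generations until the design is within `tolerance` of 42."""
    rng = random.Random(0)
    design, target = 0.0, 42.0
    for generation in range(1, limit + 1):
        candidate = design + rng.uniform(-mutation_size, mutation_size)
        if abs(candidate - target) < abs(design - target):
            design = candidate          # keep only improvements
        if abs(design - target) < tolerance:
            return generation
    return limit                        # gave up

print("small mutations:", generations_needed(1.0), "generations")
print("huge mutations: ", generations_needed(1000.0), "generations")
```

And in a simulation those extra generations are just CPU time; with real batteries, each one is a physical prototype someone has to build and test.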

So in the end, I think it all boils down to time and practicality. It would take too long and cost too much to just randomly throw some A.I. at a task.

Anonymous 0 Comments

Ultimately, it’s because true A.I. doesn’t yet exist. The technology that people now call “Artificial Intelligence” is mostly just super-fast computers with access to a lot of data. They lack individual creativity. In essence, you’d still have to provide the data to the A.I., which means that the A.I. wouldn’t actually be developing the data itself.