What does it mean when people say that not even the creators of large neural networks or AIs like ChatGPT or Midjourney fully understand how they work?


I’ve heard this sentiment a fair bit in recent years and I’ve never been able to find a good answer online.



Anonymous 0 Comments

Imagine you have a fence that has 3 numbered holes in it. You want your dog to go through hole 2.

So, you train your dog by giving it a doggy snack after it goes through hole 2, and eventually it learns to do that.

Well, now imagine you have thousands and thousands of fences with numbered holes. Eventually, your dog starts going to the hole you want, and you realize it doesn’t even really need the treats anymore. It has figured out some way to determine which hole you want it to go through from you only saying “go.”

You don’t really know why it’s happening, but you go with it. But then your dog starts to use those same cues to do other things. Occasionally you have to whip out the doggy snacks, but you keep adding more and more complex fences: instead of a fence with 3 holes, your dog can now pick 1 hole out of 1,000. And then 20,000. You can’t figure out how it does it, but it’s doing what you want.

So you continue to test, and provide the occasional doggy snack, but overall the dog is doing most of the work based on what it has found to work in the past. And it keeps getting better and better.

This is why we say we don’t understand exactly why AI works: it is making “decisions,” just like your dog in the story above, but we don’t know exactly what internal cues it picked up on or what reinforced those decisions.
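For the curious, the “doggy snack” idea can be sketched in a few lines of code. This is a minimal toy illustration (a single artificial neuron trained with the classic perceptron rule on a made-up task), not how ChatGPT or Midjourney actually work; all the names and numbers here are invented for the example. The key point is the last line: after training, all you can inspect is a pile of learned numbers, not an explanation.

```python
import random

random.seed(0)

# Toy task: inputs -> the "hole" (0 or 1) we want picked.
# Here the hidden rule happens to be logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# The "dog's brain": two weights and a bias, started at random.
w = [random.random(), random.random()]
b = random.random()

def predict(x):
    """Pick hole 1 if the weighted sum is positive, else hole 0."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training: whenever the wrong hole is picked, nudge the weights
# slightly toward the right answer -- that's the "doggy snack."
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1], matching the targets
print(w, b)  # ...but the weights are just numbers, not an explanation
```

Now scale this from 3 numbers to hundreds of billions, and you get the problem: every individual nudge is simple and well understood, but the final pile of numbers that results from billions of them is not something anyone can read a “reason” out of.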

EDIT: NSTickels did a better job explaining this than I was able to above.
