Why are hands difficult for AI images?

Anonymous 0 Comments

I have a machine that outputs a number.

20

10

5

16

Guess what the next number is? You know the machine outputs numbers, but you can’t predict which one. You know it’s probably not -8282 or 282829, and you don’t know why I chose those numbers. The same thing happens with AI: it has seen lots of long, skinny, tan sausage shapes, but it doesn’t know how hands actually work, so it doesn’t really know how to draw them.

P.S. The sequence was the Collatz conjecture applied to 20, so the next number would have been 8.
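
If you want to check that for yourself, here’s a quick Python sketch of the rule behind the sequence (halve the number if it’s even, otherwise multiply by 3 and add 1):

```python
def collatz(n, steps=7):
    """Apply the Collatz rule: halve n if it is even, otherwise use 3*n + 1."""
    sequence = [n]
    for _ in range(steps):
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        sequence.append(n)
    return sequence

print(collatz(20))  # [20, 10, 5, 16, 8, 4, 2, 1]
```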

Anonymous 0 Comments

An AI like this creates the image piece by piece. It doesn’t know what the final picture will be; it only looks at a small window of the last pieces it made, then guesses the most probable thing to add next. In the case of fingers, that’s usually… another finger!

That’s why it makes mistakes: it can’t see that it has already drawn a whole hand, because its “vision” doesn’t reach that far.
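
To make that “short vision” idea concrete, here’s a purely hypothetical toy in Python. The piece names and probabilities are made up, and real image models are far more complicated than this; the point is only that a predictor which sees just its last couple of pieces has no way of knowing it has already drawn enough fingers:

```python
import random

# Made-up "model": it only ever sees the last two pieces it drew and picks the
# next piece from probabilities it has memorized. It has no concept of a whole hand.
next_piece_probs = {
    ("palm", "finger"):         {"finger": 0.9, "thumb": 0.1},
    ("finger", "finger"):       {"finger": 0.8, "thumb": 0.1, "edge of palm": 0.1},
    ("finger", "thumb"):        {"edge of palm": 1.0},
    ("finger", "edge of palm"): {},  # nothing more to add
    ("thumb", "edge of palm"):  {},  # nothing more to add
}

def draw_hand(max_pieces=12):
    pieces = ["palm", "finger"]        # the start of the "hand"
    while len(pieces) < max_pieces:
        context = tuple(pieces[-2:])   # the model's entire "vision"
        probs = next_piece_probs.get(context, {})
        if not probs:
            break
        choices = list(probs)
        weights = list(probs.values())
        pieces.append(random.choices(choices, weights=weights)[0])
    return pieces

print(draw_hand())  # frequently ends up with five or more "finger" pieces in a row
```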

Anonymous 0 Comments

An AI creates only piece by piece. It doesn’t know what the final piece will be. It only looks at a little bit of last pieces. Then they guess what’s the most probable thing to add. In case of fingers that’s usually.. another finger!

That’s why they make mistakes: they can’t see they already did a whole hand because their vision is shorter than that.

Anonymous 0 Comments

An AI creates only piece by piece. It doesn’t know what the final piece will be. It only looks at a little bit of last pieces. Then they guess what’s the most probable thing to add. In case of fingers that’s usually.. another finger!

That’s why they make mistakes: they can’t see they already did a whole hand because their vision is shorter than that.

Anonymous 0 Comments

Specifically, the question should be why *the current* AI models have trouble with hands.

Give it a year and it won’t be an issue. Give it five years and people won’t even remember it being a problem. You’ll only remember because you posted it here.

I’m sure that as they worked on the model it made mistakes on other things too, like faces or body proportions, and the developers noticed quickly and applied fixes. Hands just slipped through the cracks, probably because a mangled hand is less noticeable than an eye out of place.

It’s a common misconception that all AI models are slowly learning from us as we use them. That’s not true for most AIs. There are three major categories of machine learning:

**Supervised** – You have a dataset where you already know the outcome, like a labeled set of dog and cat pictures. This is good for things like identifying fake news headlines, or the Reddit bots that tell you how sarcastic a post is.

**Unsupervised** – You have a dataset but no outcomes, just a large set of unlabeled data, and the AI has to detect the patterns and reproduce them on its own. **This is what image AIs use.** It is also good for natural language stuff like ChatGPT.

**Reinforcement** – This is the kind that learns as it goes. You set up the parameters; when it succeeds it is rewarded, and when it fails it is punished. These models tend to start out inaccurate but improve over time. This is what most people picture when they hear “AI”. It is good for playing video games or finding solutions where the answer isn’t obvious, for example the YouTube recommendation algorithm (see the toy sketch below).
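
To make that reward/punish loop concrete, here’s a minimal Python sketch of a toy “two-armed bandit”. It’s a made-up example (not the YouTube algorithm or any real system): the agent never sees labels, only a success or failure signal, and its estimates improve as it goes:

```python
import random

# Two slot-machine arms with hidden success rates; the agent only ever observes
# "reward" (success) or "no reward" (failure), never the true rates themselves.
true_rates = {"A": 0.3, "B": 0.7}
estimates = {"A": 0.0, "B": 0.0}   # the agent's current guesses
pulls = {"A": 0, "B": 0}

for step in range(2000):
    # Mostly exploit the arm that currently looks best, but sometimes explore.
    if random.random() < 0.1:
        arm = random.choice(list(true_rates))
    else:
        arm = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_rates[arm] else 0.0   # success or failure
    pulls[arm] += 1
    # Nudge the estimate toward what was just observed (a running average).
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]

print(estimates, pulls)  # "B" usually ends up estimated near 0.7 and gets most of the pulls
```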

The only way to improve an unsupervised model is to change how it works, by adding layers to the way the AI handles the data, or to train it on a stronger dataset. The latter is already about as good as it can get these days, so improvement comes almost exclusively from the developers. They’ll add a hand-aware layer of some kind and it’ll fix the issue.
