Specifically, the real question is why *the current* AI models have trouble with hands.
Give it a year and it won't be an issue. Give it five years and people won't even remember it being a problem; you'll only remember because you posted it here.
I'm sure that as they worked on the model, it made mistakes on other things too, like faces or body proportions, and the developers noticed quickly and applied fixes. Hands just slipped through the cracks, probably because a wrong finger is less noticeable than an eye out of place.
It's a common misconception that all AI models are slowly learning from us as we use them. That's not true for most AIs. There are three major categories of machine learning:
**Supervised** – You have a dataset where you already know the outcome for each example, like a labeled list of dog and cat pictures. This is good for things like identifying fake news headlines, or the Reddit bots that tell you how sarcastic a post is.
**Unsupervised** – You have a dataset but no known outcomes, just a large pile of unlabeled data, and the AI itself has to detect patterns and learn to reproduce them. **This is what image AIs use.** It's also good for natural-language stuff like ChatGPT.
**Reinforcement** – This is the kind that learns as it goes. You set up parameters; when it succeeds it gets rewarded, and when it fails it gets punished. These tend to be inaccurate at first but improve over time. This is what most people picture when they hear "AI." It's good for solving video games or finding solutions where the answer isn't obvious. Example: the YouTube recommendation algorithm. (A rough code sketch contrasting the first two follows this list.)
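To make the supervised/unsupervised distinction concrete, here's a minimal toy sketch using scikit-learn. The data is made up and this isn't how any real image model is trained; it just shows that a supervised model is given the answers while an unsupervised one only gets the raw data.

```python
# Toy contrast of supervised vs. unsupervised learning (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))     # 100 samples, 2 features (made-up data)
y = (X[:, 0] > 0).astype(int)     # labels we happen to know in advance

# Supervised: the model is shown the "right answers" (y) during training.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: the model only sees X and has to find structure by itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])

# Reinforcement learning needs an environment that hands out rewards and
# punishments over time, so it doesn't fit in a few lines here.
```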
The only way to improve an unsupervised model like this is to change how it works, by adding layers to the way it handles the data, or to train it on a stronger dataset. The latter is already about as good as it gets these days, so improvement comes almost exclusively from the developers. They'll add some kind of hand-aware layer and it'll fix the issue.
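And to illustrate the "it doesn't learn while you use it" point: in a typical deep-learning framework like PyTorch, generating output is just a forward pass with gradients turned off, so the weights never change. This is a generic sketch, not the code of any specific image model.

```python
# Generic PyTorch sketch: running (inference) does not update the weights.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)               # stand-in for a real trained generator
before = model.weight.clone()

with torch.no_grad():                   # gradients off: nothing can be learned
    out = model(torch.randn(1, 16))     # "using" the model = one forward pass

print("weights unchanged:", torch.equal(before, model.weight))  # True
```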
An AI like this creates an image only piece by piece. It doesn't know what the final picture will be; it only looks at a little bit of what it produced last, then guesses the most probable thing to add next. In the case of fingers, the most probable next thing is usually… another finger!
That's why they make mistakes: the model can't see that it has already drawn a whole hand, because its view of its own output is shorter than that.
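A deliberately silly toy version of that idea (purely illustrative, not how real image generators are implemented): a generator that only remembers the last couple of pieces it drew will happily keep adding fingers.

```python
# Toy "next piece" generator with a very short memory, to show how limited
# context can produce one finger too many. Purely illustrative.
import random

def next_piece(recent):
    """Guess the most probable next piece, seeing only the last few pieces."""
    last = recent[-1]
    if last == "palm":
        return "finger"
    if last == "finger" and random.random() < 0.7:
        return "finger"            # after a finger, another finger is likely
    return "wrist"

random.seed(1)
hand = ["palm"]
for _ in range(8):
    context = hand[-2:]            # the model only "sees" the last 2 pieces
    hand.append(next_piece(context))

print(hand)
print(hand.count("finger"), "fingers")   # often more than five
```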