Hands cost extra to paint in commissioned portraits because they were difficult. It’s reasonable to assume that 1) they’re also difficult for the pattern-recognition software to get right, 2) the examples it learns from are poor because there are few masterworks to learn from, and 3) there are fewer references in total than for other body parts.
Speaking from an art perspective, humans are just very critical of hands. We have a very comprehensive idea of our hands (“to know something like the back of your hand”), and they’re a very complex, weird shape compared to most other major features. Artists have a hard time rendering them as a result, because hands need to be perfect to be acceptable.
There was a YouTube video that explained it so well… can’t remember if it was Steve Mould or Sabine or someone else. Anyway, the AIs that have difficulty with hands are the ones trained only on 2D pictures, which lack a model of how things are arranged in the real, 3D world.

Those AIs recognize a human face whether it’s a front picture (2 eyes, 2 ears) or a profile (1 eye, 1 ear), BUT they don’t know that the difference between the two pictures is a rotation in 3D space (see the sketch at the end of this answer).

If you follow subs like r/confusingperspective you’ll soon realize there are plenty of pictures where people seem to have 3 legs or 3 hands. But there are fewer confusing-perspective pictures of heads, maybe because when we take pictures we make sure the head and body are prominent in the shot.

So, a lot of confusing perspective around hands and feet… confusing to us, humans who have knowledge of the 3D world. Now imagine you’re software that only analyzes pixels in 2D.
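To make the “rotation in 3D space” point concrete, here’s a minimal sketch in Python/NumPy (the coordinates and helper names are made up purely for illustration, not taken from any real training pipeline). The same 3D “face” produces a two-eyes-two-ears image head-on and a one-eye-one-ear image in profile; the only difference between the two pictures is a rotation matrix applied before flattening to 2D.

```python
import numpy as np

# Toy "face" landmarks as 3D points (x, y, z). Made-up coordinates,
# purely for illustration.
face = np.array([
    [-0.3, 0.2, 0.5],   # left eye
    [ 0.3, 0.2, 0.5],   # right eye
    [-0.6, 0.0, 0.0],   # left ear
    [ 0.6, 0.0, 0.0],   # right ear
])

def rotate_y(points, angle_rad):
    """Rotate 3D points around the vertical (y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[  c, 0.0,   s],
                    [0.0, 1.0, 0.0],
                    [ -s, 0.0,   c]])
    return points @ rot.T

def project_to_2d(points):
    """Orthographic camera: keep (x, y), throw away depth (z)."""
    return points[:, :2]

front   = project_to_2d(face)                       # head-on shot
profile = project_to_2d(rotate_y(face, np.pi / 2))  # turned 90 degrees

print(front)    # four distinct 2D points: 2 eyes + 2 ears visible
print(profile)  # both eyes collapse onto one 2D point and both ears
                # onto another: the "1 eye, 1 ear" picture, same face
```

A model that only ever sees the flattened 2D outputs observes two very different pixel patterns and never sees the rotation that connects them; a 3D-aware observer sees one object viewed from two angles.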