It’s likely that AI eyes are just as accurate as any other part of the rendered image. But if they look not quite right to you, it’s probably because our brains are highly evolved to watch the eyes of friend or foe, and even the tiniest thing out of place is instantly noticeable to us.
In other words, AI cheeks and AI lips and AI earlobes are probably just as flawed, but our brains aren’t wired to pay as much attention to those, so they get away with it.
[From a thread the other day asking a similar question about why faces seem “off” from AI.](https://old.reddit.com/r/explainlikeimfive/comments/x0vzt3/eli5_why_image_generator_ia_make_human_faces_all)
There are two sides to this: a biological one and a technological one.
The biological side is that humans are very good at detecting humans. If something is close but not quite right, it can fall into what some describe as the “uncanny valley.” This hypothesis isn’t perfect and has its critics, but the general idea is: things that are obviously not human = fine (robots, teddy bears); things that are clearly human = fine; things that are very close but seem “off” = not fine, like some puppets, poorly formed AI photos, bad wax sculptures, poorly done animation, etc.
One theory is that things that seem “off” to us could signal diseased humans, corpses, or other humanoid species that might be a threat.
Eyes are no exception to this; if anything, they’re one of the most important aspects. Watching people’s eyes is a major social cue and threat signal.
The technological side is that image generation is still imperfect. Getting the proportions exact enough not to be even slightly off is hard, and even more so without a human checking “yep, that’s the right facial symmetry,” the way there would be if the face were hand drawn or animated. A model can tick the right boxes (“a nose roughly in the center of the face, two eyes above it about here”) but needs much tighter parameters to strike the balance between producing the exact same face every time and getting just the right mix of “humans are different… but not too different.”
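That balancing act can be illustrated with a toy sketch (this is not how image generators actually work internally; the proportion names and values below are made up for illustration). Jitter each canonical facial proportion by a small random amount and you get faces that are “different, but not too different”; jitter too much and the proportions drift into “off” territory:

```python
import random

# Canonical proportions, expressed as fractions of face width/height.
# These are illustrative placeholder values, not real anthropometric data.
CANONICAL = {
    "eye_spacing": 0.46,   # distance between pupil centers / face width
    "eye_height": 0.50,    # vertical pupil position / face height
    "nose_length": 0.25,   # nose length / face height
}

def sample_face(jitter, rng):
    """Perturb each proportion by up to +/- `jitter` (as a fraction of its value)."""
    return {k: v * (1 + rng.uniform(-jitter, jitter)) for k, v in CANONICAL.items()}

rng = random.Random(0)
same_every_time = sample_face(0.0, rng)   # zero variance: identical face each time
plausible = sample_face(0.03, rng)        # small variance: plausible human variation
uncanny = sample_face(0.25, rng)          # large variance: proportions start to look "off"
```

The point of the sketch: a tolerance of zero collapses every output to one face, while too loose a tolerance produces the slightly-wrong geometry our eyes flag instantly; the acceptable window in the middle is narrow.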
I don’t know a lot about AI, but I have a lot of experience with art, and I have a guess about part of the problem. You could say humans also struggle with eyes. When drawing faces, eyes can be the hardest part for a lot of people. It’s not that an eye is difficult to draw on its own; it’s that even the slightest bit of wonkiness will be extremely distracting and take over the whole picture.
The thing is that we are not as objective as we think about how accurate an image is; we don’t process or evaluate all types of visual information equally. Because of the way our brains are wired, much more processing power goes toward noticing extremely subtle things about a human face. I could do a quick scribble of a tree in your backyard and you’d say “yeah, that’s what that tree looks like.” Someone highly skilled could labor over a portrait of your family member for weeks, and you’d still instantly see every little thing that’s not quite right. Eyes in particular show a lot of very nuanced things about a person and their emotions. We’re an incredibly social animal whose complex interactions rely on our ability to memorize hundreds of different faces. We simply don’t devote that amount of brain-space to telling dogs apart or identifying every different tree in a park. But give two faces just millimeters of difference and we’re like “those people look nothing alike.” If eyes aren’t perfectly aligned, we’ll see the person as cross-eyed, drunk, or somehow compromised. If the lid droops a little we’ll read them as angry; if the eyes are too open, they’re high.
We’re all expert eye and face readers; the computer may just not be up to our standards.
An AI system does not understand the idea of an eye the way you or I do. It knows how to recognize a face, and the features of a face, and it has recognized so many faces and eyes that it can kind of make a picture of one. But it doesn’t understand the structure of a face, the way eyes move and change, track objects, and express emotions.
Without that understanding, it’s very hard to make an image that includes eyes that look right and natural to people who do have that understanding.
The short answer is:
*Because an AI doesn’t actually know what an eye is.*
AI models don’t struggle to create eyes any more than they struggle to create other everyday objects. But while the human brain can look at most things that are almost right and think “eh, close enough,” something that is almost a human eye reads as a sign of deformity at best, and possibly of danger or disease. Given how much information we communicate and receive through subtle eye cues, we are massively overpowered at detecting small differences between how an eye “should” look and one that doesn’t stand up to scrutiny.