There are all sorts of facial recognition algorithms out there, but most rely on certain key aspects of your facial geometry. A common one is the distance between your eyes, or the distance from one eye to the bridge of your nose. Others include the width of your nose, or the distance from your upper eyelid to your lower eyelid. The algorithms are designed to be tolerant of things like glasses or changes in facial hair. Interestingly, there are [makeup and hair styles designed to defeat facial recognition](https://cvdazzle.com/). You’ll notice that they tend to block an eye or the nose, add bumps to the face, and/or make the face seriously asymmetric.
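To make the "key distances" idea concrete, here's a minimal sketch of turning facial landmarks into a set of measurements. The landmark names and pixel coordinates are made up for illustration; a real system would get them from a face landmark detector.

```python
import math

# Hypothetical 2D landmark positions (pixel coordinates) -- the names
# and values here are illustrative, not from any real detector.
landmarks = {
    "left_eye": (120, 140),
    "right_eye": (200, 142),
    "nose_bridge": (160, 150),
    "nose_left": (145, 200),
    "nose_right": (175, 200),
}

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# A few of the geometric features mentioned above.
eye_to_eye = dist(landmarks["left_eye"], landmarks["right_eye"])
eye_to_bridge = dist(landmarks["left_eye"], landmarks["nose_bridge"])
nose_width = dist(landmarks["nose_left"], landmarks["nose_right"])

# Dividing by the eye-to-eye distance makes the features scale-invariant,
# so the same face photographed closer or farther away still matches.
features = [eye_to_bridge / eye_to_eye, nose_width / eye_to_eye]
```

Normalizing by one reference distance is why the system doesn't care how far you are from the camera: only the *ratios* of your facial geometry matter.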
You can liken Face ID to fingerprint reading. The methodology is measuring distances between points of interest. On a fingerprint, it’s the distance between identifiable points arranged in a pattern. Face ID is similar, but it projects a point cloud onto your face and measures distances too. This time it’s things like iris width, the distance from the bridge of the nose to the mouth, the width of the nose, cheekbone position, etc.
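Once those distances are measured, matching comes down to comparing them against the values saved at enrollment, with some tolerance for noise. A toy sketch (all feature values and the threshold are invented for illustration; real matching is far more sophisticated):

```python
# Hypothetical normalized distance measurements for one face.
enrolled = [0.515, 0.375, 0.290]   # saved when the face was enrolled
fresh    = [0.520, 0.371, 0.305]   # same measurements from a new scan

def matches(a, b, tolerance=0.05):
    # Accept if every measured distance is within tolerance of the
    # enrolled value; real systems use statistical models instead of
    # a simple per-feature cutoff.
    return all(abs(x - y) <= tolerance for x, y in zip(a, b))

print(matches(enrolled, fresh))            # True: small differences are fine
print(matches(enrolled, [0.6, 0.4, 0.2]))  # False: geometry too different
```

The tolerance is the knob that trades convenience (unlock despite lighting, expression, stubble) against security (don't unlock for a similar-looking face).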
Face ID has enough points of measurement that sunglasses don’t obstruct all of them. Additionally, the infrared camera can still see through them some of the time. A beard probably doesn’t cover much of what they’re interested in.
When Apple announced Face ID with the iPhone X, they explained that it would track changes to your face over time, so that something like growing a beard would not affect performance of the feature.
I’m guessing that they also trained the system to understand what glasses look like and ignore them when scanning.