What is it about Face ID that requires portrait orientation
March 18, 2025 1:48 PM
Modern phones have accelerometers and gyroscopes that continuously collect data about their motion and orientation in space. When I show my mug to my phone, I'm required to hold it upright in portrait orientation (i.e., with the visible and IR camera package at the top of the device, face pointed at the screen). If the phone has the hardware and software needed to transform the visible/IR scan data appropriately (applying at least a rotation), why is this fixed positioning still a requirement?
Are there citations of research into this suggesting that applying transformations to the scan data introduces errors that reduce accuracy, that sort of thing?
I'm curious to know if there's an insider perspective from someone who does this for a living, who can explain why this is still a necessity on current devices.
Not so much looking for supposition or guesses. (I've got loads of my own theories!) Thanks for answering.
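To make the question concrete: here is a minimal sketch, in Swift, of what "applying a rotation operation" based on the accelerometer could look like. The gravity-sign conventions and the idea of snapping to the nearest 90° are assumptions for illustration, not Apple's actual Face ID pipeline.

```swift
import Foundation

// Sketch only: one way a device *could* use its gravity vector to
// normalize a depth frame to upright before face matching. The axis
// conventions below follow CoreMotion's device frame (x right,
// y toward the top of the screen, gravity pointing down), but the
// exact mapping to the TrueDepth sensor's orientation is an
// assumption, not Apple's documented behavior.

enum FrameRotation {
    case deg0, deg90CCW, deg180, deg90CW
}

// Snap the gravity reading to the image rotation that would bring a
// world-upright face back to upright in the frame.
func uprightingRotation(gravityX: Double, gravityY: Double) -> FrameRotation {
    if abs(gravityY) >= abs(gravityX) {
        return gravityY <= 0 ? .deg0 : .deg180      // portrait vs. upside down
    } else {
        return gravityX < 0 ? .deg90CCW : .deg90CW  // device top left vs. right
    }
}

// Rotate a row-major depth map by a multiple of 90 degrees.
func rotated(_ depth: [[Float]], by rotation: FrameRotation) -> [[Float]] {
    guard let first = depth.first else { return depth }
    let rows = depth.count, cols = first.count
    switch rotation {
    case .deg0:
        return depth
    case .deg90CCW:
        return (0..<cols).map { c in (0..<rows).map { r in depth[r][cols - 1 - c] } }
    case .deg180:
        return depth.reversed().map { Array($0.reversed()) }
    case .deg90CW:
        return (0..<cols).map { c in (0..<rows).map { r in depth[rows - 1 - r][c] } }
    }
}
```

In practice, the sensor's mounting orientation relative to the screen would also have to be folded into this mapping, which is part of why the question isn't trivially answered by "just rotate it."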
If Face ID isn't working on your iPhone or iPad Pro
posted by zamboni at 1:56 PM on March 18 [2 favorites]
Face ID builds a depth map using an infrared dot emitter that shines points of light on your face and an infrared camera that reads those dots. There are various ways depth can be inferred from this sensor data; one of them is "depth from stereo," which takes advantage of the fact that the emitter and the camera are in different locations, as in this presentation on Microsoft Kinect, which used similar techniques.
In the iPhone 12, the emitter and camera were 28mm (more than 1") apart. This means that, from the camera's reference frame, the dot emitter illuminates your face from a significantly different angle if you rotate your phone 90°. But in the iPhone 13 and later, the emitter and camera are only 6mm (less than ¼") apart. This smaller baseline probably made it easier or more reliable for the newer devices to correct for rotation.
posted by mbrubeck at 2:11 PM on March 18 [11 favorites]
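For the geometry behind this answer: under a simple pinhole stereo model, a dot's depth z follows from its disparity d as z = f·b/d, so the baseline b sets how large the disparities are. A minimal sketch with a made-up focal length; the TrueDepth system's real calibration is not public.

```swift
import Foundation

// Pinhole stereo model: a projected dot at depth z (meters) shifts by
// a disparity d (pixels) between the emitter's and camera's viewpoints,
// with d = f * b / z. The focal length below is made up for
// illustration, not Apple's calibration.

/// Depth from disparity: z = f * b / d.
func depthMeters(focalLengthPx: Double, baselineM: Double, disparityPx: Double) -> Double? {
    guard disparityPx > 0 else { return nil }  // unmatched dot, or at infinity
    return focalLengthPx * baselineM / disparityPx
}

/// Disparity from depth: d = f * b / z.
func disparityPx(focalLengthPx: Double, baselineM: Double, depthM: Double) -> Double {
    focalLengthPx * baselineM / depthM
}

let f = 600.0  // assumed focal length in pixels

// The same face at 0.5 m under the two baselines mentioned above:
print(disparityPx(focalLengthPx: f, baselineM: 0.028, depthM: 0.5))  // 33.6 px (iPhone 12, 28 mm)
print(disparityPx(focalLengthPx: f, baselineM: 0.006, depthM: 0.5))  // 7.2 px  (iPhone 13+, 6 mm)
```

The roughly 5× smaller baseline shrinks the disparities, and with them the change in illumination angle when the phone is rotated, by the same proportion.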
I believe LED cells in displays are rectangular, _not_ square, which might have something to do with it?
posted by TimHare at 6:18 PM on March 18