One of the coolest tech things I saw in movies and couldn’t wait to see in the real world was retina scanning. I want to feel like James Bond or some other spy when I unlock my house or go into my office at work. I never thought biometrics could get any cooler than that, but Apple expanded my horizons the other day. The new iPhone X is equipped with state-of-the-art 3D face recognition technology. Using a variety of sensors, it can now create a “faceprint” instead of a thumbprint or retinal pattern.
They’re currently using this in the same way that several other companies have, primarily as a security measure. However, Apple has also developed some preliminary apps that showcase the potential of facial mapping. I believe the boundaries of this technology lie only in our imaginations. The Internet of Things (IoT) and wearables markets have been clamoring for this technology, and now it’s up to designers like you to bring it to fruition.
Facial recognition technology has been around for roughly 50 years but is just now garnering mainstream attention, mainly because previous recognition systems couldn’t match the accuracy of Apple’s new approach. Other companies like Samsung have dabbled in facial recognition but never truly moved into 3D facial mapping. Apple has succeeded where others have failed with an impressive new sensor array that can map faces with great accuracy.
So, how did they do it? The iPhone X uses a variety of sensors to make a 3D map of your face. This kind of multi-sensor fusion is already being used across the IoT, and even in the automotive industry, to interpret data more accurately. In this case, Apple combines a proximity sensor, an ambient light sensor, a standard camera, an infrared dot projector, a flood illuminator, and an infrared camera to capture our features. The most important components are the dot projector, which illuminates the face with roughly 30,000 dots, and the infrared camera, which captures their positions. The result is a 30,000-point map of your face that is accurate enough that Apple claims the odds of someone else unlocking your phone are one in a million. Of course, security is always a concern, so Apple reportedly even worked with Hollywood mask makers to test hacking attempts. It’s unclear whether any of those attempts succeeded, but an average, everyday thief wouldn’t have access to those kinds of resources, so most of us are safe. For full security, they might have to add an iris scanner.
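To get a feel for the geometry involved, here is a minimal Python sketch of how a structured-light system in general can turn observed dot displacements into depth values. The focal length, baseline, and function names are illustrative assumptions, not Apple’s published parameters or actual algorithm.

```python
import numpy as np

# Illustrative structured-light depth recovery (assumed parameters, not Apple's).
# A projector and an IR camera sit a known distance apart; each projected dot's
# apparent shift (disparity) in the camera image encodes its depth.
FOCAL_LENGTH_PX = 1400.0   # assumed IR camera focal length, in pixels
BASELINE_MM = 25.0         # assumed projector-to-camera baseline, in millimeters

def dots_to_depth(disparities_px):
    """Convert per-dot disparities (pixels) into depths (millimeters)."""
    disparities_px = np.asarray(disparities_px, dtype=float)
    # Standard triangulation: depth = focal_length * baseline / disparity
    return FOCAL_LENGTH_PX * BASELINE_MM / disparities_px

def dots_to_point_cloud(pixel_coords, disparities_px, cx=640.0, cy=360.0):
    """Back-project each detected dot into a 3D point (x, y, z) in millimeters."""
    depth = dots_to_depth(disparities_px)
    u, v = np.asarray(pixel_coords, dtype=float).T
    x = (u - cx) * depth / FOCAL_LENGTH_PX
    y = (v - cy) * depth / FOCAL_LENGTH_PX
    return np.column_stack([x, y, depth])

# Example: three observed dots with different disparities map to different depths.
cloud = dots_to_point_cloud([(600, 340), (700, 400), (650, 380)], [70.0, 65.0, 72.0])
print(cloud)
```

Repeat that back-projection for tens of thousands of dots and you get the kind of dense 3D point map that makes face models hard to spoof with a flat photo.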
If you’re not a fan of Apple, you’ve probably already pointed out that companies like Samsung implemented this technology first. While that is somewhat true, there are some distinct differences in the tech. Samsung’s system was powered by Google’s face recognition software, which identifies faces in 2D photos. That security was fairly easy to breach: you just had to hold up a 2D photo. To compensate for the weak facial detection, Samsung also implemented an iris scanner. However cool retinal scanning is, it’s not very practical. A narrow working range and fairly low resolution made that iris scanner good enough for its time, but not for the future. Apple’s system creates a full model of your face and uses artificial intelligence to identify you even if you get a haircut or are wearing glasses.
Retinal scanning is now old news.
Enough about how the tech works. How does Apple plan to use this mapping? Its primary use seems to be security, though Apple has also shown off some more playful applications.
Another major announcement for the iPhone X was the removal of the home button, which previously allowed users to unlock their phones with a thumbprint. That turned out not to be very secure, which is presumably why Apple created this system. The facial recognition system builds a model of your face, stores it on the phone, and compares it to how you look when you try to unlock the device. Apparently, unlocks will work on a sliding scale: Apple could let a face unlock your phone with a 50% comparison match, but require a 90% match for purchases.
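As a rough illustration of that sliding scale, here is a short Python sketch that gates different actions behind different match scores. The thresholds, action names, and function are assumptions for illustration only, since Apple hasn’t published the real values.

```python
# Illustrative sliding-scale match policy (thresholds are assumed, not Apple's).
ACTION_THRESHOLDS = {
    "unlock_phone": 0.50,      # lower bar for simply unlocking the device
    "approve_purchase": 0.90,  # higher bar for payments
}

def is_authorized(action: str, match_score: float) -> bool:
    """Return True if the face match score clears the bar for this action."""
    threshold = ACTION_THRESHOLDS.get(action)
    if threshold is None:
        return False  # unknown actions are denied by default
    return match_score >= threshold

# Example: a 0.7 match can unlock the phone but not approve a purchase.
print(is_authorized("unlock_phone", 0.7))      # True
print(is_authorized("approve_purchase", 0.7))  # False
```

The appeal of this approach is that one biometric reading can serve several use cases, each with a level of confidence appropriate to its risk.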
Beyond security, Apple showed off a fun application that lets animated emoji mimic our facial expressions. While that certainly is fun, I think facial recognition technology has more advanced, machine-learning-driven applications ahead.
Gesture control is the next step after real-time facial mapping.
Facial recognition is not only important because it lets us use our faces as keys, but also because systems like Face ID can read our expressions in real time. That means it’s time to move toward devices that can be controlled by facial gestures.
In the IoT space, this could mean appliances that can be operated with a nod and a wink instead of clunky controls. I recently wrote about IoT tech for senior citizens. What if their TV could read their lips for commands instead of making them remember where the remote is and which buttons to push? People want IoT systems that operate efficiently and easily. They also want their systems to be personal. Imagine if Google Home or Amazon Echo could recognize a face and bring up that user’s settings without the person having to announce their identity.
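To make the idea concrete, here is a minimal Python sketch of how an IoT appliance might map recognized faces and gestures to commands. The gesture labels, user profiles, and command names are hypothetical placeholders, not a real device API.

```python
# Hypothetical gesture-driven IoT controller; gesture labels, profiles, and
# commands are illustrative assumptions, not an existing product interface.
USER_PROFILES = {
    "alice": {"volume": 8, "favorite_channel": 42},
    "bob": {"volume": 3, "favorite_channel": 7},
}

GESTURE_COMMANDS = {
    "nod": "power_toggle",
    "wink_left": "volume_down",
    "wink_right": "volume_up",
}

def handle_frame(recognized_user: str, recognized_gesture: str) -> str:
    """Turn one frame of recognition output into a device command string."""
    profile = USER_PROFILES.get(recognized_user, {})
    command = GESTURE_COMMANDS.get(recognized_gesture, "ignore")
    # Personalize: restore the recognized user's settings when the device wakes.
    if command == "power_toggle" and profile:
        return f"power_toggle (restore volume={profile['volume']})"
    return command

# Example: the camera reports that Alice nodded at the TV.
print(handle_frame("alice", "nod"))        # power_toggle with Alice's settings
print(handle_frame("bob", "wink_right"))   # volume_up
```

The interesting design work is in the recognition layer itself; once faces and gestures arrive as labels, wiring them to device behavior is straightforward.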
Wearables could also benefit greatly from this kind of system. No one wants to enter a passcode on a smartwatch or other wearable, so facial recognition would be a welcome alternative. Gesture control would be even more useful here than in the IoT. People want their wearables to be seamless, functional, and fashionable. It’s not very kosher to talk to your necklace in public, but it might be slightly more acceptable to wink at your watch, or even to operate your gadgets with eye tracking. I know I’m getting slightly off topic here, but aside from telecommunication, eye tracking is the end-all-be-all of seamless control. Facial recognition with gesture control is the first step down that road, and you could be among the first to take it.
The times are changing, and we often find Apple in the lead. Their facial recognition tech is revolutionary for many reasons, but 3D facial mapping is one of the most important. They’ve put together a sensor array that allows our phones to accurately recognize our faces in real time. So far this tech is being used for security and a few other minor applications. However, this is your chance to get ahead of Apple: the IoT and wearables sectors need gesture control. Real-time 3D facial mapping could deliver exactly that, and could expand to include things like hand motions.
When you need to access an easy-to-use PCB layout tool that includes everything needed to build high-quality manufacturable circuit boards, look no further than CircuitMaker. In addition to easy-to-use PCB design software, all CircuitMaker users have access to a personal workspace on the Altium 365 platform. You can upload and store your design data in the cloud, and you can easily view your projects via your web browser in a secure platform.
Start using CircuitMaker today and stay tuned for the new CircuitMaker Pro from Altium.