The Atlantic: Computers Are Getting Better Than Humans at Facial Recognition
The influence of algorithms is nothing new. They shape a lot of what we perceive online. When used in wearable devices that shape our perceptions of the world around us, algorithms can have a profound impact. For example, a device that reads facial expressions to assess moods could affect how you approach your boss, or whether you think your significant other is mad at you. Or a device that hides stressful visual stimuli could remove an annoying ad on your subway commute, but it could just as easily remove a helpful PSA. As wearables do more to reshape our realities, the way we perceive the world will become increasingly shaped by the algorithms that govern those devices.

Not only does that make our perceptions increasingly dependent on algorithms; it also makes them increasingly dependent on the people who write those algorithms. My hearing aids use increasingly complex algorithms. Although a trained audiologist can adjust the devices to my unique needs, as the algorithms become more complex, the opportunities for customization become paradoxically more limited. My hearing aids, for example, have 20 independently adjustable audio channels, and while my audiologist can adjust each one, he usually adjusts them in groups of 6 or 7 channels at a time. If consumer wearables don't offer significant opportunities for customization (or provide access to an expert who can help customize the experience), users will be left even more dependent on the default algorithms.
3. Wearables will fail invisibly.
The more we rely on wearables to interpret the outside world for us, the more critical it becomes for those devices to communicate their failures. And the more seamless the experience of wearables becomes, the harder it is to know when something isn't working as intended.

In some cases failures are obvious: if my hearing aid doesn't turn on, I can take steps to address the issue. In other cases, failure is less obvious. At a meeting a few months ago, I was sitting near a loud air conditioner that made it difficult to hear the people across the table. I knew my hearing aids should reduce the background noise, but because the aids produce sounds using complex, personalized algorithms, I had no way of knowing whether they were malfunctioning or whether the air conditioner was simply too loud. The more personalized the device and the subjective experience it creates, the harder it is to know when things are going wrong.

Future wearables will likely do incredibly complex things, and when the results are unexpected we may trust that the device knows best, as if it were privy to some secret knowledge. But sometimes it will just be wrong. Identifying whether what we see or hear is the device functioning properly, the outcome of an inscrutable algorithm, or simply a failure may be quite challenging.
4. Wearables will record everything.
If failures are hard to detect, the solution is just as challenging: pervasive recording. The more the behavior of wearables depends on context and inputs, the more troubleshooting requires data collection. After a plane crash, one of the first things investigators look for is the "black box" flight data recorder, because it is often impossible to reconstruct what went wrong without also knowing things like the airspeed, the throttle, and the position of the flaps and gear. Troubleshooting wearables presents many of the same challenges.

When I go to my audiologist, I can tell him that I didn't think my hearing aids worked correctly at a noisy restaurant a few weeks ago. But without a record of the noisy environment and the sound I heard from the aids, he can only guess at what happened. For the user, this trial-and-error form of troubleshooting can be frustrating, especially when it involves multiple trips to the audiologist for readjustments.

Until recently, the idea of storing gigabytes of data on a hearing aid would have been absurd. The devices didn't have sufficient storage, and persistent recording would sap the already-limited battery life. But the newest hearing aids now record certain types of data for diagnostic purposes. If I raise or lower the volume on the aids, the device records information about the new setting and lets my audiologist download the data at a later date. As data storage becomes cheaper and power efficiency improves, the collection of additional data could help the device be better fitted to my needs and enable better troubleshooting.

The same drive toward additional data collection will happen in consumer wearables as well. How do you know if your mood-identifying glasses are working correctly? That requires knowing both the input (the image of someone's face or their voice) and the output (the identified mood).
It would be easy to store still images of faces for diagnostic purposes and troubleshooting, and just as easy to upload them to the device manufacturer to help improve its algorithms.

In some cases, storage may not even be necessary, as consumer wearables might transmit everything in real time to centralized servers for processing. With limited processing power and battery life, wearables might offload computationally intensive processing to centralized computers. This is what Apple does (or used to do) with Siri, where at least some analysis of your voice request is processed on remote Nuance servers. Although this enables more complex analysis than small wearables might be able to do otherwise, it also creates greater privacy concerns as more data is transmitted to and stored by others.

* * *
When I got my first pair of hearing aids, they were large and analog, and my audiologist adjusted their sound output using a small screwdriver. My hearing aids today are so small they fit invisibly in the ear canal, and my audiologist adjusts them wirelessly by computer. The pace of progress has been astounding, and I have no doubt that progress has changed my life for the better in significant and concrete ways.

The price of progress, however, is complexity. Older hearing aids had limited customization, altered sounds in very basic and predictable ways, failed in obvious ways, and didn't collect data. Now things are different. The endless customization available in new aids creates more opportunities for mistakes. The complex algorithms make it harder to diagnose problems. The total substitution of experience stifles attempts to identify errors. And increasing data collection means hearing aids may soon have to grapple with thorny issues of privacy.

The same holds true for consumer wearables. If they follow the path of hearing aids, future generations of wearables will be more immersive, more complex, more difficult to troubleshoot, and more pervasive in their data collection. As long as we see wearables as toys or luxury goods, it is easy to write off these challenges. But there is a real opportunity for wearables to improve the lives of many in substantial ways, just as they have improved my life since 1986. To realize those improvements, we cannot ignore these trends, and we must take wearables seriously as the indispensable tools they will soon become.