There’s been a lot of buzz lately around facial recognition and what exactly that means to consumers. When the average consumer thinks of facial recognition technology, a Jason Bourne-esque scenario probably comes to mind, with high-tech devices constantly scanning crowds, identifying individuals, accessing vast databases, and using that recognition information to some nefarious end. Present use of facial recognition technology is far more benign and useful to consumers than some would have you think. The emergence of new photo sharing and storage apps like Google Photos and Facebook’s Moments demonstrates that current facial recognition tools are better suited to helping users categorize and share their photos, rather than populating an ominous law enforcement or commercial database. As such innocuous uses of facial recognition become more and more common, consumers’ comfort with and understanding of the technology will grow correspondingly. The next step is ensuring that privacy-focused regulators and legislators are able to develop frameworks that enable this growth and adapt as consumer expectations and facial recognition technology change.
The Google Photos app was recently released as a tool to streamline the photo backup and sharing process. It boasts free, unlimited photo storage and aims not only to keep similar events grouped together, but also to scan, identify, and easily share different photo subjects with others. Part of the identification process involves scanning and recognizing people, places, and things. However, the app doesn’t see faces the way humans do. Humans see faces and recognize specific people, whereas a computer scans an image and recognizes colors, patterns, and shapes. This process is called facial detection. Facial recognition goes one step beyond facial detection: it compares “known faces” or patterns against newly uploaded faces to see if there is a probable match.
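The detection-then-recognition pipeline described above can be sketched in a few lines of Python. The feature vectors, names, and threshold below are illustrative stand-ins, not Google’s actual implementation; real systems derive high-dimensional embeddings from neural networks, but the comparison step works on the same principle.

```python
import math

# Hypothetical face "templates": tiny hand-made vectors standing in
# for learned embeddings (e.g. face width, eye distance, ...).
known_faces = {
    "alice": [0.42, 0.31, 0.77],
    "bob":   [0.15, 0.62, 0.24],
}

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(face_vector, threshold=0.2):
    """Compare a newly detected face against known templates.

    Returns the best-matching name if the distance falls under the
    threshold (a "probable match"), otherwise None.
    """
    best_name, best_dist = None, float("inf")
    for name, template in known_faces.items():
        d = euclidean(face_vector, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist < threshold else None

# Detection has already located a face and extracted its features;
# recognition decides whether it matches anyone we know.
print(recognize([0.40, 0.33, 0.75]))  # close to "alice"
print(recognize([0.90, 0.90, 0.90]))  # no probable match -> None
```

The key point the sketch illustrates is that recognition never consults an on-file photograph directly; it measures how close a new face’s features sit to stored patterns and accepts only sufficiently close matches.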
Facebook’s Moments utilizes similar technology. Moments, launched recently, helps users organize and share photos with frequently photographed friends. When photos are uploaded to the app, Facebook’s technology scans for a face in the photo. If there is a face, the app compares features and patterns from your profile picture and other tagged pictures against the newly uploaded photo. Like Google Photos, Moments looks for unique characteristics and patterns, such as the shape of your face or the distance between facial features, to recognize and connect profiles. In essence, both apps use algorithms to build a system of recognizable models or templates for comparison, rather than referring to an on-file photograph. This process of connecting and “recognizing” faces generates tag suggestions for the subjects of the uploaded photo—but only if the photo uploader and the picture subjects are friends.
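The friends-only tag suggestion policy described above can be sketched as a simple filter applied after recognition. The friend graph, user IDs, and function names here are hypothetical illustrations, not Facebook’s actual API; the sketch assumes recognition has already matched the faces in an uploaded photo to user IDs.

```python
# Hypothetical friend graph: who is friends with whom.
friends = {
    "uploader": {"alice", "bob"},
}

def tag_suggestions(uploader, recognized_ids):
    """Suggest tags only for recognized people who are friends
    of the uploader; everyone else is silently dropped."""
    uploader_friends = friends.get(uploader, set())
    return [uid for uid in recognized_ids if uid in uploader_friends]

# "carol" was recognized in the photo but is not the uploader's
# friend, so no tag is suggested for her.
print(tag_suggestions("uploader", ["alice", "carol", "bob"]))
```

The design choice worth noting is that the friendship check happens after recognition: the system may well recognize a non-friend’s face, but the tag suggestion is simply never surfaced.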
Both apps are a win for consumers. The idea is that quick recognition helps streamline the social sharing process for group events. Whereas people once might have followed a social event like a wedding or reunion with an endless email chain of shared pictures (or exchanged CDs or flash drives), users can instead allow Google Photos or Moments to group and categorize photos into events based on the subjects of the photos and additional metadata. Given that social networking sites and mobile devices are already the chief digital photo curation tools used by consumers, extending their existing functions via facial recognition-based apps seems like the sort of harmless, pro-consumer innovation that ought to be encouraged.