Pinterest’s Lens feature, first previewed last summer, launched this February. Using computer vision and machine learning, Lens is able to “understand the objects you’re looking at and how they could be useful to you.”
Taking a photo of pasta, for example, returns relevant recipes and cooking suggestions. Whereas an ordinary search might return merely similar information, i.e. more pasta photos, Lens goes a step further by providing related yet still relevant results.
Aside from sparing users the need to type out a search (perhaps this could help combat the rise in carpal tunnel?), Lens lets them search for things they may be unable to describe.
“Traditional search is really great when you have the right words,” said CTO Evan Sharp at a press event last year. However, it “can be tough… to translate information into a few words,” he conceded. In other words, Lens may be great for figuring out just what you want, even when you can’t quite put it into words.
For example, you see a painting you like, but you’re not sure what exactly about it appeals to you. You think back to your high school art history vocabulary but only come up with the word “baroque,” which you’re pretty sure you only thought of because it reminds you of Barack. Taking a picture of the painting with Lens could surface a flurry of similar paintings and spare you that art history vocab rabbit hole.
It could also let advertisers target users with greater precision. Pinterest currently allows companies to advertise their products through “Instant Ideas,” which presents users with related pins based on their current search. However, at TechCrunch Disrupt NY 2017, Pinterest president Tim Kendall announced that “the company would be releasing advertising products that will allow brands to match up ads based on image similarities powered by its visual search tools.”
That is, rather than advertising to consumers when they search for related products (i.e. via Instant Ideas), advertising through Lens would let companies reach consumers at the moment they first become interested in a product and take a photo. For instance, you stop a man on the street to take a photo of his amazing shoes. Rather than asking him verbally, then immediately forgetting the boutique shop in Norway where he purchased them on “the russ,” upon taking the picture you’re immediately presented with an advertisement for where to purchase them — so you do!
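Matching ads “based on image similarities” typically works by encoding each image as a feature vector and retrieving the catalog items whose vectors sit closest to the photo’s. Pinterest hasn’t published its internals, so the following is only a minimal sketch of the general idea; the product names and three-dimensional embeddings are invented for illustration (real embeddings have hundreds of dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors: 1.0 means identical
    # direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings a vision model might produce for catalog items.
catalog = {
    "red one-piece swimsuit": [0.9, 0.1, 0.3],
    "turquoise necklace":     [0.1, 0.8, 0.5],
    "leather boots":          [0.2, 0.3, 0.9],
}

# Hypothetical embedding of the photo the user just took.
photo_embedding = [0.85, 0.15, 0.35]

# Rank catalog items by similarity to the photo; the top item is the
# product (or ad) to show.
matches = sorted(catalog,
                 key=lambda item: cosine(photo_embedding, catalog[item]),
                 reverse=True)
print(matches[0])  # → red one-piece swimsuit
```

At production scale the exhaustive `sorted` pass would be replaced by an approximate nearest-neighbor index, but the ranking principle is the same.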
Lisa Gevelber, a marketing vice president for Google, calls these instances “micro-moments.” They happen “dozens of times a day … [when] people pick up their phones to look up information, research a product, or find a local restaurant or store.”
According to Gevelber, the key to taking advantage of these moments has changed. “The advertising game is no longer about reach and frequency,” she asserts. “Now more than ever, intent is more important than identity and demographics, and immediacy is more important than brand loyalty.”
Lens seems to offer this immediacy, with growing precision. Although its initial release generated recommendations for particular clothing and accessories based on users’ photos, its latest upgrade presents users with even more relevant material:
“The technology can now get really specific: finding you a red, high-waisted one-piece with side cutouts or a chunky, turquoise statement necklace. After it’s pinpointed the style you’re after, Lens goes even further to suggest other items you should pair with the look.”
Lens, however, is not alone in providing this kind of shopping/advertising platform. Craves and ASAP54 are apps that similarly recommend products based on photos consumers upload. Both, though, recommend only fashion-related products and do not appear to be as precise in their image recognition and recommendations. They also lack Pinterest’s 150 million monthly active users.
While Pinterest’s broader visual recognition software and millions of users may seem to provide few opportunities for competition, consumers’ fluctuating loyalty provides no guarantee of success. A Google survey recently reported “51 percent of smartphone owners have bought from a different company than they intended on the basis of information found online.”
And, thanks to Google’s MobileNets software, there could be even more competition for users’ attention. MobileNets is a family of open-source neural-network models for computer vision that other programmers can incorporate into their apps. The models are specifically designed to run on mobile devices and are capable of recognizing “objects and people, along with … popular landmarks.”
This technology does not yet appear to match the specificity of Pinterest’s Lens, as Google currently lists “fine-grain classification…” only “among the possible uses of the program” (emphasis added). But because the programming is open source, it could encourage more groups to adopt, and advance, visual recognition features.
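What makes MobileNets practical on a phone is largely architectural: standard convolutions are replaced with depthwise-separable ones, which split one expensive filtering step into a cheap per-channel filter plus a 1×1 combining step. A back-of-the-envelope calculation shows the savings; the layer shape below is chosen arbitrarily for illustration, not taken from a real model:

```python
# Cost of one convolutional layer, counted in multiply-accumulates (MACs).
# Dk = kernel size, M = input channels, N = output channels,
# Df = output feature-map width/height.
Dk, M, N, Df = 3, 64, 128, 56  # illustrative layer shape

# Ordinary convolution: every output channel filters every input channel.
standard = Dk * Dk * M * N * Df * Df

# Depthwise-separable: one Dk x Dk filter per input channel (depthwise),
# then a 1x1 convolution to mix channels (pointwise).
separable = (Dk * Dk * M + M * N) * Df * Df

print(f"standard:  {standard:,} MACs")
print(f"separable: {separable:,} MACs")
print(f"reduction: {standard / separable:.1f}x")  # ~8.4x for this shape
```

For a 3×3 kernel the theoretical reduction is 1/N + 1/Dk², so roughly an eight-to-ninefold drop in compute, which is what lets such networks run at interactive speeds on a handset.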
While this could mean greater competition for Pinterest, in an increasingly digitized world, where consumers’ attention span is roughly ten seconds, the more micro-moments you’re involved in, the greater your edge.
So, for now, Lens is having a moment.