Mobile-first? Passé, says Google. The future is being driven by artificial intelligence.
“Mobile made us reimagine every product we were working on,” Google CEO Sundar Pichai said Wednesday at Google’s I/O developer conference in Mountain View, Calif. “Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems.”
The world we’ll soon be living in, according to Google, is one in which virtual assistants are clever enough to proactively suggest that you leave early to beat unexpected traffic so you make it to your daughter’s soccer game on time. In this world, Google Home recognizes your voice so that when you say, “Call mom,” it gets your mother on the phone and not your mother-in-law.
It sounds seamlessly utopian, with the potential for dystopian undertones. But does advertising have a role to play in that world?
It can, but not in its current form, said Paulo Michels, VP of engineering at Grey-owned app development agency ArcTouch.
“We see assistants like Google and Alexa act as a concierge, one that can offer users timely information based on context from data via the web and applications,” Michels said. “In terms of ad monetization, CMOs will have to adapt, and that’s already happening.”
In other words, brands must create a two-way contextual experience rather than a one-way marketing communication – like conversing with Google Assistant in natural language to place an order at a restaurant, rather than firing up a branded app.
All of that subsequent communication – checking order status, re-ordering or looking at past orders – would happen in Google Assistant.
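What might that look like under the hood? A purely illustrative Kotlin sketch is below: the intent names, the Order type and the OrderService backend are invented for this example and are not part of any Google API, but they show how a conversational request maps onto the same ordering system a branded app would use.

```kotlin
// Hypothetical fulfillment logic for an Assistant-style conversation.
// None of these types or intent names come from Google's APIs; they stand in
// for whatever ordering backend a brand already runs behind its app.
data class Order(val id: String, val status: String, val items: List<String>)

interface OrderService {
    fun latestOrder(userId: String): Order?
    fun reorder(orderId: String): Order
}

fun handleIntent(intent: String, userId: String, orders: OrderService): String =
    when (intent) {
        "check_order_status" -> {
            val order = orders.latestOrder(userId)
            if (order == null) "You don't have any open orders."
            else "Your order ${order.id} is ${order.status}."
        }
        "reorder_last" -> {
            val last = orders.latestOrder(userId)
            if (last == null) "I couldn't find a past order to repeat."
            else "Done. I've reordered ${last.items.joinToString()} as order ${orders.reorder(last.id).id}."
        }
        else -> "Sorry, I can't help with that yet."
    }
```

The point of the sketch is that the conversational surface is thin; the brand’s existing order system does the real work, whichever front end the request comes from.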
This type of interaction is representative of the shift from mobile-first to AI-first. “We’re at a new industry inflection point,” said Tom Edwards, chief digital officer at Epsilon. “This means that the role the app plays will become more of a middle layer versus a primary destination.”
And voice and vision will become the de facto entry points.
Google Lens, announced at I/O, is Google’s attempt to make the visual world searchable. Lens recognizes exactly what objects are in a photo. If users point their camera at a storefront, for example, Lens will identify it and provide other helpful info, such as opening hours, reviews, ratings and a phone number.
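Google hasn’t said what powers Lens under the hood, but developers who want to approximate that kind of storefront recognition today could reach for the separate Cloud Vision API. The Kotlin sketch below is an approximation, not Lens itself: the endpoint is real, while apiKey, photoPath and the idea of mapping the response to business listings are placeholders.

```kotlin
// A rough stand-in for Lens-style recognition using the Cloud Vision REST API.
// This is not how Lens itself works; it just illustrates sending a photo for
// landmark and text detection and getting structured results back.
import java.io.File
import java.net.HttpURLConnection
import java.net.URL
import java.util.Base64

fun annotateStorefront(photoPath: String, apiKey: String): String {
    // Encode the photo so it can travel in the JSON request body.
    val imageBase64 = Base64.getEncoder().encodeToString(File(photoPath).readBytes())

    // Ask for landmark and text detection -- roughly the "what is this place?"
    // question Lens answers when pointed at a storefront.
    val requestBody = """
        {"requests":[{
            "image":{"content":"$imageBase64"},
            "features":[{"type":"LANDMARK_DETECTION"},{"type":"TEXT_DETECTION"}]
        }]}
    """.trimIndent()

    val url = URL("https://vision.googleapis.com/v1/images:annotate?key=$apiKey")
    val conn = (url.openConnection() as HttpURLConnection).apply {
        requestMethod = "POST"
        doOutput = true
        setRequestProperty("Content-Type", "application/json")
    }
    conn.outputStream.use { it.write(requestBody.toByteArray()) }

    // The JSON response lists detected landmarks and text; a real integration
    // would parse it and look up hours, reviews and phone numbers separately.
    return conn.inputStream.bufferedReader().readText()
}
```

A production integration would still need to join whatever names the API detects against a places database to surface the hours, reviews and phone numbers that Lens returns directly.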
The integration of computer vision with Google Assistant and the launch of Lens will “actually open up new monetization opportunities as physical locations and objects become the connection point into contextual and predictive advertising,” Edwards said.
Creative brands could use Lens and Assistant to anchor a narrative around a particular experience.
“For example, you see a marquee with a band name, [and] Google Lens recognizes the band, recommends music and potentially a brand integration that has a halo campaign tied to music and also shows a connection to a transaction,” Edwards said. “Ads could be integrated based on computer vision mapped to contextual inventory.”
Although Assistant is already available on more than 100 million Android devices (iPhone availability is coming soon), it’s still early days for super-seamless Assistant-driven life. But that is the world Google, Amazon, Facebook, et al. are looking to build.
“We definitely see a future with, as Google put it, much more ‘fluid experiences,’” Michels said. “There will be tighter integrations between Android apps and the Google Assistant, with the Assistant becoming an increasingly important frontend to the digital world.”
In addition to Lens and payments through Assistant, Google made an assortment of other announcements at I/O, including:
Third-party attribution for apps: A new App Attribution Partners program will allow developers to more easily use third parties to measure Google ad performance. Partners include TUNE, Apsalar, adjust, AppsFlyer, Kochava, Adways and CyberZ.
A makeover for AdMob: Google introduced a redesign of its mobile ad network, AdMob, including an easier-to-use interface and more detailed user insights, such as time spent in app, in-app purchases and ad revenue data, all in one place when developers link their AdMob account to their Firebase account.
New features for Firebase: Firebase is Google’s cloud-based platform for helping developers more easily build apps. Google will make Crashlytics, which came along with its January acquisition of Twitter’s dev toolkit, Fabric, the primary crash reporting product in Firebase. Google is also launching free performance monitoring tools in beta within Firebase, as well as improved analytics; a rough sketch of what these hooks look like in code follows below.
Android Instant Apps for all: Google launched Android Instant Apps in beta last year, letting users stream apps without having to install them, but there hasn’t been much news since. Now Google is making the Instant Apps SDK available to all developers, which should mean more adoption. Early users include Vimeo and real estate app HotPads.
New ads in Google Play: Developers will now be able to place ads on the home and app listing pages in Google Play through Universal App Campaigns, a promotional feature that allows developers to scale their campaign reach across multiple Google properties, including Play, Search, YouTube, Gmail and the Display Network. Google also rolled out expanded targeting features to help developers reach specific users based on parameters such as cost-per-acquisition and target return on ad spend.
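For developers, the Firebase pieces above mostly surface as a few SDK calls. Here is a minimal Kotlin sketch of a custom performance trace plus a non-fatal error logged to Crashlytics; the trace and metric names are made up, and the Crashlytics entry point shown is the Fabric-era one, since the crash-reporting integration was still migrating into Firebase at the time.

```kotlin
import com.crashlytics.android.Crashlytics           // Fabric-era Crashlytics artifact
import com.google.firebase.perf.FirebasePerformance

// Illustrative only: "load_menu" and "items_loaded" are invented names, and the
// metric method has been renamed across firebase-perf SDK versions.
fun loadMenuWithMonitoring(fetchMenu: () -> List<String>): List<String> {
    val trace = FirebasePerformance.getInstance().newTrace("load_menu")
    trace.start()
    return try {
        val items = fetchMenu()
        trace.incrementMetric("items_loaded", items.size.toLong())
        items
    } catch (e: Exception) {
        // Report a non-fatal error to Crashlytics rather than crashing the app.
        Crashlytics.logException(e)
        emptyList()
    } finally {
        trace.stop()
    }
}
```

Both the trace data and the logged exceptions surface in the Firebase console, alongside the improved analytics mentioned above.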