According to a new survey by Pindrop, 84% of businesses expect to be using voice to serve customers in the next year. Big tech is rising to meet this demand, opening more and more of their voice platforms to third parties.
If you’ve been paying attention to recent events in big tech like Google I/O and WWDC, you may have picked up on a more specific trend in this area: a move toward greater discoverability on voice assistant platforms. Anyone with a mobile app will tell you that discoverability is a crucial and maddening piece of the puzzle; it’s more important than ever to have an app for your brand, but if your users don’t know it’s there, they won’t use it.
As with any new channel, the best tool for discovery is early adoption. Brands that beat their competition to the market will enjoy a head start with users; as a given channel grows in public adoption, more brands join the race, and discoverability becomes increasingly complicated.
When smartphone adoption exploded in the early 2010s, more and more brands saw the value in building an app to meet and serve their customers there, and the early adopters had a significant leg up on those who waited until the app stores were saturated.
We’re in that sweet spot for early adopters of voice technology; the current moment is one with immense opportunity to be first among competitors to implement a voice strategy, but the window is closing. If you haven’t figured out how voice plays into your company’s ecosystem yet, that’s priority #1. We’ve put together a guide for finding ROI in voice to help you get there.
Once you’ve got a roadmap for voice, how do you let your users know? Here’s how to position yourself for discoverability on the biggest voice platforms right now:
Google’s Assistant uses a technique called “implicit invocation,” relying on context clues to determine whether your app can help with a particular user voice query. Brands can register specific Action phrases, which the Assistant treats much like search terms to trigger an invocation. As with traditional Google Search, the recommendation logic remains largely a black box, with no certain way to guarantee the Assistant will suggest your app, but Google does offer best practices that should improve your chances.
Here’s an example of this flow from Google’s documentation:
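To make the idea concrete, here's a minimal sketch of how query patterns for implicit invocation might be declared in an Actions SDK action package, written as a Python dict mirroring the JSON structure. The action name, intent name, and phrases below are all hypothetical; consult Google's documentation for the authoritative schema.

```python
# Hypothetical sketch of an Actions SDK action package (actions.json),
# built as a Python dict. All names and phrases are invented for
# illustration and are not from Google's documentation.
import json

action_package = {
    "actions": [
        {
            "name": "ORDER_COFFEE",  # hypothetical action name
            "intent": {
                "name": "com.example.intents.OrderCoffee",
                # Query patterns the Assistant can match against a user's
                # request to implicitly invoke this Action:
                "trigger": {
                    "queryPatterns": [
                        "order my usual coffee",
                        "get me a coffee from Example Cafe",
                    ]
                },
            },
            "fulfillment": {"conversationName": "order_coffee"},
        }
    ]
}

# Serialize the package the way it would be deployed.
print(json.dumps(action_package, indent=2))
```

The more distinct, well-targeted patterns you register, the more context the Assistant has to match your Action to relevant queries.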
Google also has great analytics tools to help you refine your Action phrases over time by showing what users said to prompt your Action.
Again, your odds for being suggested by the Assistant are much higher at this early adoption stage, before the market becomes more crowded with options (and getting a head start on those analytics wouldn’t hurt either).
For Alexa’s (currently beta) CanFulfillIntentRequest interface, “developers need to tell the system what questions or commands their skill can answer, and what phrases a user might utter to activate their skill. Then, when Alexa hears similar phrases, the assistant will enable what it thinks is the most relevant skill and use it, instead of saying it is unable to help, or doesn’t understand.”
To maintain quality, the machine learning system will also look at a skill’s rating and engagement level before deciding which skill to use, so make sure your skill is well-targeted for compatible requests and optimal user satisfaction. It’s good to get to market early, but not with a poor product!
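As a rough sketch of what this looks like on the backend: when Alexa probes a skill with a CanFulfillIntentRequest, the skill replies with a yes/no/maybe verdict for the intent and for each slot. The helper below assembles that response shape as a plain Python dict, with a hypothetical "drink" slot; treat it as illustrative, not as the authoritative schema from Amazon's documentation.

```python
# Sketch of a CanFulfillIntentRequest response, built as a plain Python
# dict rather than with the ASK SDK. Intent and slot names here are
# hypothetical.
import json

def build_can_fulfill_response(can_fulfill: str, slots: dict) -> dict:
    """Assemble the envelope a skill returns when Alexa asks whether it
    can handle an intent. `can_fulfill` and each slot verdict are one of
    "YES", "NO", or "MAYBE"."""
    return {
        "version": "1.0",
        "response": {
            "canFulfillIntent": {
                "canFulfill": can_fulfill,
                "slots": {
                    name: {"canUnderstand": verdict, "canFulfill": verdict}
                    for name, verdict in slots.items()
                },
            }
        },
    }

# Example: the skill reports it can fulfill an order intent whose
# "drink" slot it both understands and can act on.
response = build_can_fulfill_response("YES", {"drink": "YES"})
print(json.dumps(response, indent=2))
```

Answering honestly here matters: over-claiming requests your skill can't actually fulfill is exactly the kind of poor targeting that hurts ratings and engagement.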
At WWDC this year, Apple announced the arrival of Shortcuts in iOS 12, a recommendation engine for apps that uses contextual clues in addition to voice invocation.
Apple is a bit of a special case as there are no standalone voice apps in the Apple ecosystem just yet; everything is still tied to a mobile app. It’s also worth noting that as of now, Shortcuts won’t be able to recommend actions in an app a user doesn’t already have downloaded. Still, users continue to expect more of their app’s functionality to be handled with voice, and Shortcuts opens up some exciting pathways for brands to bring their existing mobile apps into that space.
As Apple puts it, shortcuts can be “donated” by apps to the system when users perform certain actions they’re likely to want to repeat. The OS can then predict future use cases for that action and suggest it to the user. (Example: Siri prompts you to place an order for your usual morning coffee as you approach the coffee shop.)
While Shortcuts aren’t exclusive to voice, users can manually record custom phrases to trigger a particular in-app action via Siri. The ability for Shortcuts to deep-link to specific in-app actions is a powerful tool for serving users when and where they need your product most. For more information, check out our guide on how best to implement Shortcuts.
Platform-specific opportunities aside, there’s one tool for voice skill discovery that’s likely available to you right now: your mobile app. Read our guide on best practices for identifying key moments in your mobile UX to let users know about your voice skill. We even made a pair of experimental libraries to demonstrate how your app could detect home assistants and other connected devices in order to make these recommendations more personalized.
As the mobile boom taught us, discoverability should be a priority from the first stages of planning a brand’s digital product and rollout strategy. Companies that can be both early and findable on emerging platforms like voice will be positioned to stand out in what will inevitably soon be a crowded marketplace.
We’re helping all of our clients think through how voice can drive the future of their digital strategies. Reach out if you have questions!