Recently there has been a great deal of discussion about software that can converse with people and then present information or accomplish tasks in a natural way. Even in the polished demo, when Google Assistant was shown on stage at Google I/O 2016, talking to it wasn’t like talking to a person. It was much closer to talking to the Enterprise computer in Star Trek: The Next Generation and subsequent shows. That’s also what Amazon’s Lab126 strives for with Alexa. One of the programmed responses to “What do you want to be when you grow up?” is, “I want to be the computer from Star Trek!”
Google Assistant has the added advantage of existing only in demos, not as a real-world product yet. I don’t doubt Google’s technical prowess, but it’s easy to look good before anyone has had the chance to put an intelligent assistant in front of real people and find all the flaws.
How well these assistants work for each individual will also depend on the context they’re used in, and on which of those contexts the devices around them actually cover. Google demonstrated an understanding of this by talking about the different contexts Google Assistant will operate in: the home, the car, and the phone. Amazon hasn’t integrated its Alexa products in the way Google has outlined. It’s pretty strange that owners of an Amazon Echo (or any other Alexa-enabled device, including the Fire TV or Fire TV Stick) can buy the Amazon Echo Dot as another Alexa device in the home, yet the devices have no understanding that there is any relationship between them. When Dan Moren got his Dot, I asked him on Twitter how it dealt with being in the same room as his Echo. The answer: they both take queries, and they both respond. That’s not very intelligent.
A key part of Google Assistant’s demo was that it could order things for you through services Google has partnered with, like GrubHub or Instacart. Since this is a platform that permits third-party developers, competitors like Postmates or Amazon Prime Now could theoretically integrate the same way, and users could select the integration they want from the available options. (Amazon Prime Now seems like a stretch!)
Recently, on an episode of the Accidental Tech Podcast, host John Siracusa criticized Amazon for sticking you (and Casey) with Domino’s, and said it all had to do with paid partnerships. That’s not the case. Alexa can’t order pizza at all. If you ask her to, she will look through your Amazon order history for pizza, and if she doesn’t find anything, she will inform you that she’s added “pizza” to your shopping list.
There’s no integration with Domino’s, and no suggestion of Domino’s. You get that when you add the Domino’s Alexa Skill, a third-party skill (like an app) that is added and configured through the Alexa app. When you add the skill you are prompted to log in with a Domino’s “EasyOrder” account. Alexa is a thin layer between you and Domino’s, like grease, or dignity. The commands start with “Domino’s” and not “pizza” because no one owns the word pizza on the platform; Domino’s simply makes the only Skill for ordering pizza. Any Domino’s competitor can make an equivalent skill, and Amazon wouldn’t impair them any more or less than it does Domino’s. Domino’s just does weird tech stuff.
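That word a skill answers to is just its invocation name. In the Alexa Skills Kit, a skill declares an invocation name and a set of intents in its interaction model; a hypothetical competitor’s model might look roughly like this (the skill name, intent name, and utterances are all illustrative, not from any real skill):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "pizza palace",
      "intents": [
        {
          "name": "OrderFavoriteIntent",
          "samples": [
            "order my usual",
            "place my easy order"
          ]
        }
      ]
    }
  }
}
```

Users would then say something like, “Alexa, ask Pizza Palace to order my usual.” Nothing stops another chain from registering its own invocation name; only the bare word “pizza” is off the table.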
If these assistants take off, then having a Skill — or whatever the Google Assistant equivalent is — could be as valuable as a restaurant having a website. And not just any website: a modern, up-to-date website that actually loads. Perhaps we’ll see Squarespace add these Skills to their plans? Or there might be some horrible Wix variant that offers the same? Viv, a Silicon Valley startup from the people who brought us Siri, seems determined to use a paid development platform which is, at best, nebulous.
Amazon makes a big deal out of the fact that you can build an Alexa Skill in 15 minutes with Node.js, so maybe there will be a small market for web developers to add this to the list of services they provide when building websites. It certainly seems less cumbersome than an iOS or Android app.
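The 15-minute pitch isn’t far-fetched: the Alexa Skills Kit hands your code each request as JSON and expects JSON back, so the core of a skill is one small function. A minimal sketch, assuming the intent name and reply text from the hypothetical restaurant above (neither comes from any real skill):

```javascript
// Minimal sketch of an Alexa custom-skill handler.
// Request/response shapes follow the Alexa Skills Kit JSON format;
// "OrderFavoriteIntent" and the reply text are illustrative assumptions.

function speak(text, endSession) {
  // Wrap plain text in the Alexa Skills Kit response envelope.
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text: text },
      shouldEndSession: endSession,
    },
  };
}

function handleAlexaRequest(event) {
  const req = event.request;
  if (req.type === "LaunchRequest") {
    // "Alexa, open pizza palace" — opened with no specific intent,
    // so keep the session alive and ask a follow-up question.
    return speak("Welcome. What would you like to order?", false);
  }
  if (req.type === "IntentRequest" && req.intent.name === "OrderFavoriteIntent") {
    // A real skill would call the restaurant's ordering API here.
    return speak("Okay, your usual order is on its way.", true);
  }
  return speak("Sorry, I didn't catch that.", true);
}

// In an AWS Lambda deployment this would be wired up roughly as:
// exports.handler = (event, context, callback) =>
//   callback(null, handleAlexaRequest(event));
```

The rest of the 15 minutes is clicking through the developer console to attach the interaction model and point the skill at your endpoint.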
However, users need to add these integrations ahead of time, not in a moment of pepperoni-pineapple-pizza pique. That can be as discouraging as being told you need to download an iOS or Android app for every restaurant. (A problem Google wants to solve with Instant Apps.) Viv seems to solve this by not really giving you any options.
Not all restaurants bother with online ordering infrastructure; many rely instead on an intermediary company, like GrubHub or Postmates, to fill that role. That can be beneficial for consumers, because they can rely on a handful of aggregator Skills instead of one-off Skills for each restaurant. But it still leaves consumers wondering which Skill to use to order their pizza. Is that restaurant on GrubHub or Postmates? Are they on Uber Eats? Amazon has no universal search across Skills to allow comparison shopping for delivery. Google Assistant doesn’t appear to have one either. I say “appear” because Sundar Pichai told the “car” to order “curry,” and that request is so abstract that it seems unlikely a shipping product actually works that way.
What about Siri? It has no third-party integrations aside from companies Apple selectively partners with. Yelp is one of those partners and handles almost all food-related queries. If you tell Siri, “Order pizza,” she provides a list of nearby restaurants with pizza on the menu and their Yelp star ratings. That’s it. Tap one to go through a maze of ordering screens yourself. Even if you have a favorite restaurant, and a usual order you want to trigger, she’ll never understand any of it. Food is just a list of Yelp results. I would argue that Apple’s approach here is the worst. I hope that WWDC in June will bring some news of Siri integrations being offered, so we can at least elevate Siri to the same level as the other assistants, as imperfect as they are.
And at that imperfect level we can start to wonder where it was we last ordered pizza from, and what it even was that we liked so very much. If we still have to think about context, about which device takes the order, then this brave new world has so far to go.