Amazon is making a bunch of changes to the Alexa user experience, all with the same idea in mind: making the virtual assistant easier to use. Most notable is a change to how Alexa handles routines, which developers can now create and recommend to users instead of requiring users to build automations manually. Alexa is also starting to coexist with other manufacturers’ assistants, and Amazon is working to make sure the most important commands — like “Stop!” — work no matter what wake word you use.
Amazon made these announcements during its Alexa Live developer event, where the company announced a slew of other new Alexa features geared mostly toward developers, who can now add shopping to their skills, support Matter and other smart home standards more easily, offer a simpler setup flow, and build devices that understand more about their surroundings.
But Amazon knows that none of Alexa’s flashy new features really matter if you can’t find them or don’t know how to use them. Rather than creating new user interfaces or smart voice menus, the Alexa team is more inclined to let the system do the work for you. “We want to make automation and proactiveness available to everyone who interacts with Alexa and Alexa-connected devices because it’s so much fun,” says Aaron Robinson, vice president of the Alexa team.
The change to routines is the most obvious example among the new announcements. Users can still configure their own routines — “When I say I’m leaving, make sure the stove is off and all the lights are off,” that sort of thing — but developers can now create routines for their skills and present them to users based on their activities. “For example, Jaguar Land Rover uses the Alexa Routines Kit to make a routine they call ‘Goodnight,’ which will lock the car and remind customers of the charge or fuel level, and then also turn on Guardian mode,” says Robinson. It’s the kind of thing that a lot of people would enjoy but few would do the work to create for themselves; now they just have to turn it on.
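The routine concept above boils down to a named trigger mapped to an ordered list of device actions. Here is a purely illustrative sketch of that idea in Python — the class names and car methods are invented for the example and are not the actual Alexa Routines Kit API, which Amazon documents separately.

```python
# Illustrative only: a developer-suggested routine as a trigger phrase
# plus an ordered list of actions run against a device. Not the real
# Alexa Routines Kit interface.
from dataclasses import dataclass, field


@dataclass
class Routine:
    """A named routine: one trigger phrase, several actions in order."""
    name: str
    trigger_phrase: str
    actions: list = field(default_factory=list)

    def run(self, device):
        """Execute each action against the device, collecting results."""
        return [action(device) for action in self.actions]


class FakeCar:
    """Stand-in for a connected car, mirroring the Jaguar Land Rover example."""
    def lock(self):
        return "doors locked"

    def charge_level(self):
        return "battery at 80%"

    def guardian_mode(self):
        return "Guardian mode on"


goodnight = Routine(
    name="Goodnight",
    trigger_phrase="Alexa, goodnight",
    actions=[
        lambda car: car.lock(),          # lock the car
        lambda car: car.charge_level(),  # report charge or fuel level
        lambda car: car.guardian_mode(), # enable Guardian mode
    ],
)

print(goodnight.run(FakeCar()))
# -> ['doors locked', 'battery at 80%', 'Guardian mode on']
```

The point the example makes is that the developer does the one-time work of bundling the actions; the user only has to accept the suggestion and say the trigger phrase.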
Robinson says that people who use routines are some of the most active and loyal Alexa users, and that he wants those people to keep getting the tools they need to build the wildest and weirdest automations. “But we also realize that not everyone will take that step,” he says. As Alexa continues to struggle to keep users engaged, adding some proactivity to routines could make it more useful to more people.
Voice assistants have always presented a difficult UI problem because they don’t offer a set of buttons or icons; they’re just a blank slate you can talk to or shout at. Over time, the Alexa team has tried to get rid of that friction by making it impossible to say the wrong thing. That’s part of the thinking behind its multi-assistant support, which allows developers to place their virtual assistant alongside Alexa on the same device. (Amazon’s newest partner is Skullcandy, so you can talk to the headphones by saying either “Alexa” or “Hey Skullcandy.”)
Along the same lines, Amazon is also working on a feature called Universal Commands, which lets an Alexa-powered device do a few important things no matter which wake word you used. For example: you might say “Hey Skullcandy, set a timer for 10 minutes,” and while the Skullcandy assistant can’t do that, Alexa can, so Alexa handles it automatically. Robinson cited timers and call rejection as the kinds of important tasks any Alexa-enabled device should be able to handle even if you aren’t interacting with Alexa directly. He says the feature will roll out over the next year.
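The routing behavior described above can be sketched as a simple fallback dispatcher: the device tries its native assistant first, and for a small set of essential intents it falls back to Alexa. The intent names and classes below are assumptions made for illustration, not a real Amazon API.

```python
# Hypothetical sketch of the Universal Commands idea: the partner
# assistant gets first crack at a request, and Alexa backstops a fixed
# set of essential intents it can't handle. Names are invented.

# Intents every Alexa-enabled device should handle regardless of wake word.
ESSENTIAL_INTENTS = {"set_timer", "stop", "reject_call"}


class NativeAssistant:
    """A partner assistant (e.g. a headphone maker's) with limited skills."""
    supported = {"play_music", "volume_up"}

    def handle(self, intent):
        return f"native assistant handled {intent}"


class Alexa:
    """Alexa, running on the same device, backstops essential intents."""
    def handle(self, intent):
        return f"Alexa handled {intent}"


def dispatch(intent, native, alexa):
    """Route a request: native assistant first, Alexa as the fallback."""
    if intent in native.supported:
        return native.handle(intent)
    if intent in ESSENTIAL_INTENTS:
        # The wake word was the partner's, but Alexa steps in anyway.
        return alexa.handle(intent)
    return "sorry, I can't do that"


print(dispatch("set_timer", NativeAssistant(), Alexa()))
# -> Alexa handled set_timer
print(dispatch("play_music", NativeAssistant(), Alexa()))
# -> native assistant handled play_music
```

The design choice worth noting is that the fallback list is deliberately tiny: only commands where failure is unacceptable (stopping an alarm, rejecting a call) get routed past the assistant you actually invoked.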
Developers will have to implement and take advantage of these features for any of this to catch on, of course, and Amazon is trying hard to motivate them to do just that: it is changing its revenue-sharing agreement so that developers keep 80 percent of their revenue instead of 70 percent, and it is launching a Skills Developer Acceleration Program, which Robinson says will “reward developers for taking actions we know lead to creating a high-quality, engaging skill based on our complete history.” Put more plainly: Amazon is paying developers to build better skills.
If Amazon can make all this work, it will have taken a step toward solving one of the big problems with voice assistants: it’s hard to know what they can do, so most users stick to music, lights, and timers, which means there’s little reason for developers to invest in the platform, which means there’s nothing new for users to do. By making the platform more robust and making it do more of the work for users at the same time, Amazon could get that flywheel spinning in the other direction — without users even having to help.