Amazon Introduces Alexa Conversations
Written by Kay Ewbank
Thursday, 13 June 2019
Amazon has released a preview of Alexa Conversations as a new AI-driven approach to natural dialogs through the Alexa Skills Kit.
The deep learning-based model lets developers create natural voice experiences on Alexa. This was already possible, but the claim is that the new approach requires less effort, fewer lines of code, and less training data than before. For now, the new model can be used to create natural dialogs within a single skill, and the team at Amazon says that future releases will let you incorporate multiple skills into a single conversation.
At the moment, people interact with Alexa by making one-off requests such as "Alexa, what's the weather?". If you want to do something more complex, you have to make multiple requests - ask about the weather, then ask what's on at the cinema, then ask Alexa to book a taxi. If you're doing something such as booking a table for four at 8pm, and a taxi to get there, you need to repeat details such as times and numbers for each request. Demonstrating the model in use at Amazon's re:MARS conference in Las Vegas, Rohit Prasad, Alexa vice president and head scientist, said:
“We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa.”
The demo showed planning a night out, with Alexa interpreting ambiguous references, and keeping information between skills - remembering the location of the movie theater in order to find close-by restaurants.
The demonstration was based on an as yet unreleased version of Alexa Conversations with linked responses to customers’ questions and requests. Behind the scenes, the outcome of a round of dialog is a vector that represents the context and semantic content of the conversation. As the vector is updated, a conversational model based on deep learning generates a list of candidate actions that Alexa could take in response. Where appropriate, this system fills in the candidate actions with specific values, such as movie times or restaurant names. The system then scores the actions and executes the one with the highest score.
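The loop described above - update a context representation, generate candidate actions, fill in specific values, score, and execute - can be sketched roughly as follows. This is an illustrative toy, not Amazon's actual system; all function and action names are invented, and the "vector" is reduced to a simple feature dictionary:

```python
# Toy sketch of the dialog loop: update context from each turn, propose
# candidate actions with slots filled where possible, score them, and
# execute the highest-scoring one. Names are hypothetical.

def update_context(context, utterance):
    # Stand-in for the deep-learning encoder: fold the utterance into a
    # running context representation (here, just word counts).
    for word in utterance.lower().split():
        context[word] = context.get(word, 0) + 1
    return context

def candidate_actions(context):
    # Propose (action, slots, score) candidates based on the context.
    actions = []
    if "movie" in context:
        actions.append(("FindShowtimes", {"when": "tonight"}, 0.8))
    if "restaurant" in context or "dinner" in context:
        actions.append(("FindRestaurants", {"near": "theater"}, 0.6))
    actions.append(("AskClarification", {}, 0.2))
    return actions

def respond(context, utterance):
    update_context(context, utterance)
    # Score the candidates and execute the one with the highest score.
    action, slots, score = max(candidate_actions(context), key=lambda a: a[2])
    return action

context = {}
print(respond(context, "Find me a movie for tonight"))  # FindShowtimes
```

In the real system the context is a learned vector and the scoring is done by a deep model; the shape of the control flow, however, is the same select-and-execute cycle.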
This new approach to multi-turn dialog also includes a separate AI module whose task is to decide when to switch between different skills — which questions to pass to the restaurant search skill, for instance, and which to pass to the movie skill.
The switches can be reactive, responding to customer requests, or proactive, such as suggesting a taxi once the user has booked movie tickets.
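The two kinds of switch can be sketched as a simple router plus a trigger rule. Again, this is an invented illustration of the idea, not Amazon's module; the keyword matching stands in for what is really a learned classifier:

```python
# Illustrative skill-switching sketch: route questions to the right
# skill reactively, and make a proactive suggestion after a triggering
# action completes. Skill and action names are hypothetical.

SKILL_KEYWORDS = {
    "movies": {"movie", "showtimes", "ticket", "tickets"},
    "restaurants": {"restaurant", "dinner", "table", "eat"},
}

def route(utterance):
    # Reactive switch: pick the skill whose keywords best match the turn.
    words = set(utterance.lower().split())
    best = max(SKILL_KEYWORDS, key=lambda s: len(words & SKILL_KEYWORDS[s]))
    return best if words & SKILL_KEYWORDS[best] else None

def proactive_suggestion(completed_action):
    # Proactive switch: after booking tickets, offer to book a ride.
    if completed_action == "BookMovieTickets":
        return "Would you like me to book a taxi to the theater?"
    return None

print(route("book two movie tickets"))           # movies
print(proactive_suggestion("BookMovieTickets"))  # taxi offer
```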
The current preview of Alexa Conversations consists of an AI-driven dialog manager with a dialog simulation engine that automatically generates synthetic training data. To use it, developers provide APIs that provide access to their skills’ functionality, along with a list of the entities that the APIs can take as inputs, such as restaurant names or movie times; and some sample dialogs annotated to identify entities and actions and mapped to API calls. Alexa Conversations’ AI technology uses this information to generate dialog flows and variations, learning the large number of paths that the dialogs could take.
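The three inputs the preview asks for - API access points, the entity types those APIs accept, and annotated sample dialogs - might look something like the structure below. This is a hedged sketch of the concept only; it is not the actual Alexa Conversations build format, and all names are illustrative:

```python
# Hypothetical shape of a developer's inputs to Alexa Conversations:
# APIs with typed input entities, the entity list, and sample dialogs
# annotated with entities and mapped to API calls.

skill_definition = {
    "apis": {
        "FindRestaurants": {"inputs": ["City", "CuisineType"]},
        "BookTable": {"inputs": ["RestaurantName", "Time", "PartySize"]},
    },
    "entities": ["City", "CuisineType", "RestaurantName", "Time", "PartySize"],
    "sample_dialogs": [
        {
            # Annotated turn: entities are marked and mapped to an API call.
            "user": "Book a table for four at eight",
            "annotations": {"PartySize": "four", "Time": "eight"},
            "api_call": "BookTable",
        },
    ],
}

# The dialog simulation engine then generates synthetic variations of
# dialogs like these, covering paths the developer never wrote down.
for dialog in skill_definition["sample_dialogs"]:
    assert dialog["api_call"] in skill_definition["apis"]
    assert all(e in skill_definition["entities"] for e in dialog["annotations"])
```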
The potential dialog paths are modeled using a deep, recurrent neural network. At runtime this neural network takes the entire session's dialog history into account and predicts the optimal next action or step in the dialog, improving accuracy and reducing the developer's design and coding effort.
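The recurrent idea - carry a hidden state across the whole session, update it from each turn, and use it to score the next action - can be shown in miniature. The weights below are hand-picked for illustration; a real model learns them from the generated training data, and nothing here reflects Amazon's actual architecture:

```python
# Toy recurrent next-action predictor: a hidden state is updated from
# each turn's features and used to score candidate actions.

import math

def step(hidden, turn_features):
    # Recurrent update: mix the previous state with the new turn.
    return [math.tanh(0.5 * h + 0.5 * x) for h, x in zip(hidden, turn_features)]

def predict_next_action(hidden, action_vectors):
    # Score each candidate action against the session state (dot product)
    # and return the highest-scoring one.
    def score(name):
        return sum(h * v for h, v in zip(hidden, action_vectors[name]))
    return max(action_vectors, key=score)

hidden = [0.0, 0.0]
for features in [[1.0, 0.0], [1.0, 0.2]]:   # two turns, both about movies
    hidden = step(hidden, features)

actions = {"FindShowtimes": [1.0, 0.0], "FindRestaurants": [0.0, 1.0]}
print(predict_next_action(hidden, actions))  # FindShowtimes
```

Because the state is carried across every turn, the prediction reflects the whole session's history rather than just the most recent request.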
The developer preview is available in the US now. Participants will have access to Alexa Conversations for AI-based dialog management in existing or new skills and become eligible for early access to future versions with cross-topic capabilities. If you're interested in taking part you need to take a survey, and Amazon will notify you if you've been accepted.