We Need To Talk About Alexa
Written by Lucy Black   
Monday, 07 October 2019

and all the other smart speakers. A new study attempts to characterize our relationship with these comparative newcomers to the tech ecosystem.


Smart speakers are the biggest disruption in the use of tech since the smartphone. In a very short time voice control has moved from being something we saw in sci-fi to an everyday event. We now ask our devices to play music, switch things on and off and even tell us jokes, but we know very little about how humans are adapting to their new friends. Indeed, are they friends or just tech, and how do we cope when they go wrong? Mirzel Avdic and Jo Vermeulen of Aarhus University, Denmark, have started to look at the issue.

"we decided to conduct a study to investigate the following: (1) what are users’ mental model of the smart speaker, (2) users addressing their smart speaker, (3) users’ recovery from mistakes and system breakdowns, and (4) users’ interactions with smart speakers in shared contexts."

Unsurprisingly, tech-savvy users understood that their speakers were mostly dumb and the clever processing was all done via the cloud. I think this misses out on the importance of the "wake word" - the phrase that the speaker is programmed to recognize without any help. Having a single way of activating the device in a relatively unintelligent manner is troublesome, especially if you happen to be called Alexa. How often do users really want to say "I wasn't talking to you"?

Another fairly obvious conclusion is that users value hands-free operation. Having to whisper to the device, or use alternative controls, when other people were asleep was also an issue. Another problem was discovering what capabilities were on offer. There is no voice menu.

When it came to recovering from problems, the strategies that have evolved are basically to get closer, to repeat the instruction and eventually to reach for a smartphone and achieve the result without voice. I'd like to add to this the way voice assistants tend to "shape" the interaction. You quickly discover that adding extra filler words often helps to make the key word easier to recognize. The users who fit in with Alexa best quickly learn not to repeat a command that isn't working, but to use slight modifications of it.

Interestingly, users were very happy about allowing visitors access to their smart speaker - much more so than sharing other gadgets. This too seems perfectly understandable.

One of the main conclusions of the study was that smart speakers should have other ways of interacting with users - buttons or, preferably, a touch screen.

As our smart speakers slowly get smarter many of these problems will fade. Eventually it will be like asking a human to do something or find something out. At the moment, however, we could do with better and more secure ways of interacting. It is fun to have the lights in a room controlled by a smart speaker, but hearing "I'm sorry, I'm having trouble understanding right now" as you stumble into a pitch-black room with your hands full isn't quite as much fun.


More Information

Studying Breakdowns in Interactions with Smart Speakers

Related Articles

Alexa For Developers

The State Of Voice As UI

Microsoft Adds Conversational AI Agents

Over 100 Million Alexa Devices Sold






Last Updated ( Thursday, 31 October 2019 )