|Perceptions of Alexa|
|Written by Janet Swift|
|Friday, 05 February 2021|
In the space of seven years Amazon's conversational agent Alexa has come to have an almost ubiquitous presence. It is now the subject of a growing body of research into its impact on children, who are growing up surrounded by conversational agents.
In this study Jessica Van Brummelen, Viktoriya Tabunshchyk and Tommy Heng, researchers at MIT, set out to investigate how 6th-12th grade students' perceptions and conceptions of Amazon Alexa changed as a result of learning to program their own conversational agents.
The researchers set out to answer the question:
How does building Alexa skills and learning about conversational AI in a remote workshop affect students’ perceptions and conceptions of AI, conversational AI, and Alexa?
They investigated how middle and high school students' perceptions of Alexa changed as a result of participation in week-long AI education workshops in which they learned to program their own conversational agents using MIT App Inventor, the block-based programming environment originally created at Google and now developed at MIT.
In their recent paper Van Brummelen et al. report on the workshops' influence on student perceptions of Alexa's intelligence, friendliness, aliveness, safeness, trustworthiness, human-likeness, and feelings of closeness. They asked participants to complete questionnaires about their perceptions of Alexa on 7-point Likert scales on the second day of the workshop and again on the final day with the results shown below:
Prior to the study, the researchers had hypothesized that students would feel Alexa was less intelligent after learning how to program it, as they would better understand how it works. In fact the results reveal that students felt Alexa was more intelligent after the programming experience. The researchers speculate:
Perhaps by successfully learning fundamental AI literacy concepts, students realized Alexa was more complex than they initially thought and thus perceived it to be more “intelligent”.
They also hypothesized that students would personify Alexa less after understanding the logic behind how it works, and would therefore rate its “aliveness”, “human-likeness” and “friendliness”, and their feelings of closeness to it, lower than before the intervention. Again the results differed from their expectations. There was no significant evidence of any change, except that students felt closer to Alexa at the end of the workshop.
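For ordinal pre/post ratings like these, a standard way to test whether students' responses shifted is a paired sign test. The sketch below uses made-up 7-point ratings from ten hypothetical students, not the study's data, purely to illustrate the kind of comparison involved:

```python
# Sign test on paired pre/post Likert ratings (hypothetical data,
# not from the Van Brummelen et al. study).
from math import comb

pre  = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]   # ratings before the workshop
post = [5, 6, 4, 6, 5, 6, 5, 4, 6, 5]   # ratings after the workshop

diffs = [b - a for a, b in zip(pre, post)]
nonzero = [d for d in diffs if d != 0]    # tied pairs are dropped
n = len(nonzero)
pos = sum(d > 0 for d in nonzero)         # ratings that increased

# Two-sided binomial probability of a split at least this extreme
# under the null hypothesis that increases and decreases are equally likely.
k = max(pos, n - pos)
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** (n - 1)
print(n, pos, round(p_value, 4))  # here: 9 of 9 non-tied ratings increased
```

With nine of nine non-tied ratings moving upward, the two-sided p-value is well under 0.05, so on this toy data the shift would count as significant; the paper's own analysis of which scales actually changed is, of course, what matters.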
The study also revealed strong correlations between students' perceptions of Alexa's friendliness and its trustworthiness, and between its safeness and its trustworthiness. This led the researchers to comment:
Although these correlations do not necessitate causation, it is important to consider the implications of potential causation when designing CAs. For instance, if a CA was purposefully designed to seem friendly and intelligent, users may associate this with trustworthiness and safeness, despite the potential for the CA to provide incorrect information (intentionally or not). Nevertheless, this could also provide positive opportunities, including how students may learn better if they feel a pedagogical agent is friendly and intelligent, and thus also trustworthy and safe.
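Correlations between ordinal Likert ratings such as these are typically measured with Spearman's rank correlation. The following self-contained sketch, using invented ratings rather than the study's data, shows how such a coefficient is computed (rank both variables, averaging ranks across ties, then take the Pearson correlation of the ranks):

```python
# Spearman rank correlation between two sets of Likert ratings
# (hypothetical data, not from the study).

def average_ranks(xs):
    """Rank values 1..n, averaging ranks across tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Made-up 7-point ratings of Alexa from eight hypothetical students.
friendliness    = [6, 7, 5, 4, 6, 7, 3, 5]
trustworthiness = [5, 7, 5, 4, 6, 6, 3, 5]
print(round(spearman(friendliness, trustworthiness), 3))
```

A coefficient near 1 on data like this is what "strong correlation" means in the quote above; as the researchers stress, it says nothing about whether friendliness causes trust.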
Jessica Van Brummelen, Viktoriya Tabunshchyk, Tommy Heng