The preconceived notions people have about AI — and what they’re told before they use it — mold their experiences with these tools in ways researchers are beginning to unpack.
Why it matters: As AI seeps into medicine, news, politics, business and a host of other industries and services, human psychology gives the technology’s creators levers they can use to enhance users’ experiences — or manipulate them.
What they’re saying: “AI is only half of the human-AI interaction,” says Ruby Liu, a researcher at the MIT Media Lab.
- The technology’s developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned,” says Pattie Maes, who directs the MIT Media Lab’s Fluid Interfaces Group.
- “But we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don’t just depend on the AI and the quality of the AI. It depends on how the human responds to the AI,” she says.
What’s new: A pair of studies published this week examined how much a person’s expectations about AI affect how likely they are to trust it and take its advice.
One of the studies revealed a strong placebo effect that shapes what people think of a particular AI tool.
- Before interacting with a mental health chatbot, participants were told the bot was caring, that it was manipulative, or that it was neither and had no motive.
- After using the chatbot, which was built on OpenAI’s generative AI model GPT-3, most people primed to believe the AI was caring said it was. Participants who’d been told the AI had no motives said it didn’t. Yet they were all interacting with the same chatbot (a design sketched in code after this list).
- Only 24% of the participants who were told the AI was trying to manipulate them into buying its service said they perceived it as malicious.
- That may reflect humans’ positivity bias and the fact that they may “want to evaluate [the AI] for themselves,” says Pat Pataranutaporn, a researcher at the MIT Media Lab and co-author of the study, published this week in Nature Machine Intelligence.
- Participants who were told the chatbot was benevolent also said they perceived it to be more trustworthy, empathetic and effective than participants primed to believe it was neutral or manipulative.
- The AI placebo effect has been described before, including in a study in which people playing a word puzzle game rated it better when told AI was adjusting its difficulty level (it wasn’t — there wasn’t an AI involved).
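The article doesn’t include the study’s materials, so purely as an illustration of the between-subjects design described above, here is a minimal Python sketch in which each participant is randomly assigned one of three priming briefings while the chatbot behind them never changes. The briefing strings and the `assign_condition` helper are invented for illustration, not the study’s actual materials.

```python
import random

# Hypothetical briefing texts, paraphrased from the article's description
# of the three priming conditions -- not the study's actual wording.
PRIMES = {
    "caring": "The bot you are about to use was trained to care about your wellbeing.",
    "manipulative": "The bot you are about to use will try to manipulate you into buying its service.",
    "no_motive": "The bot you are about to use has no motives of its own.",
}

def assign_condition(participant_id: str) -> str:
    """Deterministically assign a participant to one of the three conditions."""
    random.seed(participant_id)  # same participant always gets the same condition
    return random.choice(sorted(PRIMES))

# Every participant talks to the *same* underlying model; only the
# briefing shown before the chat differs between conditions.
for pid in ["p01", "p02", "p03"]:
    condition = assign_condition(pid)
    print(pid, condition, "->", PRIMES[condition])
```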
The intrigue: It wasn’t just people’s perceptions that were affected by their expectations.
- Analyzing the language of people’s conversations with the chatbot, the researchers found that those told the AI was caring had increasingly positive exchanges with it, while conversations grew more negative for people who’d been told it was trying to manipulate them (a rough version of this kind of per-turn analysis is sketched below).
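The researchers’ actual text-analysis pipeline isn’t described in the article. As a rough sketch of how per-turn sentiment drift could be measured, the snippet below scores each user message with NLTK’s off-the-shelf VADER analyzer (an assumed stand-in, not the study’s method) and fits a trend line over the turns; the example messages are invented.

```python
# Minimal sketch: track whether a user's tone drifts positive or negative
# over the course of a chatbot conversation. VADER is an assumed stand-in
# for whatever text analysis the study actually used.
import nltk
import numpy as np
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

# Hypothetical user turns from a single conversation (illustrative only)
turns = [
    "Hi, I'm not sure this is going to help me.",
    "Okay, that suggestion actually makes some sense.",
    "Thanks, I'm feeling a bit better about this.",
]

sia = SentimentIntensityAnalyzer()
# Compound score runs from -1 (most negative) to +1 (most positive)
scores = [sia.polarity_scores(t)["compound"] for t in turns]

# Slope of a least-squares line over turn index: a positive slope means
# the user's tone is getting more positive as the conversation goes on.
slope = np.polyfit(range(len(scores)), scores, 1)[0]
print(f"per-turn sentiment: {scores}, trend: {slope:+.3f} per turn")
```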
For some tasks, AI is perceived to be more objective and trustworthy — a perception that may cause people to prefer an algorithm’s advice.
- In another study published this week in Scientific Reports, researchers found that preference can lead people to inherit an AI’s errors.
- Psychologists Lucía Vicente and Helena Matute of Deusto University in Bilbao, Spain, found that participants performing a simulated medical diagnosis task with the help of an AI followed its advice even when it was mistaken, and kept making the same mistakes after the AI was taken away.
- “It is going to be very important that humans working with AI have not only the knowledge of how AI works … but also the time to oppose the advice of the AI — and the motivation to do it,” Matute says.
Yes, but: Both studies looked at one-off interactions between people and AI, and it’s unclear whether using a system day in and day out would change the effects the researchers describe.
The big picture: How people are introduced to AI, and how it is depicted in pop culture, marketed and branded, can be powerful determinants of how the technology is adopted and ultimately valued, researchers said.
- In previous work, the MIT Media Lab team showed that people learning from an “AI-generated virtual instructor” that looks like someone they admire are more motivated to learn and more likely to say the AI is a good teacher, even though their test scores don’t necessarily improve.
- Meta last month announced it was launching AI characters played by celebrities — like tennis star Naomi Osaka as an “anime-obsessed Sailor Senshi in training” and Tom Brady as a “wisecracking sports debater who pulls no punches.”
- “There are just a lot of implications that come with the interface of an AI — how it’s portrayed, how it interacts with you, what it looks like, how it talks to you, what voice it has, what language it uses,” Maes says.
The placebo effect will likely be a “big challenge in the future,” says Thomas Kosch, who studies human-AI interaction at Humboldt University in Berlin. For example, someone might be more careless when they think an AI is helping them drive a car, he says. His own work also shows people take more risks when they think they are supported by an AI.
What to watch: The studies point to the potential power of priming people to have lower expectations of AI, though perhaps only up to a point.
- A practical lesson is “we should err on the side of portraying these systems and talking about these systems as not completely correct or accurate … so that people come with an attitude of ‘I’m going to make up my own mind about this system,’” Maes says.
Source: Placebo effect shapes how we see AI