Research finds interaction and privacy control important when seeing AI doctors
The rise of telehealth consultations and AI-assisted doctor appointments during COVID-19 ushered in a new way of receiving health care. Four years later, researchers have found growing acceptance of these tools among both healthcare providers and their patients.
Mengqi “Maggie” Liao, assistant professor at Grady College and one of the researchers involved with the study, was surprised at some of the results including the fact that patients were more accepting of AI doctors because of a perceived effort by the AI doctor to be more social with the patients.
“If we want to promote more acceptance of AI doctors, this is good news,” said Liao. “It’s easy to implement and easy for an AI doctor to remember you.”
The study, “When an AI Doctor Gets Personal: The Effects of Social and Medical Individuation in Encounters With Human and AI Doctors,” was recently published in Communication Research.
The study compared three types of providers: human doctors, AI doctors and AI-assisted doctors, meaning human doctors supported by AI technology. Researchers wanted to see whether patients interacted with each type of doctor in the same way. They asked patients about the effort they perceived each doctor making, whether they felt viewed and cared for as individuals, how comfortable they felt during the interaction, and how willing they were to share personal information.
The research found that patients perceived the AI doctors as making a high effort to form relationships, and this translated into high patient satisfaction. In other words, patients appreciated when the AI doctors recalled social information that identified them as individuals. The research also tested a common framework in AI known as Computers Are Social Actors, or CASA, which holds that people attribute human behaviors to computers. For example, when interacting with a human, people are typically polite and pick up on social cues like saying ‘hello.’ Decades of research have found that the same social rules apply when we interact with interactive machines, as if machines were intelligent social beings. The current study investigated whether this tendency extends to AI doctors, testing whether the effects of interpersonal knowledge, which are well established in human interactions, also apply to human-AI interaction.
Another part of the research focused on how much personal information patients felt comfortable providing to the AI doctors. Patients were given a choice about which personal information the AI doctor should retain for future interactions. The researchers found that patients were more comfortable when they felt they could control the amount of personal information provided to the AI doctors.
“So, now the patients are not only happy, but they also have the autonomy and control and the agency to control their data,” Liao said. “Privacy control is important.”
Liao says this is important research for users to be mindful of.
“When an AI doctor remembers you, you might trust it more,” Liao said.
She adds that research often approaches AI from a performance standpoint, but it is also important to examine the psychology of the process and consider it from the user’s standpoint.
Liao, who is completing her first semester on the Grady College faculty, is interested in AI and how users form trust.
She talks about AI and her research with students and says that a good example of using AI in the classroom would be brainstorming ideas, but warns that things can get complicated when gathering information if the tools draw on unreliable resources or if the prompts are not clear.
“I believe we can leverage the benefits of AI and foster better human-AI collaboration and increase people’s literacy about AI, so we can use this tool more critically and mindfully,” Liao said.
Author: Sarah Freeman, freemans@uga.edu