New Language Model Uses Texts to Predict How Groups of People Perceive AI Agents

ATLANTA (May 2021) – New Georgia Tech research shows that text chats between people and an AI agent can be used to automatically predict the perceptions that a group of users (e.g., a class of students) holds about the AI agent – specifically, how human-like, intelligent, and likeable the AI is.

The research team analyzed linguistic cues (e.g., the diversity and readability of the messages) in text chats that users sent to Jill Watson, the AI agent used in several Georgia Tech online computer science graduate courses. Using these cues extracted from the messages, the researchers built a model that predicts the community’s perception of Jill on those three attributes.
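For illustration, below is a minimal Python sketch of computing two per-message cues of the kind described above – a type-token ratio as a stand-in for lexical diversity and average word length as a crude readability proxy. The study’s exact feature set and formulas are not detailed here, so these particular measures are assumptions, not the researchers’ pipeline.

```python
# Illustrative per-message linguistic cues (assumed, simplified measures).
import re


def lexical_diversity(text: str) -> float:
    """Type-token ratio: unique words divided by total words."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0


def readability_proxy(text: str) -> float:
    """Crude readability proxy: average word length in characters
    (a stand-in for a standard index such as Flesch or Coleman-Liau)."""
    words = re.findall(r"[a-zA-Z']+", text)
    return sum(len(w) for w in words) / len(words) if words else 0.0


message = "Jill, when is the deadline for the second assignment?"
print(lexical_diversity(message), readability_proxy(message))
```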

Before the course began, students gave initial ratings of their perceptions of Jill. Many gave the AI agent high scores on all attributes, between 3 and 4 on a five-point Likert scale.


Students were surveyed every other week on their perceptions of Jill, and the language model accurately predicted how students felt about the AI on all three attributes, matching the survey results.

“We found that several linguistic characteristics accurately predicted the community’s perception about Jill Watson through our linear regression models,” said Qiaosi “Chelsea” Wang, lead researcher and a Human-Centered Computing Ph.D. student in the Design and Intelligence Lab. “For example, linguistic adaptability – which measures how adaptive Jill’s responses are to student questions – positively associates with student perceptions.”
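As a rough illustration of the modeling step Wang describes, the sketch below fits an ordinary least-squares linear regression from aggregated linguistic features to a community’s mean perception rating. The feature values, ratings, and feature names here are hypothetical placeholders, not data or variables from the study.

```python
# Minimal sketch (not the study's actual pipeline) of a linear regression
# from aggregated linguistic features to a mean community perception rating.
import numpy as np

# Hypothetical data: each row is one survey period, with aggregated
# features [lexical diversity, readability, adaptability].
X = np.array([
    [0.62, 4.1, 0.55],
    [0.58, 4.3, 0.60],
    [0.55, 4.0, 0.52],
    [0.51, 4.4, 0.49],
])
# Hypothetical mean community rating (e.g., perceived intelligence) per period.
y = np.array([3.8, 3.7, 3.5, 3.3])

# Add an intercept column and solve the ordinary least-squares problem.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and feature weights:", coef)

# Predict the rating implied by a new period's language features.
new_period = np.array([1.0, 0.57, 4.2, 0.58])
print("predicted rating:", new_period @ coef)
```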

In the 12-week study with a class of 390 students, students’ perceptions of Jill’s human-likeness and intelligence decreased, while her likeability stayed the same.

That Jill’s likeability remained consistent throughout the course is perhaps an indication that Jill met students’ overall expectations despite the decline in perceived intelligence and human-likeness.

Students were told Jill could only answer course logistics and curriculum questions, but that didn’t stop them from testing the AI system with questions like “What is the meaning of life?” or “What is your favorite Game of Thrones character?”

“People are often able to perceive another person’s perception of them through behavioral and linguistic cues and modify their behaviors accordingly in different settings,” said Wang. “We’re showing that it is possible to build adaptive community-facing AI agents that can automatically understand how the agents are being perceived by the community.

“One possible scenario based on our findings is that if the community thinks Jill is extremely intelligent, Jill would be able to understand this from the language students used when talking to her. This might prompt the AI to respond with self-deprecating humor such as ‘I’m actually not as smart as you thought I was 🙁.’”

Jill Watson, which turned 5 years old this spring, was one of the earlier AI agents used in an online human community. Conversational agents designed for a community context, rather than one-on-one interactions, are becoming more common (e.g., Amazon’s Alexa in family settings).

With more virtual teaching assistants like Jill emerging to support student communities, Wang said it is important to understand how these conversational agents are perceived by an entire community, not just in the one-on-one interactions that are more commonly examined.

The research appears in the proceedings of the Association for Computing Machinery’s annual Conference on Human Factors in Computing Systems (CHI), May 8-13, 2021. The accepted paper, “Towards Mutual Theory of Mind in Human-AI Interaction: How Language Reflects What Students Perceive About a Virtual Teaching Assistant,” is co-authored by Wang, Koustuv Saha, Eric Gregori, David A. Joyner, and Ashok Goel.

Writer/Contact

Joshua Preston
Communications Manager
College of Computing