Modeling Adaptive Expression of Robot Learning Engagement and Exploring Its Effects on Human Teachers

Shuai Ma, Mingfei Sun, XIAOJUAN MA

Research output: Contribution to journal › Article › peer-review

Abstract

Robot Learning from Demonstration (RLfD) allows non-expert users to teach a robot new skills or tasks directly through demonstrations. Although modeled after human-human learning and teaching, existing RLfD methods have robots act as passive observers that give no feedback on their learning status during the demonstration-gathering stage. To facilitate a more transparent teaching process, we propose two mechanisms of Learning Engagement, Z2O-Mode and D2O-Mode, that dynamically adapt robots' attentional and behavioral engagement expressions to their actual learning status. Through an online user experiment with 48 participants, we find that, compared with two baselines, the two kinds of Learning Engagement lead to more accurate user mental models of the robot's learning progress, more positive perceptions of the robot, and a better teaching experience. Finally, based on our key findings, we provide implications for leveraging engagement expression to facilitate transparent human-AI (robot) communication.
Original language: English
Article number: 70
Number of pages: 48
Journal: ACM Transactions on Computer-Human Interaction
Volume: 30
Issue number: 5
Early online date: 19 Nov 2022
DOIs
Publication status: Published - 1 Oct 2023

Keywords

  • Human-robot interaction
  • Learning from demonstration
  • Robot engagement
  • Robot teaching
  • Transparent AI
