Investigating cooperation with robotic peers

Debora Zanatto, Massimiliano Patacchiola, Jeremy Goslin, Angelo Cangelosi

Research output: Contribution to journal › Article › peer-review

Abstract

We explored how people establish cooperation with robotic peers by giving participants the choice of whether or not to cooperate with a more or less selfish robot, which was also more or less interactive, in a more or less critical environment. We measured participants' tendency to cooperate with the robot, as well as their perception of its anthropomorphism, trustworthiness, and credibility, through questionnaires. We found that cooperation in Human-Robot Interaction (HRI) follows the same rules as Human-Human Interaction (HHI): participants rewarded cooperation with cooperation and punished selfishness with selfishness. We also identified two specific robotic profiles capable of increasing cooperation, depending on the payoff: a mute, non-interactive robot was preferred when the payoff was high, whereas a more human-behaving robot was preferred when the payoff was low. Taken together, these results suggest that genuine cooperation in HRI is possible but depends on the complexity of the task.
Original language: English
Article number: e0225028
Pages (from-to): 1-17
Number of pages: 17
Journal: PLoS ONE
Volume: 14
Issue number: 11
Early online date: 20 Nov 2019
DOIs
Publication status: Published - 2020
