Human-Robot Interaction (HRI)

Causes and calibrations of trust in robots

This research investigates whether a robot that communicates its understanding of human norms can calibrate human trust in the robot. In future soldier-robot teams, human trust in the robotic teammate(s) will be needed to support successful peer-to-peer interactions. Because that trust can be violated when a robot behaves in unexpected or norm-inconsistent ways, it is important to investigate how the robot's actions and capabilities can help mitigate such violations and appropriately calibrate the human's level of trust in the robotic teammate. For instance, a robot that communicates what it perceives to be the relevant norms in a situation may signal that it grasps the critical coordinating mechanisms that underlie the team's goals across missions and tasks. Team actions, in particular, rely on norms to coordinate, streamline, and legitimize team behavior, and team members who follow these norms can be trusted and relied on. It therefore stands to reason that humans who act jointly with robots in teams expect their robot partners to be aware of and follow many of the same norms that they themselves follow. If robots in fact do so, human team members are likely to trust them, and justifiably so.

Measuring humanlikeness

In the current project, we aim to unpack the abstract yet important concept of “human-likeness” and further investigate how the appearance of a humanoid robot influences people’s perception of the robot. We will take two steps to achieve this goal. First, we will focus on the physical appearance of the robot, that is, how physically humanlike a robot looks; we will categorize humanlike features in robots in order to develop a systematic, theory-driven scale of robots’ physical similarity to humans. Second, we will examine how a robot’s physical similarity to humans may influence people’s assumptions regarding the robot’s mental capacities.

Psychosocial Support for Type 1 Diabetes Provided by Jerry the Bear

Sproutel has developed and produced Jerry the Bear, an interactive toy that helps children with type 1 diabetes learn about medical procedures and treatment through play. Sproutel and Brown University’s HCRI are partnering to assess the effectiveness of the Jerry the Bear platform and of a new prototype for delivering healthcare information to children. The current iteration of Jerry the Bear and the new prototype will be tested against one another, and against participants’ baseline diabetes management prior to receiving Jerry the Bear. This research will help Sproutel improve Jerry the Bear, support the sales and expansion of the platform, and provide the team with knowledge about the product’s psychosocial benefits.

Examining participant drawings of robots

The purpose of this paper is to report on research conducted to understand user expectations regarding robotic form across several domains. We will report on three independent studies, conducted at three universities in two countries, each designed to gain a better understanding of user expectations of robot form.

Media outlets

IEEE Spectrum video on our moral robotics work

References

Malle, B. F., Scheutz, M., Forlizzi, J., and Voiklis, J. (2016). Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In HRI ’16: Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand (pp. 125-132). IEEE.

Zhao, X., Cusimano, C., and Malle, B. F. (2016). Do people spontaneously take a robot’s visual perspective? In HRI ’16: Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand (pp. 335-342).

Zhao, X., Cusimano, C., and Malle, B. F. (2015). In search of triggering conditions for spontaneous visual perspective taking. In Noelle, D. C., Dale, R., Warlaumont, A. S., Yoshimi, J., Matlock, T., Jennings, C. D., and Maglio, P. P. (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (pp. 2811-2816). Austin, TX: Cognitive Science Society.

Malle, B. F., Scheutz, M., Voiklis, J., Arnold, T., and Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In HRI'15: Proceedings of the Tenth Annual 2015 ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR (pp. 117-124). New York, NY: ACM. doi:10.1145/2696454.2696458

Zhao, X., Cusimano, C., and Malle, B. F. (2015). Do people spontaneously take a robot’s visual perspective? In HRI'15: Proceedings of the Tenth Annual 2015 ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR: Extended Abstracts. doi:10.1145/2701973.2702044

Malle, B. F. (2014). Moral competence in robots? In Seibt, J., Hakli, R., and Nørskov, M. (Eds.), Sociable robots and the future of social relations: Proceedings of Robo-Philosophy 2014 (pp. 189-198). Amsterdam, Netherlands: IOS Press.

Malle, B. F., and Scheutz, M. (2014). Moral competence in social robots. In Proceedings of IEEE International Symposium on Ethics in Engineering, Science, and Technology, Ethics’2014 (pp. 30–35). Red Hook, NY: Curran Associates/IEEE Computer Society. doi:10.1109/ETHICS.2014.6893446

Scheutz, M., and Malle, B. F. (2014). “Think and do the right thing” – a plea for morally competent autonomous robots. In Proceedings of IEEE International Symposium on Ethics in Engineering, Science, and Technology, Ethics’2014 (pp. 36–39). Red Hook, NY: Curran Associates/IEEE Computer Society. doi:10.1109/ETHICS.2014.6893457