Human-Robot Interaction (HRI)

Morally competent robots

As part of a large collaborative project with Matthias Scheutz and Selmer Bringsjord, we have identified the components of human moral competence and begun to ask what it would take to implement some of these components in a robotic agent (Malle & Scheutz, 2014). Our initial focus has been on three questions: (1) What would a norm system look like in a robot if it is constrained to share some fundamental properties of human norm systems? (2) What would constitute a foundational moral vocabulary that allows an artificial agent to learn and communicate about moral events and to use this information to guide its own actions? (3) How do people perceive robots that make moral decisions, and how, in particular, might those perceptions differ from their perceptions of humans who make the same decisions?

Some work on the properties of norms is described here (see also Malle, Scheutz, & Austerweil, 2017); work on building a moral vocabulary is described here. The question of how humans perceive robots (or AI) that make moral decisions inspired a series of studies (Malle et al., 2015, 2016; Malle, Thapa Magar, & Scheutz, 2018) in which we made three discoveries. First, about two-thirds of people have no problem treating robots or AI as moral decision makers and systematically assign blame to them (following familiar information-processing rules; Voiklis, Kim, Cusimano, & Malle, 2016). Second, when people morally evaluate artificial agents, they apply the same norms for what the agent should do as they apply to humans. Third, however, they do not always assign the same degrees of blame to artificial agents as they do to humans. Specifically, people sometimes blame a robot more than a human for the very same action when the reason that justifies the action is available only to a human agent. For example, people blame a human less for refusing to sacrifice one individual to save four people in a moral dilemma, because they can understand (and simulate in their own mind) how difficult it must be to make this sacrifice. This kind of simulation and understanding is not available when people consider and evaluate a robot's actions. Strikingly, however, when the robot is described as "struggling with" the decision, people blame that robot less as well, and no more than they blame the human (in preparation).

How humans explain robot behaviors

Increasingly, there are calls for making robots (and indeed any autonomous intelligent agent) transparent and understandable. This means that designers and researchers need to know what it takes for a human to "understand" a robot. Our thesis is that people will regard most autonomous machines as intentional agents and apply to them the conceptual framework and psychological mechanisms of human behavior explanation (de Graaf & Malle, 2017). In this project we use a well-supported theory of how people explain human behavior (Malle, 2004, 2011; see here) to examine how people explain robot behavior. As a first step, we have identified behaviors that are perceived very similarly when performed by humans and by robots (similar in intentionality, valence, and surprisingness; de Graaf & Malle, 2018); as a second step, we have collected a large number of explanations to document how people's explanations of robot behavior resemble and differ from corresponding explanations of human behavior. The payoffs will be considerable: we not only gain insight into people's perceptions of robots as intentional agents (and likely future members of human communities) but also provide a template for how robots could explain their own behavior in ways that are understandable to people.

Causes and calibrations of trust in robots

In future human-robot teams, human trust in the robotic teammate will be needed to support successful interactions. This project investigates whether a robot that communicates its understanding of applicable norms instills justified, well-calibrated human trust. Joint actions between two or more partners rely on norms to coordinate, streamline, and legitimize behavior, and partners who follow these norms can be trusted and relied on. It therefore stands to reason that humans who act jointly with robots expect their robot partners to be aware of and follow the appropriate norms (both those that apply to all teammates and those that may apply only to the robot). If robots communicate their awareness of those norms and act in accordance with them, their human partners gain justified trust in the robots' behavior. If robots violate these norms, they must explain and justify their behavior in ways that are understandable to people. (This project therefore dovetails with the above project on explanations of robot behavior.) Currently we are developing a paradigm that simulates joint human-robot actions so we can experimentally assess the impact of norm communication and explanation on trust. We are also in the final stages of developing a new measure of trust that is short yet captures the multidimensionality of trust (from competence and reliability to sincerity and ethical integrity; see Ullman & Malle, 2018). With this measure in hand, we can track in precise and practical ways how much trust, and which kind of trust, people invest in robots.
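
As a rough illustration of what scoring such a multidimensional measure could look like, here is a minimal sketch in Python. The four subscale names echo the dimensions mentioned above, but the item labels, 0-100 rating scale, and item groupings are placeholder assumptions, not the published instrument from Ullman & Malle (2018).

# Minimal sketch of scoring a multidimensional trust measure.
# Assumption: items are rated on a 0-100 scale and grouped into four
# hypothetical subscales; the actual instrument may use different items,
# scales, and groupings.

from statistics import mean

TRUST_DIMENSIONS = {
    "reliable": ["consistent", "predictable", "dependable"],
    "capable":  ["competent", "skilled", "capable"],
    "sincere":  ["genuine", "candid", "sincere"],
    "ethical":  ["principled", "has integrity", "ethical"],
}

def score_trust(ratings: dict[str, float]) -> dict[str, float]:
    """Average the item ratings within each dimension; skip unanswered items."""
    scores = {}
    for dimension, items in TRUST_DIMENSIONS.items():
        answered = [ratings[item] for item in items if item in ratings]
        scores[dimension] = mean(answered) if answered else float("nan")
    return scores

# Example: one participant's (made-up) ratings of a robot teammate.
example = {"consistent": 80, "predictable": 70, "dependable": 75,
           "competent": 90, "skilled": 85, "capable": 88,
           "genuine": 40, "candid": 45, "sincere": 50,
           "principled": 60, "has integrity": 55, "ethical": 65}
print(score_trust(example))

Keeping the dimensions separate rather than collapsing them into a single score is what allows us to track not only how much trust but which kind of trust people invest in a robot.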

Measuring human-likeness

In initial work we have shown that people take the visual perspective of a robot to the extent that it looks human-like (e.g., not for Thymio, moderately for Nao, and strongly for the android Erica). In fact, the effect of human-likeness on perspective taking is so strong and automatic that even when Erica, the android, is introduced as a "mannequin," it triggers strong perspective taking (Zhao, Cusimano, & Malle, 2016, in preparation). In an additional project, we are unpacking the abstract concept of "human-likeness" and investigating which specific features of appearance influence people's perceptions of a humanoid robot. We have catalogued the specific features that make up a human-like appearance and identified a small number of fundamental dimensions that group these features and determine people's overall perceptions of human-likeness (Phillips, Zhao, Ullman, & Malle, 2018). We have also begun to examine which features and dimensions of human-likeness influence people's judgments of a robot's social and psychological capacities -- how much "mind" it contains and what kind of mind (Zhao, Phillips, & Malle, in preparation).
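
To make the logic of this decomposition concrete, here is a minimal sketch of how overall human-likeness ratings might be predicted from a few appearance dimensions. The feature names, groupings, robot codings, and ratings below are hypothetical placeholders; the actual features, dimensions, and analyses in Phillips, Zhao, Ullman, & Malle (2018) differ in scope and method.

# Minimal sketch: predict overall human-likeness ratings from appearance
# dimensions. All feature names, codings, and ratings below are made up.
import numpy as np

# Each row: one robot coded 0/1 on two features per dimension (hypothetical).
features = {
    "surface_look":      [[1, 1], [0, 1], [0, 0], [1, 0], [0, 0]],  # e.g., skin, apparel
    "facial_features":   [[1, 1], [1, 0], [0, 0], [1, 1], [0, 1]],  # e.g., eyes, mouth
    "body_manipulators": [[1, 1], [1, 1], [0, 1], [1, 0], [0, 0]],  # e.g., arms, hands
}
humanlikeness = np.array([85.0, 45.0, 10.0, 70.0, 20.0])  # 0-100 ratings (made up)

# Dimension score = proportion of that dimension's features the robot has.
X = np.column_stack([np.mean(features[d], axis=1) for d in features])
X = np.column_stack([np.ones(len(humanlikeness)), X])  # add intercept column

# Ordinary least squares: weights show how much each dimension contributes
# to the overall human-likeness rating in this toy data set.
weights, *_ = np.linalg.lstsq(X, humanlikeness, rcond=None)
print(dict(zip(["intercept", *features], weights.round(2))))

In this kind of setup, the estimated weights indicate how strongly each appearance dimension drives the overall impression of human-likeness.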

Applied research on robotic technology

The Providence-based company Sproutel has developed and produced Jerry the Bear, an interactive toy for children with type 1 diabetes that helps them learn about medical procedures and treatment through play. Sproutel and our lab, together with Brown University's Humanity Centered Robotics Initiative, are partnering to assess the effectiveness of the Jerry the Bear platform and of a new prototype for delivering healthcare information to children. This research will help Sproutel improve Jerry the Bear and will yield knowledge about the toy's psychosocial benefits.

Current and future health-care personnel, along with family caregivers, will be unable to treat or even monitor the wide-ranging challenges of the aging population -- from anxiety to loneliness, from dementia to physical disability. With a project we call "Affordable Robotic Intelligence for Elderly Support" (ARIES), we hope to provide one element of a response to these challenges. ARIES will provide affordable assistance with small but challenging tasks of daily living, such as finding keys or reading glasses; remembering medication or appointments; connecting with friends and family; and relieving agitation and loneliness. In collaboration with Ageless Innovation, we are expanding an existing animal-like robot with advanced capabilities of object tracking, communication, and machine learning. The new robot will help older adults find misplaced objects and remember their medication; calm anxiety and connect them with loved ones and caregivers; and learn their habits and detect deviations, such as falls.

Press and Media products

Earlier press coverage and Colbert

IEEE Spectrum video on our moral robotics work

References

de Graaf, M., & Malle, B. F. (2017). How people explain action (and autonomous intelligent systems should too). In 2017 AAAI Fall Symposium Series Technical Reports (pp. 19-26). Palo Alto, CA: AAAI Press.

de Graaf, M. M. A., & Malle, B. F. (2018). People's judgments of human and robot behaviors: A robust set of behaviors and some discrepancies. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI '18 (pp. 97-98). New York, NY: ACM. doi:10.1145/3173386.3177051

Malle, B. F., Scheutz, M., Forlizzi, J., & Voiklis, J. (2016, March). Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI'16 (pp. 125-132). Piscataway, NJ: IEEE Press. doi:10.1109/HRI.2016.7451743

Malle, B. F., Scheutz, M., Voiklis, J., Arnold, T., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In HRI'15: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR (pp. 117-124). New York, NY: ACM. doi:10.1145/2696454.2696458

Malle, B. F., & Scheutz, M. (2014). Moral competence in social robots. In Proceedings of IEEE International Symposium on Ethics in Engineering, Science, and Technology, Ethics 2014 (pp. 30-35). Red Hook, NY: Curran Associates/IEEE Computer Society. doi:10.1109/ETHICS.2014.6893446

Phillips, E., Zhao, X., Ullman, D., & Malle, B. F. (2018). What is human-like? Decomposing robots' human-like appearance using the Anthropomorphic roBOT (ABOT) Database. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI '18 (pp. 105-113). New York, NY: ACM. doi:10.1145/3171221.3171268

Scheutz, M., & Malle, B. F. (2014). "Think and do the right thing": A plea for morally competent autonomous robots. In Proceedings of IEEE International Symposium on Ethics in Engineering, Science, and Technology, Ethics 2014 (pp. 36-39). Red Hook, NY: Curran Associates/IEEE Computer Society. doi:10.1109/ETHICS.2014.6893457

Ullman, D., & Malle, B. F. (2018). What does it mean to trust a robot? Steps toward a multidimensional measure of trust. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI '18 (pp. 263-264). New York, NY: ACM. doi:10.1145/3173386.3176991

Voiklis, J., Kim, B., Cusimano, C., & Malle, B. F. (2016). Moral judgments of human vs. robot agents. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016) (pp. 775-780). Piscataway, NJ: IEEE. doi:10.1109/ROMAN.2016.7745207

Zhao, X., Cusimano, C., & Malle, B. F. (2016). Do people spontaneously take a robot's visual perspective? In HRI'16: Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand (pp. 335-342). Piscataway, NJ: IEEE Press.