Topics:
1. Morally competent robots
As part of a large collaborative project with Matthias Scheutz and Selmer Bringsjord, we have identified the components of human moral competence and begun to ask what it would take to implement some of these components in a robotic agent (Malle & Scheutz, 2014, 2019). Our initial focus has been on three questions: (1) What would a norm system look like in a robot when it is constrained to share some fundamental properties of human norm systems? (2) What would constitute a foundational moral vocabulary that allows an artificial agent to learn and communicate about moral events and to use this information to guide its own actions? (3) How do people perceive robots that make moral decisions, and how, in particular, might those perceptions differ from their perceptions of humans who make the same moral decisions?
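To make question (1) concrete, the sketch below shows one way a norm system sharing some properties of human norm systems (context-specificity, prescription vs. prohibition, and graded strength) might be represented in code. It is purely illustrative: the class, contexts, actions, and numeric strengths are hypothetical assumptions, not the representation developed in this project.

```python
# Hypothetical sketch of a context-sensitive, graded norm representation.
# All names and values are illustrative; this is not the project's implementation.
from dataclasses import dataclass

@dataclass
class Norm:
    action: str       # the action the norm regulates
    context: str      # the situation in which the norm is activated
    deontic: str      # "prescribed" or "prohibited"
    strength: float   # graded importance, from 0 (weak) to 1 (near-inviolable)

NORM_BASE = [
    Norm("greet_visitor", "reception", "prescribed", 0.4),
    Norm("enter_room_unannounced", "patient_room", "prohibited", 0.7),
    Norm("share_medical_records", "any", "prohibited", 0.95),
]

def applicable_norms(context: str):
    """Return only the norms activated by the current context."""
    return [n for n in NORM_BASE if n.context in (context, "any")]

def violation_severity(action: str, context: str) -> float:
    """Crude severity score: strength of the strongest prohibition the action breaks."""
    broken = [n.strength for n in applicable_norms(context)
              if n.deontic == "prohibited" and n.action == action]
    return max(broken, default=0.0)

print(violation_severity("share_medical_records", "patient_room"))  # 0.95
```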
Some work on the properties of norms is described here (see also Malle, Scheutz, & Austerweil, 2017); work on building a moral vocabulary is described here. The question of how humans perceive robots (or AI) that make moral decisions inspired a series of studies (Malle et al., 2015, 2016; Malle, Thapa Magar, & Scheutz, 2019) in which we made three discoveries: First, about two thirds of people have no problem treating robots or AI as moral decision makers and systematically assign blame to them (following familiar information-processing rules; Voiklis, Kim, Cusimano, & Malle, 2016). Second, when people morally evaluate these artificial agents, they apply the same norms of what the robot agent
2. How humans explain robot behaviors
Increasingly, there are calls for making robots (and, indeed, any autonomous intelligent agent) transparent and understandable. This means that designers and researchers need to know what it takes for a human to "understand" a robot. Our thesis is that people will regard most autonomous machines as intentional agents and will apply to them the conceptual framework and psychological mechanisms of human behavior explanation (de Graaf & Malle, 2017, 2018, 2019).
3. Causes and calibrations of trust in robots
In future human-robot teams, human trust in the robotic teammate will be needed to support successful interactions. This project investigates whether a robot that communicates its understanding of the applicable norms instills calibrated, justified human trust. Joint actions among two or more partners rely on norms to coordinate, streamline, and legitimize behavior, and partners who follow these norms can be trusted and relied on. It therefore stands to reason that humans who act jointly with robots expect their robot partners to be aware of and follow the appropriate norms (both those that apply to all teammates and those that may apply only to the robot). If robots communicate their awareness of those norms and act in accordance with them, the human partners gain justified trust in the robot's behavior. If robots violate these norms, they must explain and justify their behavior in ways that are understandable to people. (This project therefore dovetails with the above project on explanations of robot behavior.) We are currently developing a paradigm that simulates joint human-robot action to experimentally assess the impact of norm communication and explanation on trust. We are also developing a new measure of trust that is short yet captures the multidimensionality of trust (from competence and reliability to sincerity and ethical integrity; see Ullman & Malle, 2018). With this measure in hand, we can track in precise and practical ways how much trust, and which kind of trust, people invest in robots.
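To illustrate how such a short, multidimensional measure can yield dimension-specific trust scores, the sketch below averages Likert-type item ratings within each dimension named above. The items, the number of items per dimension, and the 0-7 response scale are hypothetical assumptions for illustration, not the published instrument.

```python
# Illustrative scoring sketch for a multidimensional trust measure.
# Dimension names follow the text above; items and the 0-7 scale are assumptions.
from statistics import mean

TRUST_DIMENSIONS = {
    "reliability":       ["reliable", "consistent", "predictable"],
    "competence":        ["capable", "skilled", "competent"],
    "sincerity":         ["sincere", "genuine", "candid"],
    "ethical_integrity": ["ethical", "principled", "has_integrity"],
}

def score_trust(ratings):
    """Average the item ratings within each dimension; skip unanswered items."""
    profile = {}
    for dimension, items in TRUST_DIMENSIONS.items():
        answered = [ratings[item] for item in items if item in ratings]
        profile[dimension] = mean(answered) if answered else None
    return profile

# Example: one participant's ratings of a robot teammate after a joint task
ratings = {"reliable": 6, "consistent": 5, "predictable": 6,
           "capable": 7, "skilled": 6, "competent": 6,
           "sincere": 3, "genuine": 4,
           "ethical": 5, "principled": 5, "has_integrity": 4}
print(score_trust(ratings))
```

Tracking such a dimension-by-dimension profile over repeated interactions is one way to capture not only how much trust people invest in a robot but which kind.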
4. Measuring human-likeness
In initial work we have shown that people take the visual perspective of a robot to the extent that it looks human-like: not for Thymio, moderately for Nao, and strongly for the android Erica. In fact, the effect of human-likeness on perspective taking is so strong and automatic that even when Erica, the android, is introduced as a "mannequin," it triggers strong perspective taking (Zhao, Cusimano, & Malle, 2016, in preparation).
In an additional project, we are unpacking the abstract concept of "human-likeness" and investigating which specific features of appearance influence people's perceptions of a humanoid robot. We have catalogued the specific features that make up a human-like appearance and identified a small number of fundamental dimensions that group these features and determine people's overall perceptions of human-likeness (Phillips, Zhao, Ullman, & Malle, 2018). We have also begun to examine which features and dimensions of human-likeness influence people's judgments of a robot's social and psychological capacities: how much "mind" it contains and what kind of mind (Zhao, Phillips, & Malle, in preparation).
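As a rough illustration of the kind of analysis involved (grouping appearance features into a few dimensions and relating those dimensions to human-likeness judgments), here is a sketch using principal component analysis and linear regression. The feature names, number of components, and randomly generated data are placeholders; this is not the analysis pipeline or the data behind the cited papers.

```python
# Illustrative sketch: reduce robot appearance features to a few dimensions
# and relate those dimensions to rated human-likeness. All data are random
# placeholders, so the printed numbers are meaningless except as a demo.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

feature_names = ["eyes", "eyelids", "mouth", "skin", "arms",
                 "fingers", "legs", "torso", "wheels", "treads"]
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, len(feature_names)))  # 40 hypothetical robots x binary features
humanlikeness = rng.uniform(0, 100, size=40)            # mean human-likeness rating per robot (0-100)

# Step 1: group correlated features into a small number of dimensions
pca = PCA(n_components=3)
dimension_scores = pca.fit_transform(X)

# Step 2: ask how well those dimensions predict overall human-likeness
model = LinearRegression().fit(dimension_scores, humanlikeness)
print("Feature variance captured:", pca.explained_variance_ratio_.round(2))
print("R^2 for human-likeness:", round(model.score(dimension_scores, humanlikeness), 2))
```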
5. Applied research on robotic technology
The Providence company Sproutel has developed and produced Jerry the Bear, an interactive toy for children with type 1 diabetes that helps them learn about medical procedures and treatment through play. Sproutel, our lab, and Brown University's Humanity Centered Robotics Initiative are partnering to assess the effectiveness of the Jerry the Bear platform and of a new prototype for delivering healthcare information to children. This research will help Sproutel improve Jerry the Bear and will yield knowledge about the toy's psychosocial benefits.
Current and future health-care personnel, along with family caregivers, will be unable to treat or even monitor the wide-ranging challenges of the aging population: from anxiety to loneliness, from dementia to physical disability. With a project we call "Affordable Robotic Intelligence for Elderly Support" (ARIES), we hope to provide one element in alleviating these challenges. ARIES will provide affordable assistance with small but challenging tasks of daily living, such as finding keys or reading glasses; remembering medication or appointments; connecting with friends and family; and relieving agitation and loneliness. In collaboration with Ageless Innovation, we are expanding an existing animal-like robot and installing advanced capabilities for object tracking, communication, and machine learning. The new robot will help older adults find misplaced objects and remember medication; calm anxiety and connect with loved ones and caregivers; and learn about their habits and detect deviations such as falls.
Press and Media products
References
de Graaf, M. M. A., & Malle, B. F. (2017). How people explain action (and autonomous intelligent systems should too). In 2017 AAAI Fall Symposium Series Technical Reports (pp. 19-26). Palo Alto, CA: AAAI Press.
de Graaf, M. M. A., & Malle, B. F. (2018). People's judgments of human and robot behaviors: A robust set of behaviors and some discrepancies. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI '18 (pp. 97-98). New York, NY: ACM. doi:10.1145/3173386.3177051
de Graaf, M. M. A., & Malle, B. F. (2019). People's explanations of robot behavior subtly reveal mental state inferences. In Proceedings of the 2019 ACM/IEEE International Conference on Human-Robot Interaction, HRI '19. New York, NY: ACM.
Malle, B. F., & Scheutz, M. (2014). Moral competence in social robots. In Proceedings of the IEEE International Symposium on Ethics in Engineering, Science, and Technology, Ethics 2014 (pp. 30-35). Red Hook, NY: Curran Associates/IEEE Computer Society. doi:10.1109/ETHICS.2014.6893446
Malle, B. F., & Scheutz, M. (2019). Learning how to behave: Moral competence for social robots. In O. Bendel (Ed.), Handbuch Maschinenethik [Handbook of machine ethics], Springer Reference Geisteswissenschaften. Wiesbaden, Germany: Springer. doi:10.1007/978-3-658-17484-2_17-1
Malle, B. F., Scheutz, M., Forlizzi, J., & Voiklis, J. (2016, March). Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI '16 (pp. 125-132). Piscataway, NJ: IEEE Press. doi:10.1109/HRI.2016.7451743
Malle, B. F., Scheutz, M., Voiklis, J., Arnold, T., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI '15, Portland, OR (pp. 117-124). New York, NY: ACM. doi:10.1145/2696454.2696458
Malle, B. F., Thapa Magar, S., & Scheutz, M. (2019). AI in the sky: How people morally evaluate human and machine decisions in a lethal strike dilemma. In I. Aldinhas Ferreira, J. Silva Sequeira, G. S. Virk, E. E. Kadar, & O. Tokhi (Eds.), Robots and well-being. Cham, Switzerland: Springer.
Phillips, E., Zhao, X., Ullman, D., & Malle, B. F. (2018). What is human-like? Decomposing robots' human-like appearance using the Anthropomorphic roBOT (ABOT) Database. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI '18 (pp. 105-113). New York, NY: ACM. doi:10.1145/3171221.3171268
Scheutz, M., & Malle, B. F. (2014). "Think and do the right thing": A plea for morally competent autonomous robots. In Proceedings of the IEEE International Symposium on Ethics in Engineering, Science, and Technology, Ethics 2014 (pp. 36-39). Red Hook, NY: Curran Associates/IEEE Computer Society. doi:10.1109/ETHICS.2014.6893457
Ullman, D., & Malle, B. F. (2018). What does it mean to trust a robot? Steps toward a multidimensional measure of trust. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI '18 (pp. 263-264). New York, NY: ACM. doi:10.1145/3173386.3176991
Voiklis, J., Kim, B., Cusimano, C., & Malle, B. F. (2016). Moral judgments of human vs. robot agents. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016) (pp. 775-780). Piscataway, NJ: IEEE. doi:10.1109/ROMAN.2016.7745207
Zhao, X., Cusimano, C., & Malle, B. F. (2016). Do people spontaneously take a robot's visual perspective? In Proceedings of the Eleventh Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI '16, Christchurch, New Zealand (pp. 335-342).