I primarily conduct research at the intersection of moral psychology and AI, but I also study moral cognition and its development more broadly.
I've listed some of my current projects and interests below. Click the drop-down arrow to learn more.
Human social relationships are complex. We hold differing expectations for any number of relationships: how a doctor ought to act with their patients, a manager with their employees, a parent with their children. What someone expects of an ideal “manager-employee” relationship is likely distinct from what they expect of an ideal “romantic” relationship. How will the rise of artificial intelligence complicate people’s expectations about the function of these relationships?
In this branch of my research, I trace how cooperative norms vary across different kinds of human-human and human-AI relationships.
A virtuous circle links developmental psychology and artificial intelligence: insights from one field can inform progress in the other. Over decades of empirical research, developmental psychologists have built robust tests for tracking human moral development. Can these methods be translated into tests of artificial moral intelligence?
In this branch of my research, I consider how insights from developmental moral psychology can inform the creation of safe and reliable AI systems.
Example publication: Weidinger, L., Reinecke, M.G., & Haas, J. (2022) [paper]
Human morality hinges not only on what other agents do but also on why they do it. Can we tell whether an agent acts ‘for the right reasons’ strictly by observing their behavior? Relative to non-moral action, morally motivated behavior should persist despite mounting cost. By gauging an agent’s insensitivity to cost, then, we can gain deeper insight into whether their underlying motivations qualify as moral.
In this branch of my research, I examine the relationship between cost insensitivity and agent motivation from both theoretical and technical perspectives.
Theoretical paper: Reinecke* & Mao* et al. (2023)
Technical paper (NeurIPS): Mao* & Reinecke* et al. (2023)
Humans face a moral puzzle: we have to determine who (or what) matters, morally speaking. Of all the potential creatures we could care about, whom should we care about? Some such judgments come easily: babies and young children, for example, seem obvious candidates for having moral rights and deserving our protection from harm. Yet other determinations seem less obvious. Does an unborn fetus have the same value as a two-year-old? Does the family dog count just as much as a human relative, or do humans always matter more than non-humans?
These are difficult questions to answer. In this branch of my research, I investigate how children and adults weigh others’ mental capacities and species membership when determining their moral standing.
Example publication: Reinecke, M.G., Wilks, M., & Bloom, P. (2025) [paper]
Imagine that morality were entirely different from the way it is: what seems morally wrong would actually be morally right, and vice versa. Would such a change even be possible?
In this branch of my research, I investigate the intersection of modal cognition (i.e., cognition about possibility) and moral cognition. In a series of studies with children and adults, I find that people see some aspects of morality (e.g., the wrongness of hurting others for fun) as absolutely unchangeable, shedding new light on what they regard as most fundamental to the moral and social world.
Example publication: Reinecke, M.G. & Solomon, L.H. (2023) [paper]