Hi, I’m Madeline G. Reinecke. I also go by “Gracie.”

I’m a moral psychologist / cognitive scientist. I research moral cognition, both in humans and in artificial intelligence.

I recently earned my PhD from Yale University (Fall 2023) and began a postdoctoral fellowship in Collective Moral Development at the University of Oxford. I’m jointly housed within the Psychiatry Department (NEUROSEC) and the Uehiro Centre for Practical Ethics (bioXphi).

Before Yale, I studied Psychology and Philosophy at the University of Illinois at Urbana-Champaign. I interned at Google DeepMind in 2022.

Here’s a recent version of my CV.

I’m always happy to chat, so please feel free to reach out.

e-mail: madeline.reinecke[at]psych.ox.ac.uk

CURRENT PROJECTS (HUMANS)

Humans face a moral puzzle. We have to determine who (or what) matters, morally speaking. Of all the potential creatures we could care about, who should we care about? Some such judgments come easily: Babies and young children, for example, seem obvious candidates for having moral rights and deserving our protection from harm. Yet other determinations seem less obvious. Does an unborn fetus have the same value as a two-year-old? Does the family dog count just as much as a human relative — or is it true that humans always matter more than non-humans?

These are difficult questions to answer. In this branch of my research, I investigate how children and adults weigh others’ mental capacities when judging whether they can suffer (and whether that suffering matters, morally speaking).

MORAL STANDING

Example publication: Reinecke, M.G., Wilks, M., & Bloom, P. (2021) [paper]

Imagine that morality were entirely different from the way it is: what seems morally wrong is actually morally right, and vice versa. Would such changes even be possible?

In this branch of my research, I investigate the intersection of modal cognition (i.e., cognition about possibility) and moral cognition. In a series of studies with children and adults, I find that people see some aspects of morality (e.g., that hurting others for fun is wrong) as absolutely unchangeable, shedding light on what they take to be most fundamental to the moral and social world.

MORALITY + POSSIBILITY

Example publication: Reinecke, M.G. & Solomon, L.H. (2023) [paper]

CURRENT PROJECTS (AI)

Developmental psychology and artificial intelligence form a virtuous circle: insights from one domain can inform developments in the other. Over decades of empirical research, developmental psychologists have built well-validated paradigms for tracking human moral development. Can these methods be translated into tests of artificial moral intelligence?

In this branch of my research, I consider how insights from developmental moral psychology can guide the creation of safe and reliable AI.

ANALYZING ARTIFICIAL MORAL COGNITION

Example publication: Weidinger, L., Reinecke, M.G., & Haas, J. (2022) [paper]

Human morality hinges not only on what other agents do, but on why they do it. Can we tell whether an agent acts ‘for the right reasons’ strictly by observing its behavior? Relative to non-moral action, morally motivated behavior should persist despite mounting cost. By gauging an agent’s insensitivity to cost, we can gain deeper insight into whether its underlying motivations qualify as moral.
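
To make the cost-insensitivity idea concrete, here is a minimal sketch (my illustration, not code from the papers below). It assumes a toy utility-threshold agent: the agent helps only while the value it places on helping exceeds the current cost, and we sweep the cost upward to find the point of abandonment. The agent names and utility values are hypothetical.

    def helps(utility_of_helping, cost):
        # The toy agent helps iff the value it places on helping exceeds the cost.
        return utility_of_helping > cost

    def abandonment_cost(utility_of_helping, costs):
        # Return the first cost at which the agent stops helping
        # (None if it never stops within the sweep).
        for cost in costs:
            if not helps(utility_of_helping, cost):
                return cost
        return None

    costs = [c / 10 for c in range(0, 101)]  # candidate costs from 0.0 to 10.0

    # Two hypothetical agents: one places high intrinsic value on helping,
    # one helps only while helping is nearly free.
    for name, utility in [("cost-insensitive agent", 8.0),
                          ("cost-sensitive agent", 0.5)]:
        print(name, "abandons helping at cost", abandonment_cost(utility, costs))

Under this toy model, the point of abandonment serves as a behavioral signature: the more cost-insensitive the agent, the longer it keeps helping as the cost mounts.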

In this branch of my research, I examine the relationship between cost insensitivity and agent motivation from a theoretical and technical perspective.

EVALUATING AGENT MOTIVATION

NeurIPS (technical paper): Mao* & Reinecke* et al. (2023) [paper]

Theoretical paper: Reinecke* & Mao* et al. (2023) [paper]

(*equal contribution)