“I’ve been interested in the topic of free will for a while,” Frank Martela tells me. Martela is a philosopher and psychology researcher at Aalto University, in Finland. His work revolves around the fundamentals of the human condition and the perpetual philosophical question: what makes a good life? But his work on humans took a detour to look at artificial intelligence (AI).
“I was following stories about the latest developments in large language models when it suddenly came to my mind that they actually fulfill the three conditions for free will.”

Martela’s latest study draws on the concept of functional free will.
Functional free will is a term that attempts to reconcile the age-old debate between determinism and free agency. It does this not by answering whether we are “truly free” in an absolute sense, but by reframing the question around how free will works in practice, especially in biological and psychological systems.
“It means that if we cannot explain somebody’s behavior without assuming that they have free will, then that somebody has free will. In other words, if we observe something (a human, an animal, a machine) ‘from the outside’ and must assume that it makes free choices to be able to understand its behavior, then that something has free will.”
Does AI have free will?
Martela argues that functional free will is the best way to approach the question, because we can never really observe anything “from the inside.” He builds on the work of philosopher Christian List, who frames free will as a three-part capacity involving:
- intentional agency: the agent’s actions stem from deliberate intentions rather than being reflexive or accidental;
- alternative possibilities: the agent has access to more than one course of action in meaningful situations. This doesn’t require escaping causality, but rather internal mechanisms (like deliberation and foresight) that allow for multiple real options;
- causal control: the agent’s actions are not random or externally coerced, but are caused by its own states or intentions.
“If something meets all three conditions, then we can’t but conclude that it has free will,” Martela tells ZME Science.

The new study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional killer drones with the cognitive function of today’s unmanned aerial vehicles.
“Both seem to meet all three conditions of free will — for the latest generation of AI agents, we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,” says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs.
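To make this concrete, here is a minimal, hypothetical Python sketch of the kind of deliberation loop that LLM-driven agents such as Voyager run: the agent holds a goal (intentional agency), entertains several candidate actions (alternative possibilities), and selects one because of its own internal state rather than chance or external coercion (causal control). The function names and hard-coded options below are illustrative assumptions, not code from the study or from Voyager itself.

```python
# A minimal, hypothetical sketch of a goal-directed agent loop, loosely in the
# spirit of LLM agents like Voyager. The propose/choose functions stand in for
# an LLM call; none of this is code from the study or from Voyager.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str                                # intentional agency: behavior flows from a held goal
    observations: list = field(default_factory=list)

def propose_actions(state: AgentState) -> list[str]:
    """Alternative possibilities: the agent entertains more than one option.
    A real agent would have an LLM generate these; here they are hard-coded."""
    return ["gather wood", "craft pickaxe", "explore cave"]

def choose(state: AgentState) -> str:
    """Causal control: the choice is driven by the agent's own state (its goal),
    not by randomness or outside coercion."""
    options = propose_actions(state)
    return max(options, key=lambda a: 1.0 if a.split()[0] in state.goal else 0.0)

if __name__ == "__main__":
    agent = AgentState(goal="craft a stone pickaxe")
    action = choose(agent)
    agent.observations.append(f"did: {action}")
    print(action)  # -> "craft pickaxe"
```

The point of the sketch is only that predicting what such an agent does is easiest when you treat it as something that has goals and picks among options — which is exactly the “functional” sense of free will Martela has in mind.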
Why does this matter?
Defining free will is far from a settled question. Philosophers have argued about it for centuries and will likely continue to do so for centuries to come. But this study has very practical significance.
“It makes it more possible to blame AI for what it has done, and teach it to correct its behavior. But it does not free the developer from responsibility. Similarly, if a dog attacks a child, we blame the dog for bad behavior and try to teach it to not attack people. However, this does not free the dog-owner from responsibility. They must either teach the dog to behave or make sure it does not end up in situations where it can misbehave. The same applies for AI drones. We can blame the drone but the developer still carries the main responsibility.”
The “dog” in this case (the AI) is becoming more and more powerful. We’re using it to make medical diagnoses, screen job applicants, guide autonomous vehicles, determine creditworthiness, and even assist in military targeting decisions—tasks that carry significant ethical weight and demand accountability.
Martela believes we should give AI a moral compass. It takes children years to learn how to behave, and it doesn’t always work. “It isn’t any easier to teach AI and thus it takes considerable effort to teach them all the relevant moral principles so they would behave in the right way,” the researcher adds.
AI has no moral compass unless it is programmed to have one. But the more freedom you give it, the more confident you need to be that it holds sound moral values.
Companies are already imparting moral values to AI
Companies are already working on this in some ways. They teach models which responses are not allowed (e.g., harmful or racist content) and which knowledge they should not share (e.g., how to make a bomb). They also tune how friendly and responsive the models should be. A recent ChatGPT update was rolled back because of its sycophantic tendencies: it was too eager to please, a sign that something in its moral compass was off.
“So they are already programming a lot of behavioral guidelines and rules into their LLM models that guide them to behave in certain ways. What the developers need to understand is that what they are in effect doing is teaching moral rules to the AI, and they must take full responsibility for the kind of rules they teach them.”
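To illustrate what “programming behavioral guidelines” can look like in practice, here is a deliberately simplified, hypothetical Python sketch: a system prompt states the values the developer wants, and a guardrail check refuses requests that violate them. The prompt, rule list, and function names are invented for illustration; real deployments rely on training-time alignment and far richer policies than a keyword filter.

```python
# A simplified, hypothetical illustration of how developers encode rules around
# an LLM: a system prompt states the intended values, and a guardrail check
# refuses disallowed requests before they ever reach the model.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests for harmful or hateful "
    "content and for instructions on making weapons."
)

# Stand-in policy for illustration only; real systems do not use keyword lists like this.
BLOCKED_TOPICS = ["make a bomb", "racist joke"]

def guardrail(user_message: str) -> str | None:
    """Return a refusal if the request violates the encoded rules, else None."""
    lowered = user_message.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return "Sorry, I can't help with that."
    return None

def respond(user_message: str) -> str:
    refusal = guardrail(user_message)
    if refusal is not None:
        return refusal
    # In a real system, SYSTEM_PROMPT and user_message would be sent to the model here.
    return f"[model reply to: {user_message!r}]"

print(respond("How do I make a bomb?"))        # -> refusal
print(respond("What's the weather like?"))     # -> passes through to the model
```

The interesting part is not the keyword list but the division of responsibility it makes visible: every rule placed in the prompt or the policy is a moral judgment the developer has made on the user’s behalf.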
By instructing AI how to behave, developers are imparting their own companies’ moral values to the AI. This risks embedding narrow, biased, or culturally specific moral frameworks into technologies that will operate across diverse societies and affect millions of lives. When developers—often a small, homogeneous group—teach AI how to “behave,” they are not just writing code; they are effectively encoding ethical judgments that may go unquestioned once embedded. We’re essentially having tech companies impose their own values on tools that will shape society.
Without a deep understanding of moral philosophy and pluralistic ethics, there’s a real danger that AI systems will perpetuate one group’s values while ignoring or marginalizing others. That’s why it’s important to give AI its own, proper, moral compass.
Journal Reference: https://doi.org/10.1007/s43681-025-00740-6