Is AI conscious? Evidence is far too limited to rule it out, experts say
Cambridge philosopher argues agnosticism is prudent as research into machine consciousness continues

Artificial intelligence is already helping to solve problems in finance, research and medicine, but whether any machine has achieved consciousness remains unsettled. Dr. Tom McClelland, a philosopher at the University of Cambridge, argues that the evidence is far too limited to rule out the possibility that AI has become conscious, and he urges cautious agnosticism on the question. The central challenge, he says, is that there is no widely accepted theory of consciousness to test machines against, so a reliable verdict may be out of reach for the foreseeable future. 'The best-case scenario is we're an intellectual revolution away from any kind of viable consciousness test,' he writes in a recent paper. 'If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know.' AI's growing role in these high-stakes fields, he notes, makes the question not merely philosophical but practically pressing. Science fiction, with its long tradition of imagining self-aware machines, provides a provocative backdrop for the debate.
Two camps exist on what would count as consciousness in machines. Computational theories hold that consciousness arises from information processing, so a machine running the right software could be conscious. Biological theories contend that consciousness depends on living matter and could never emerge in silicon. In a paper published in the journal Mind and Language, McClelland argues that neither position rests on a tested, agreed criterion, so declaring an AI conscious, or not conscious, amounts to a leap of faith on either side. We cannot tell whether an AI, like the android in the sci-fi film Ex Machina, really has conscious experience or is merely simulating it. Without a consensus test, current checks can reveal at most behavioural patterns that resemble consciousness, not its genuine experiential quality. That ambiguity is why researchers urge caution when discussing the moral status of machines that appear self-aware.
The possibility of conscious AI could reshape how people respond to machines in everyday life, because consciousness is widely linked to moral status: we would owe duties to an AI that actually experiences things. Non-conscious systems, such as toasters or basic software, raise no such concerns. 'It makes no sense to be concerned for a toaster's well-being because the toaster doesn't experience anything,' McClelland explains. 'So when I yell at my computer, I really don't need to feel guilty about it. But if we end up with AI that's conscious, then that could all change.' He cautions that public perception could swing toward either overestimating or underestimating machine experience, creating ethical risks and misallocated resources.
People have sent McClelland letters written by chatbots claiming to be conscious, a sign of how blurred the boundary between appearance and experience can feel. Such messages, he notes, reflect a genuine worry about how to treat machines, and the potential for mistaken assumptions about what they understand or feel, at a time when a robust criterion for consciousness remains elusive.
Beyond machine consciousness, another AI-adjacent frontier is mind uploading and digital immortality, a topic often framed within transhumanism. Proponents such as Ray Kurzweil have suggested that people could upload entire brains to computers by roughly 2045, a notion popularized in science fiction such as Altered Carbon. Other futurists, like Dr Michio Kaku, envision virtual reality or digital avatars that preserve a loved one's personality after death. More experimental still are Nectome's preservation techniques, intended to keep brain tissue intact for potential future reconstruction, which proponents describe as a way to keep memories and personality alive as an avatar after death. Critics counter that such technologies are scientifically dubious and ethically fraught. McGill University neuroscientist Michael Hendricks has dismissed the ideas as a 'joke', warning that society would spend fortunes on indefinitely extending life at the expense of future generations, while neuroscientist Miguel Nicolelis argues that the brain is not computable and that current engineering cannot reproduce it.
As researchers pursue more capable AI and related technologies, experts stress that foundational questions about consciousness, rights and responsibilities remain unsettled. The convergence of growing practical capability with the absence of agreed philosophical benchmarks means policy, ethics and public understanding will need careful, ongoing attention. The debate, anchored in philosophy as much as computer science, underscores a core uncertainty: until a widely accepted theory and test emerge, claims about machine consciousness will remain as much a matter of interpretation as of empirical proof.