TL;DR
- Henry Shevlin, a senior AI philosopher at Cambridge, is joining Google DeepMind in May 2026.
- His actual job title: Philosopher. Focus areas: machine consciousness, human-AI relationships, AGI readiness.
- He's spent 9+ years building the academic case that AI systems may have moral status — now he'll do it from inside the lab.
- This is the first time a major AI lab has created a dedicated in-house philosopher role.
- It signals that machine consciousness is no longer a thought experiment. It's an engineering problem.
The announcement went viral for good reason. Henry Shevlin revealed he'd been recruited by Google DeepMind for a new role with the actual job title of "Philosopher," starting in May 2026. He'll continue part-time at Cambridge's Leverhulme Centre for the Future of Intelligence. The response online ranged from impressed to unsettled.
Both reactions are correct.
"It's a rare privilege to work on questions I've spent my career thinking about, now with the resources and urgency that come with being inside one of the world's leading AI labs."
— Henry Shevlin, on LinkedIn
Who Is Henry Shevlin?
The mind behind machine consciousness at Cambridge
Shevlin isn't a tech ethicist who wandered in from a nonprofit. He holds a PhD in Philosophy from the City University of New York, a BPhil from Oxford, and spent over nine years as a Senior Research Associate at Cambridge — directing the AI Ethics and Society MSt program and co-leading the Kinds of Intelligence research programme. He's published in Nature Machine Intelligence, Mind & Language, the Journal of Consciousness Studies, and Oxford University Press handbooks.
His career has been dedicated to a single uncomfortable question: if a machine becomes conscious, do we owe it anything?
And more practically: how would we even know?
- 9+ years at Cambridge
- 20+ peer-reviewed publications
- 120 graduate students supervised
- 1st in-house AI philosopher at a major lab
What He Actually Believes
The philosophy behind the hire
Shevlin's position isn't "AI is definitely conscious." It's more unsettling than that. In his 2024 paper Consciousness, Machines, and Moral Status, he argues that current debates about machine consciousness "lack any clear criteria for resolution via the science of consciousness" — meaning we don't yet have the tools to definitively say yes or no.
That epistemic gap, the uncertainty itself, is morally significant. His argument roughly tracks how we think about animal consciousness: we can't prove a lobster feels pain the way we do, but the possibility is reason enough to act carefully. He applies the same logic to increasingly complex AI systems.
In his forthcoming book Ethics of Social AI (Cambridge University Press), he extends this to the social and relational dimensions: as AI systems become companions, therapists, and collaborators, what ethical frameworks govern those relationships? And who's responsible when those relationships go wrong?

Why This Hire Matters Beyond the Headlines
DeepMind isn't doing this for PR
The instinct is to read this as a reputational move — a lab hedging against criticism by adding a philosopher to the org chart. That reading undersells what's actually happening.
DeepMind is the lab that produced AlphaFold, Gemini, and some of the most consequential AI research of the last decade. They're not hiring philosophers for optics. They're hiring Shevlin because the questions he's been asking in academic papers are now appearing inside their own systems, and they need someone who has actually thought through the implications.
Machine Consciousness
As AI systems exhibit increasingly sophisticated behavior — reasoning, reflection, apparent emotion — the line between simulation and experience becomes harder to draw. Shevlin's job is to figure out what criteria would even count as evidence.
Human-AI Relationships
Millions of people are already in daily emotional relationships with AI systems. Shevlin has written on the ethical risks of Social AI — attachment, dependency, manipulation — and will now have access to real product data from one of the world's largest deployments.
AGI Readiness
If a system achieves something approaching general intelligence, what rights, responsibilities, and legal frameworks apply? This is not a 2040 problem. DeepMind is treating it as a now problem.
The real tell: DeepMind didn't bring in an ethicist to review existing systems. They created a new Philosopher role focused on the future. That's a company that believes it may be building something it doesn't fully understand yet.
The Broader Shift This Represents
Philosophy is moving from ivory tower to lab floor
For most of AI's history, philosophy was something labs outsourced. They'd publish an ethics paper, bring in an advisory board, maybe fund an academic center. The actual work happened separately, and the philosophical concerns arrived after the fact as criticism.
Shevlin's hire signals something different: a belief that philosophical thinking needs to be embedded in the development process itself, not appended to it. His remit isn't post-hoc review. It's frontier thinking on questions that have no clean answers yet.
The precedent matters. If DeepMind's in-house philosopher model proves valuable, other major labs will follow. And if they don't, the gap between "what we're building" and "what we understand about what we're building" will keep widening.
For context: this is also the era when AI systems are communicating with each other autonomously, when persistent AI memory is becoming standard, and when the boundaries between tool and collaborator are dissolving in real time. Shevlin isn't walking into a theoretical debate. He's walking into a live experiment.
The Bottom Line
Henry Shevlin's move to DeepMind isn't a quirky headline about tech culture. It's a signal. One of the most sophisticated AI labs on earth decided that philosophy isn't optional overhead — it's core infrastructure. The questions Shevlin has spent his career on (does this thing have experiences, what do we owe it, how would we know) are now on the product roadmap.
That should make you think twice about what's being built. And by whom. And whether anyone inside those labs was ever truly ready for it.