Microsoft’s AI Chief: Only Biological Beings Can Be Conscious
Mustafa Suleyman, head of Microsoft’s AI division and co-founder of DeepMind, has reignited one of the field’s most profound debates — declaring that consciousness is exclusively a biological phenomenon and warning researchers to abandon efforts to create “sentient” artificial intelligence.
“If you ask the wrong question, you get the wrong answer”
Speaking at the AfroTech Conference in Houston, Suleyman stated bluntly that attempts to design AI systems capable of genuine self-awareness or emotional experience are misguided.
“I don’t think people should be doing that kind of work,” he said. “If you ask the wrong question, you get the wrong answer — and this is absolutely that case.”
His remarks drew applause from ethicists but sparked criticism among technologists who view synthetic consciousness as a natural next step in the evolution of machine learning.
A stand against “sentient AI” narratives
Suleyman argued that the idea of digital systems possessing consciousness or suffering is a philosophical error with dangerous social consequences. He emphasized that equating large language models with minds confuses simulation with awareness.
“Just because a model can describe pain doesn’t mean it feels pain,” he said, urging scientists to focus on safety, interpretability, and human-aligned AI rather than metaphysical speculation.
Microsoft’s evolving ethical stance
Under Suleyman’s leadership, Microsoft’s AI division has adopted stricter ethical and governance frameworks amid global scrutiny of the company’s partnership with OpenAI. Insiders say his approach balances innovation with pragmatic caution, prioritizing models that augment human reasoning rather than replace it.
His statement reflects a broader cultural shift in Big Tech: from the pursuit of “digital minds” to the engineering of reliable cognitive tools. The company’s internal research increasingly emphasizes grounding AI systems in neuroscience-inspired architectures while maintaining clear conceptual boundaries between computation and consciousness.
Reactions from the AI community
Prominent researchers welcomed Suleyman’s remarks as a “return to scientific humility.” Neuroscientist Dr. Anita Rao called his stance “a reminder that consciousness is not code but chemistry.” Others, however, argued that dismissing artificial consciousness outright could stifle valuable interdisciplinary research at the frontier of cognitive science and AI.
Philosopher-engineer Thomas Graham commented: “We may agree that today’s AI isn’t conscious, but asserting it can never be closes a door we don’t yet understand.”
Conclusion
Suleyman’s declaration reinforces Microsoft’s pragmatic view of artificial intelligence — powerful, transformative, yet fundamentally mechanical. In an era when marketing narratives often blur the line between algorithm and awareness, his message is clear: AI can think, but it cannot feel.
Whether the scientific community accepts that boundary or keeps pushing against it remains an open — and deeply human — question.
Editorial Team — CoinBotLab