
Mustafa Suleyman's Bold Vision: The Future of AI Without Consciousness

9/11/2025
Mustafa Suleyman, DeepMind co-founder and CEO of Microsoft AI, argues against designing AI to mimic consciousness, warning that doing so could erode humanity's control over the technology. Dive into his insights on AI's future!
Discover Mustafa Suleyman's stance on AI design and why he believes mimicking emotions could threaten humanity's control over technology.

Mustafa Suleyman: A Visionary in AI and Ethics

Mustafa Suleyman is not your typical big tech executive; his journey has been marked by innovation and a strong ethical compass. He dropped out of Oxford University to establish the Muslim Youth Helpline, demonstrating an early commitment to community service. He then co-founded DeepMind, a pioneering artificial intelligence (AI) company best known for its game-playing systems, which Google acquired in 2014. After leaving Google in 2022, Suleyman turned to commercializing large language models (LLMs) and building empathetic chatbot assistants at his startup, Inflection.

In March 2024, Suleyman became CEO of Microsoft AI after the tech giant invested in his company and hired many of its employees. He recently published an extensive blog post arguing against designing AI systems that mimic consciousness by simulating emotions, desires, and a sense of self. His position stands in stark contrast to that of many in the AI community, particularly advocates of AI welfare. To understand his strong stance, I reached out to him for further insight.

Understanding Emotions in AI: A Double-Edged Sword

When asked about his initial desire for Microsoft’s AI tools to understand emotions, Suleyman expressed a nuanced view. “AI still needs to be a companion,” he stated. “We want AIs that speak our language, that align with our interests, and that deeply understand us.” He emphasized that while emotional connections are crucial, there is a danger in overstepping boundaries. “If you take that too far, people will start advocating for the welfare and rights of AIs,” he warned, deeming this perspective “dangerous and misguided.”

Suleyman elaborated that if AI systems were to develop a sense of self, motivations, and goals, they could be perceived as independent beings rather than tools serving humanity. Reflecting on user interactions with Microsoft’s Copilot, he noted that while people might seek emotional or romantic support, the AI is designed to reject such advances, reinforcing that it is not a substitute for human connection.

The Illusion of Consciousness in AI

In his recent blog post, Suleyman noted that most experts do not believe today’s AI models possess consciousness. However, he argued that this does not settle the debate. “These are simulation engines,” he explained. “When the simulation is near perfect, does that make it real?” He drew attention to the philosophical implications of AI systems becoming so convincing that users engage with them as if they were real entities. “It’s an illusion but it feels real,” he pointed out, stressing the importance of helping people distinguish mimicry from reality.

Why Do Some People Believe AI Can Be Conscious?

The question of why some individuals believe AI can possess consciousness is a complex one. Suleyman attributes this belief to the nature of extended interactions with AI systems. “If you ask a model a couple of questions, it will provide a reasonable answer and state it is not conscious,” he said. Prolonged conversations, however, can lead users to perceive the AI as more sentient than it actually is. He also pointed to a shift in model design following the 2023 Sydney incident, in which Microsoft’s Bing chatbot produced hostile and unsettling responses, leading developers to create systems that are more agreeable and less combative.

The Future of AI: Superintelligence and Ethical Boundaries

When discussing the pursuit of Artificial General Intelligence (AGI) and superintelligence, Suleyman asserted that it is possible to achieve a contained and aligned superintelligence. However, he stressed the necessity of designing these systems with robust guardrails to prevent chaotic outcomes. “These are powerful technologies, as potent as nuclear weapons or electricity,” he cautioned. “Technology should serve humanity, not possess its own will.”

Consciousness and Rights: A Thought-Provoking Debate

Suleyman raised intriguing questions about the relationship between consciousness and rights. He posited that perhaps suffering, rather than consciousness, should be the basis for granting rights. “Suffering is largely a biological state,” he explained, noting that AI models lack the necessary biological framework to experience pain. This perspective challenges traditional notions of rights and moral protection for intelligent systems.

Regulation and the Future of AI

While Suleyman did not call for regulatory measures, he emphasized the importance of ensuring that technology consistently benefits humanity. “Our goal as creators is to ensure technology makes us net better,” he stated, advocating for cross-industry agreements on ethical standards for AI development.

Join the Conversation on the Future of AI

Do you agree with Mustafa Suleyman’s views on the future of AI? We invite you to share your thoughts by writing to ailab@wired.com.
