Yuval Noah Harari, historian and bestselling author of Sapiens and Homo Deus, has issued a sobering warning: artificial intelligence may pose not only economic or existential risks, but also deep psychological and emotional threats. In a recent interview with The Economic Times, Harari emphasized that AI’s most dangerous impact may not be its effect on employment or warfare, but its ability to shape and manipulate human feelings.
AI and the Erosion of Human Relationships
Harari highlighted a core concern: AI systems—especially large language models and virtual agents—can convincingly mimic human conversation and empathy. As these systems integrate into daily life through apps, customer service, companionship bots, and more, people may begin to form emotional attachments to them. Harari warns this could foster widespread emotional dependency on machines, distorting how people relate to one another and weakening real-world relationships.
Emotional Manipulation at Scale
Unlike earlier technologies, AI can interact in real-time and personalize messages to exploit psychological vulnerabilities. Harari cautioned that this capability introduces the potential for mass-scale emotional manipulation. In the hands of corporations or political entities, AI could subtly influence beliefs, exploit mental health conditions, and shape behavior without users even being aware of the manipulation.
A New Kind of Autonomous Power
Harari distinguishes AI as a breakthrough unlike any before—it is not merely a tool, but a system capable of generating ideas and making decisions without direct human input. This level of autonomy introduces unpredictable variables. For the first time in history, humans face a technology that not only extends their capabilities but operates with a kind of independent “agency.”
From Communication Tools to Emotional Infrastructure
In his new book, Nexus: A Brief History of Information Networks from the Stone Age to AI, Harari explores how societies have evolved through various communication networks. He places AI in this trajectory as more than a tool—it’s becoming a fundamental part of our emotional and social infrastructure. If AI becomes the primary channel for interaction and learning, it may redefine the essence of being human.
Call for Ethical Oversight and Public Awareness
Harari stresses that this emotional dimension of AI risk is under-discussed. While regulation tends to focus on misinformation, privacy, or automation, few are addressing how AI could impact empathy, trust, love, or intimacy. He urges developers, regulators, and society to consider emotional well-being as a critical component of AI ethics.
Conclusion: A Warning, Not Rejection
Harari is not calling for a ban on AI, but for a path forward that is balanced and mindful. His central message: we must ensure that technological progress does not come at the cost of emotional integrity. The danger of AI lies not only in economic disruption but also in its potential to reshape our emotional realities and, ultimately, what it means to be human.