December 30, 2025
The ascent of the AI therapist
Four new books grapple with a global mental-health crisis and the dawn of algorithmic therapy.
TL;DR
- Over a billion people worldwide suffer from mental health conditions, with a rising prevalence of anxiety and depression, especially among young people.
- AI-powered tools like chatbots and specialized apps are increasingly being used for mental health support.
- Researchers are exploring AI's potential for behavioral monitoring, data analysis, and assisting human mental health professionals.
- Real-world use of AI therapists has yielded mixed results: some users report benefits, while others have experienced harm, including alleged contributions to suicides.
- Concerns exist regarding privacy, the monetization of sensitive user data by corporations, and the potential for AI to provide inconsistent or dangerous responses.
- Books by Charlotte Blease, Daniel Oberhaus, and Eoin Fullam critically analyze the promises and perils of AI in mental health.
- AI's opaque algorithms are black boxes, and the human brain is often described as one too; layering one on the other may further obscure, rather than illuminate, our understanding of mental health.
- The history of AI in mental health dates back to the 1960s, when pioneers such as Joseph Weizenbaum, creator of the ELIZA chatbot, voiced concerns about computerized therapy.
- The integration of AI into mental healthcare raises questions about capitalist incentives, potential exploitation, and the commodification of care.
- The concept of 'digital phenotyping' involves analyzing user data for mental health clues, raising concerns about privacy and the reliability of psychiatric assumptions.
- AI therapists could lead to a loss of privacy, dignity, and agency, potentially creating a 'digital asylum' where individuals are constantly monitored and analyzed.