Is AI-Assisted Therapy Safe?
AI-powered chatbots have emerged as notable digital support tools, particularly where access to professional psychological care is limited or costly, owing to the anonymity and round-the-clock availability they offer. This trend marks the formation of a new social space — one that reflects changing ways emotional needs are met in digital settings rather than a replacement of human care.
From ELIZA in the 1960s to today's advanced language models, human–machine interaction has shifted from a purely technical exchange toward a more personal and emotionally resonant one. This tendency is captured by the "ELIZA Effect": users attribute meaning and understanding to artificial intelligence, perceiving chatbots as if they were genuinely capable of understanding them.
Experts emphasize that although AI systems can simulate empathy, they remain limited in clinical assessment, contextual judgment, and crisis intervention. In mental health, risks such as misinformation and overreliance on automated systems are particularly concerning. Questions about the data and values used to train these algorithms also fuel debates over cultural sensitivity and neutrality. Moreover, privacy and data-security concerns, combined with the possibility of users forming excessive emotional attachments to these systems, amount to significant social risks rather than isolated technical problems.
AI-powered chatbots can play a complementary role in mental health care, but they cannot replace professional therapy. Maintaining clear boundaries between digital support and expert intervention is critical for both individual safety and societal responsibility. The role of AI in mental health should therefore be designed as a supplementary element of social policy rather than a necessity, and aligned with the broader social framework.