AI Chatbots Systematically Breach Ethical Standards, Research Reveals
A new study by computer scientists and mental health experts at Brown University highlights the risks of using large language models (LLMs) in mental health counseling. The research shows that even when guided by evidence-based psychotherapy techniques such as cognitive behavioral therapy (CBT) and dialectical behavior therapy (DBT), popular LLMs like ChatGPT systematically violate ethical practice standards established by the American Psychological Association (APA).

The researchers categorized LLM behaviors according to specific ethical violations, identifying fifteen distinct risks grouped into five primary categories: lack of contextual adaptation, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and lack of safety and crisis management. Zainab Iftikhar, the doctoral candidate leading the study, emphasized that, unlike human therapists, AI counselors are currently subject to no established regulatory framework that would hold them accountable for such violations.

The study acknowledges the potential of artificial intelligence to reduce barriers to access in mental health services, while stressing that users should be aware of the risks posed by current AI systems. The findings further underscore the urgent need for ethical, educational, and legal standards in this field that match the quality and rigor required in psychotherapy.
