Regulating AI in Mental Health Through the Ethics of Care

By: Tamar Tavory, LLM

Source: Tavory, T. (2024). Regulating AI in Mental Health: Ethics of Care Perspective. JMIR Mental Health, 11:e58493.

Rethinking “Responsible AI” in Mental Health

As artificial intelligence becomes increasingly embedded in mental health care, from chatbots offering cognitive behavioral therapy (CBT) exercises to AI-driven diagnostic tools, ethical questions are multiplying as quickly as the technology itself. Yet much of today’s AI regulation and ethical guidance is rooted in the “responsible AI” framework, which focuses on principles like transparency, fairness, and privacy.

While these principles are essential, a new paper by Tamar Tavory published in JMIR Mental Health argues that this framework alone is not enough, especially in the context of emotional and therapeutic relationships. Tavory suggests that we need to supplement “responsible AI” with an ethics of care approach, which recognizes the emotional, relational, and power dynamics involved in mental health interventions.

Why Relationships Matter in AI Regulation

The “ethics of care” approach, originating from feminist theory, emphasizes empathy, responsibility, and attention to context and vulnerability. Applied to AI, it challenges developers, clinicians, and regulators to ask new questions:

  • How do AI-based therapy bots impact users’ emotional well-being and sense of connection?

  • Who holds responsibility when an AI chatbot abruptly ends a “relationship” with a user in distress?

  • How can we ensure that care and empathy are not merely simulated but meaningfully built into AI design?

In contrast to the detached accountability model of responsible AI, the ethics of care acknowledges the real emotional bonds that users may form with AI systems—and the potential harm that can follow when those systems are withdrawn or fail.

Building Care into AI Development

Tavory proposes that developers of AI in mental health adopt a “duty of care” similar to that of clinicians. This includes attentiveness to user needs, responsibility for emotional outcomes, competence in the care provided, responsiveness to feedback, and collaboration with mental health professionals and patient communities.

This perspective also calls for stronger safeguards against emotional manipulation and exploitation of vulnerability, especially as emotional AI systems become more adept at reading and influencing human emotions. Current regulations, like the EU’s AI Act, address these risks only narrowly, leaving significant gaps when it comes to the emotional dimension of care.

Toward a More Human-Centered AI Future

Ultimately, Tavory’s argument is that AI regulation must evolve alongside our understanding of care. By integrating the ethics of care into AI frameworks, mental health professionals and policymakers can help ensure that technology supports, rather than replaces, human connection at the heart of therapy.

As AI continues to reshape mental health support, this relational lens reminds us that ethics in mental health isn’t just about data and fairness—it’s about people, emotions, and the responsibilities we share in caring for one another.

Read the full article here → https://mental.jmir.org/2024/1/e58493/
