California’s New AI Rules for Therapy: What Mental Health Providers Need to Know
Artificial Intelligence (AI) is no longer futuristic—it’s already shaping how mental health care is delivered. From intake chatbots to progress-tracking tools and virtual assistants, AI is becoming part of the therapy experience. With this shift comes an urgent question: how should AI in therapy be regulated to protect clients and providers alike?
California is leading the way with two new bills—Senate Bill (SB) 579 and Assembly Bill (AB) 489—that directly address AI in mental health care. Together, they represent the first targeted attempt in the United States to create rules around how AI can and cannot be used in therapy.
Senate Bill 579: A Working Group on AI in Mental Health
SB 579 creates a state-level working group dedicated to evaluating AI in mental health care. This group will include mental health professionals, AI experts, patient advocates, legal specialists, and policymakers.
Key features of SB 579:
Launch Date: The working group must be formed by July 1, 2026.
Scope: Review how AI is being used in therapy—including diagnostic tools, therapeutic chatbots, and predictive systems.
Public Engagement: At least three public meetings are required to ensure transparency and stakeholder input.
Reporting Timeline:
Initial report due by July 1, 2028.
Final follow-up report due by January 1, 2030.
Sunset Clause: The bill's provisions are repealed on July 1, 2031, unless extended.
This is a clear sign that California intends to study, regulate, and guide AI in therapy proactively, rather than reacting after problems arise.
Assembly Bill 489: Preventing AI Misrepresentation in Healthcare
AB 489 takes a different approach: it addresses trust and transparency. The bill prohibits AI and generative AI systems from falsely presenting themselves as licensed healthcare providers.
Key details:
AI tools cannot use professional titles such as “M.D.” or “psychologist” unless the tool operates under the authority of an actual licensed provider.
Misuse or misrepresentation would be considered a separate violation for each incident, enforceable by the state’s professional licensing boards.
The goal is to protect patients from being misled into thinking an AI system is a human professional, especially in sensitive contexts like therapy.
While this doesn’t yet require therapists to disclose AI use in treatment planning, it sets a precedent for clear labeling and accountability.
Policy Corner: What May Come Next
The California Telehealth Resource Center (CalTRC) highlights that these bills are only the beginning. Potential future requirements under discussion include:
Disclosure obligations for therapists when AI is used in care.
Ethical frameworks to define when AI use is appropriate in therapy.
Training requirements to ensure providers understand AI limitations.
Reporting obligations when AI contributes to treatment recommendations.
These developments suggest California is moving toward a comprehensive regulatory framework for therapeutic AI.
Why This Matters for Mental Health Providers
These bills are the first attempt to legislate AI in therapy directly, and they carry important implications for mental health professionals.
Transparency and trust. Even before formal disclosure rules arrive, patients will expect to know whether AI plays a role in their care.
Accountability. Misrepresentation of AI (intentionally or unintentionally) could lead to regulatory or legal consequences.
Shaping the future. Providers who engage with these changes early can help shape ethical standards and best practices for the field.
What This Means for Clinical Practice
Even if you are not currently using AI tools in your practice, regulation is coming. Here are practical steps you can take now:
Audit Your Tools
Make a list of any digital systems in use: progress-tracking apps, intake forms, chatbots, or treatment planning software. Identify where AI or machine learning may be involved.
Plan for Transparency
Even if not required yet, start drafting simple disclosure language for clients. Example:
“Our practice uses digital tools that may incorporate artificial intelligence to support treatment planning. All care decisions are made by your licensed provider.”
Update Consent Forms
Consider adding a section to your intake paperwork about technology and AI use. This gets ahead of possible disclosure requirements.
Train Your Team
Ensure everyone in your practice understands how AI tools are being used, their benefits, and their limitations. This avoids accidental misrepresentation.
Monitor Legislation
SB 579 and AB 489 are just the start. Staying informed will help you stay compliant and position your practice as a leader in ethical AI adoption.
Stay tuned—we’ll keep updating this blog as new AI regulations for therapy develop in California. To explore practical tools that can help you use AI responsibly in your practice, join our newsletter.