California’s AB 489: A Landmark Law for AI Transparency in Healthcare
California has taken a major step in shaping the future of ethical AI in healthcare. On October 11, 2025, Governor Gavin Newsom signed Assembly Bill 489 (AB 489), officially titled the Health Advice From Artificial Intelligence Act. This new legislation is one of the first in the nation to directly address how artificial intelligence (AI) can represent itself in medical and mental health contexts.
What the Law Says
AB 489 makes it illegal for AI systems to use or imply professional healthcare titles—such as “doctor,” “therapist,” or “psychologist”—in their names, advertising, or functionality, unless those services are directly overseen by a licensed human provider. Each instance of misuse counts as a separate violation and falls under the jurisdiction of the relevant California licensing board.
The law builds on earlier rules requiring that any AI-generated communication with patients (written or verbal) include a clear disclaimer stating that it was created by AI. These messages must also provide an easy way for patients to contact a human healthcare professional if they have questions or concerns.
Why It Matters for Mental Health Providers
For clinicians and therapists, this law isn’t just about compliance—it’s about protecting trust. AI is increasingly being used in therapy-related tools, from progress note generators to chatbots that simulate conversation. AB 489 draws a firm line: while these tools can assist in care, they cannot impersonate or replace a licensed human professional.
In the mental health field, where relational trust is central to healing, transparency is critical. Clients deserve to know when they’re interacting with technology versus a clinician. Misrepresentation—intentional or accidental—can undermine that trust and create legal risk for practices using AI.
A Step Toward Ethical AI Use
AB 489 sends a clear message: AI can assist, but it cannot pretend.
It reinforces the ethical principle that human oversight and accountability must remain at the core of all healthcare interactions. For organizations adopting AI tools, this is an opportunity to review how AI is being used in patient communication, documentation, and client-facing materials.
Even simple steps—like adding disclaimers to AI-generated summaries or confirming that your software partners comply with new state standards—can help ensure both ethical and legal alignment.
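For practices whose vendors or in-house tools allow it, the disclosure piece can be as simple as appending standard language to every AI-generated message. The sketch below is a minimal, hypothetical illustration, not legal language or any vendor's actual API: the function name, disclaimer wording, and contact instructions are assumptions you would replace with text reviewed by your own compliance counsel.

```python
# Hypothetical sketch: appending an AI disclosure and a human-contact route
# to an AI-generated patient message. Wording is illustrative only.

AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "If you have questions or concerns, you can reach a licensed clinician "
    "through the contact information in your patient portal."
)

def add_ai_disclosure(ai_generated_text: str) -> str:
    """Return the AI-generated text with a plain-language disclosure appended."""
    return f"{ai_generated_text}\n\n{AI_DISCLAIMER}"

if __name__ == "__main__":
    draft = "Your intake questionnaire has been received and summarized for your clinician."
    print(add_ai_disclosure(draft))
```

However your tools handle it, the key is that the disclosure appears on every AI-generated patient communication and that clients are given a clear path to a human provider.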
As more states consider similar legislation, California’s AB 489 may serve as a national model for protecting patients while promoting responsible AI innovation in healthcare.

