State laws regulating AI in health care are beginning to form a distinct regulatory category focused on patient safety, professional integrity, and preventing deceptive or unsafe uses of AI systems in clinical or quasi‑clinical contexts. California, Virginia, and Georgia each approach the issue differently, but together they illustrate how states are drawing boundaries around AI’s role in diagnosis, treatment, and patient interaction.
California – AB 489 (2025–2026 Session)
California’s AB 489 is one of the first U.S. laws to directly regulate how AI systems may present themselves in health‑care settings. Enacted in 2025 and codified as Chapter 615, Statutes of 2025, the law targets misleading representations by AI tools that could lead patients to believe they are interacting with a licensed clinician.
Core requirements
- Prohibits AI systems from using titles, letters, icons, or design elements that imply the system is a licensed health‑care professional.
- Targets AI tools used in patient engagement, triage, symptom checking, or health‑related marketing, contexts in which users may mistake the system for a clinician.
- Focuses on preventing the unauthorized practice of medicine by clarifying that AI cannot present itself as capable of providing medical care.
- Applies to any AI interface that simulates clinical interaction or uses professional terminology in a way that could mislead patients.
Policy rationale
California lawmakers expressed concern that rapidly advancing AI—especially large language model–based tools—could blur the line between clinical advice and automated guidance, exposing patients to unsafe or unregulated care.
Virginia – Va. Code § 32.1‑127 (Hospital and Health‑Care Facility Standards)
Virginia’s statute is not an AI‑specific law but has become increasingly relevant as hospitals deploy AI‑driven diagnostic and decision‑support tools. Va. Code § 32.1‑127 authorizes the state to set standards for hospitals and health‑care facilities, including requirements related to patient safety, quality of care, and clinical governance.
How it applies to AI
- Requires facilities to maintain systems that ensure safe and appropriate patient care, which now includes oversight of AI‑based tools used in diagnosis, treatment planning, or clinical workflow.
- Supports regulatory expectations that hospitals must evaluate accuracy, reliability, and risk when integrating AI into clinical decision‑making.
- Provides authority for the state to require policies, training, and safeguards around emerging technologies used in patient care.
Practical effect
While not written for AI, the statute functions as a governance framework that obligates hospitals to manage AI tools with the same rigor as other clinical technologies.
Georgia – HB 203 (AI in Health‑Care Regulation)
Georgia’s HB 203 is part of a growing wave of state bills aimed at clarifying how AI may be used in health‑care settings. Narrower in scope than California’s AB 489, it focuses on transparency and professional boundaries.
Core features
- Requires clear disclosure when patients interact with AI systems rather than licensed clinicians.
- Prohibits AI tools from misrepresenting themselves as medical professionals.
- Establishes baseline expectations for accuracy, safety, and oversight when AI is used in patient‑facing contexts.
- Reinforces that AI cannot perform acts that constitute the practice of medicine without appropriate licensure.
Policy rationale
Georgia’s approach reflects concerns about patient confusion, misdiagnosis, and unregulated clinical advice, concerns that grow more acute as AI tools become more conversational and emotionally responsive.
Cross‑state themes
Across these three states, several regulatory patterns are emerging:
- Transparency — Patients must know when they are interacting with AI rather than a clinician.
- Professional integrity — AI cannot imply licensure or provide services reserved for medical professionals.
- Patient safety — States expect health‑care entities to evaluate and monitor AI tools for accuracy and risk.
- Governance — Hospitals and health‑care organizations must implement policies and oversight structures for AI use.
These laws collectively signal a shift toward AI‑specific guardrails in health care, with more states expected to follow as clinical AI becomes more capable and more widely deployed.
