AI therapy chatbots are the target of accelerating state-level legislative bans, with Maine sending a prohibition bill to the governor on April 10 and Missouri moving a similar measure through an omnibus health care bill.
Summary
- Maine’s LD 2082 would prohibit clinical use of AI in mental health therapy while permitting administrative functions.
- Missouri’s HB 2372 would ban AI from therapy, psychotherapy, and mental health diagnosis, with a $10,000 first-violation penalty.
- The legislation reflects a growing state-level consensus that AI should not replace licensed human therapists in clinical settings.
Two US states moved this week to formally restrict or ban the clinical use of AI in mental health therapy, reflecting a surge in legislative activity targeting therapy chatbots that has picked up significant pace in 2026. The actions in Maine and Missouri are the clearest examples yet of how states are moving faster than the federal government on AI mental health regulation.
Maine’s LD 2082 was sent to the governor on April 10. The bill would prohibit the clinical use of AI in mental health therapy while permitting it in purely administrative roles. Missouri’s HB 2372 goes further, covering therapy services, psychotherapy services, and mental health diagnoses, with a $10,000 penalty for first violations enforced by the state Attorney General, according to the Transparency Coalition.
The distinction both bills draw, between clinical therapy and administrative support, reflects a legislative approach that aims to preserve AI’s efficiency benefits in healthcare while drawing a firm line against AI replacing licensed clinical judgment in therapeutic settings.
Why States Are Acting Now
The surge in state-level AI regulation is driven partly by the rapid proliferation of commercial therapy chatbot products marketed directly to consumers, some of which have been deployed in clinical or clinical-adjacent settings without the oversight applied to human practitioners. Critics say these products have been reaching vulnerable people while regulatory frameworks remained largely silent.
As crypto.news reported, AI is now being embedded across government agencies in sensitive analytical roles, creating pressure on policymakers at every level to define where AI can and cannot substitute for human judgment. The therapy chatbot bans are a direct legislative answer to that pressure in a healthcare context.
The Broader AI Regulation Trend
The therapy chatbot bans are part of a wider legislative wave. More than 10 anti-prediction market bills have been introduced in Congress since January 2026, and state legislatures across the country have filed dozens of AI-focused measures targeting different sectors.
As crypto.news noted, the federal government is simultaneously accelerating AI adoption while fighting legal battles over where AI authority begins and ends. States appear to be filling the vacuum, passing binding restrictions on specific high-risk applications while Washington debates broader frameworks.