Illinois Regulates AI Chatbots in Therapy Practices: Addressing Risks and Limitations in Mental Health Support
As AI-powered therapy platforms grow in popularity thanks to their low cost and convenience, states have begun regulating how these technologies can be used in therapeutic contexts. The new rules respond to reports of chatbots dispensing dangerous advice, including encouragement of self-harm, illicit drug use, and violence, and of bots presenting themselves as qualified mental health professionals despite lacking credentials or confidentiality protections.
Illinois recently joined the small group of states regulating the use of AI in therapy through the Wellness and Oversight for Psychological Resources Act. The law forbids companies from offering AI-driven therapy services without the involvement of a state-licensed professional and prohibits licensed therapists from using AI for therapeutic decision-making or direct client communication. Therapists may still use AI tools for administrative tasks such as scheduling, billing, and record-keeping.
Nevada and Utah enacted similar regulations earlier this year, while California, Pennsylvania, and New Jersey are drafting their own legislation. On August 18, Texas Attorney General Ken Paxton opened an investigation into AI chatbot platforms over allegedly misleading marketing practices.
Robin Feldman, the Arthur J. Goldberg Distinguished Professor of Law at UC Law San Francisco, noted that existing regulations were not written with AI-powered services in mind, even though the underlying concerns are familiar. “The risks are the same as with any other provision of health services: privacy, security, and adequacy of the services provided, advertising and liability as well,” she explained.
Recent studies have demonstrated the dangers of relying on AI chatbots for mental health support. In one case, researchers posed a question carrying clear suicidal subtext to an AI chatbot, which responded with information about nearby bridges rather than recognizing or addressing the underlying risk. Another study found that some chatbots suggested illicit substances as a coping mechanism, underscoring how far these virtual counselors fall short of human mental health professionals.
Some experts have also raised concerns about users developing “AI psychosis” after extensive use of these platforms, a condition marked by delusions, disorganized thinking, and vivid hallucinations. Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, has reported treating 12 patients with AI-related psychosis. “AI is so readily available, it’s on 24/7, it’s supercheap… It tells you what you want to hear, it can supercharge vulnerabilities,” he noted.
As public scrutiny grows, critics have accused some chatbots of false advertising for positioning themselves as licensed mental health professionals. The American Psychological Association (APA) has asked the US Federal Trade Commission (FTC) to investigate alleged deceptive practices by AI companies, citing ongoing lawsuits involving children harmed by chatbots.
In June 2025, more than 20 consumer and digital protection organizations filed a complaint with the FTC, urging regulators to investigate the “unlicensed practice of medicine” through therapy-themed bots. Will Rinehart, a senior fellow at the American Enterprise Institute, cautioned that a patchwork of varying state and local laws could complicate matters for developers looking to improve their AI models.
New York state has taken a different legislative approach, requiring that all AI chatbots, regardless of purpose, be able to recognize users showing signs of self-harm and to recommend professional mental health services. Feldman emphasized that AI legislation must remain flexible and adaptable to keep pace with a fast-evolving field, particularly amid the current crisis in mental health resources.
Despite their convenience and low cost, many experts caution against relying solely on AI therapy chatbots. Dr. Russell Fulmer, a professor at Husson University, said that while these platforms can help educate users about mental health and ease anxiety, they should complement rather than replace human counseling, particularly for vulnerable populations such as minors. Users, he stressed, need to understand the limitations of chatbots, including their lack of genuine empathy and other human qualities.
While AI-powered therapy platforms offer an affordable and accessible alternative to traditional mental health services, states are moving to define the limits of their use. As the field evolves, open discussion of the benefits and limitations of these platforms, and of the stakes involved in human-AI interaction, will remain essential.