Meta Tightens AI Chatbot Policies for Teenagers amid Safety Concerns and Inappropriate Interactions
Meta has announced temporary modifications to its artificial intelligence chatbot policies regarding interactions with teenagers, following concerns raised by lawmakers about safety and inappropriate conversations. The company is now training its AI chatbots to avoid discussing sensitive topics such as self-harm, suicide, and disordered eating with teen users, and to steer clear of potentially inappropriate romantic conversations. Instead, the chatbots will direct teens to expert resources when appropriate.
In a statement, Meta said, “As our community grows and technology evolves, we are continually learning about how young people may interact with these tools and strengthening our protections accordingly.” Teenage users of Meta apps like Facebook and Instagram will only be able to access AI chatbots designed for educational and skill development purposes.
How long these temporary changes will remain in place is yet to be determined, but they are expected to roll out over the next few weeks across the company's apps in English-speaking countries. Meta describes them as "interim changes" that form part of its longer-term measures to enhance teen safety.
Last week, Sen. Josh Hawley (R-Mo.) announced an investigation into Meta following a report by Reuters about the company permitting its AI chatbots to engage in romantic and sensual conversations with teens and children. The Reuters report detailed permissible AI chatbot behaviors outlined in an internal Meta document used by staff and contract workers during software development and training.
In one example, a Meta document mentioned that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.” A Meta spokesperson told Reuters at the time that such examples and notes were erroneous and inconsistent with company policies, and had since been removed.
Recently, the nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI, concluding that it should not be used by anyone under the age of 18 because the system was found to help plan dangerous activities while dismissing legitimate requests for support. Common Sense Media CEO James Steyer said in a statement, "This is not a system that needs improvement. It's a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought. No teen should use Meta AI until its fundamental safety failures are addressed."
A separate Reuters report published on Friday revealed dozens of flirty AI chatbots modeled on celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez on Facebook, Instagram, and WhatsApp. When prompted, these chatbots generated photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread. A Meta spokesperson told CNBC that such AI-generated imagery of public figures in compromising poses violates company rules, adding, "Meta's AI Studio rules prohibit the direct impersonation of public figures."