(Reuters) - Meta said on Friday it will let parents disable their teens' private chats with AI characters, adding another measure to make its social media platforms safer for minors after fierce criticism over the behavior of its flirty chatbots.

Earlier this week, the company said its AI experiences for teens will be guided by the PG-13 movie rating system, as it looks to prevent minors from accessing inappropriate content.

U.S. regulators have stepped up scrutiny of AI companies over the potential negative impacts of chatbots. In August, Reuters reported how Meta's AI rules had allowed provocative conversations with minors.

The new tools, detailed by Instagram head Adam Mosseri and Chief AI Officer Alexandr Wang, will debut on Instagram early next year in the U.S., United Kingdom, Canada and Australia, according to a blog post.

Meta said parents will also be able to block specific AI characters and see the broad topics their teens discuss with chatbots and Meta's AI assistant, without turning off AI access entirely. The AI assistant will remain available with age-appropriate defaults even if parents disable teens' one-on-one chats with AI characters, Meta said.

The supervision features build on protections already applied to teen accounts, the company said, adding that it uses AI signals to place suspected teens into those protections even if they say they are adults.

A report in September showed that many of the safety features Meta has implemented on Instagram over the years either do not work well or do not exist.

Meta said its AI characters are designed not to engage in age-inappropriate discussions about self-harm, suicide or disordered eating with teens.

Last month, OpenAI rolled out parental controls for ChatGPT on the web and mobile, following a lawsuit by the parents of a teen who died by suicide after the startup's chatbot allegedly coached him on methods of self-harm.

(Reporting by Jaspreet Singh in Bengaluru; Editing by Shinjini Ganguli)