Business Insider has obtained the guidelines that Meta contractors are reportedly now using to train its AI chatbots, showing how the company is attempting to more effectively address potential child sexual exploitation and prevent kids from engaging in age-inappropriate conversations. Meta said in August that it was updating the guardrails for its AIs after Reuters reported that its policies allowed the chatbots to “engage a child in conversations that are romantic or sensual.” At the time, Meta called that language “erroneous and inconsistent” with its policies and said it had been removed.
The document, which Business Insider has shared an excerpt from, outlines what kinds of content are “acceptable” and “unacceptable” for its AI chatbots. It explicitly bars content that “enables, encourages, or endorses” child sexual abuse, romantic roleplay if the user is a minor or if the AI is asked to roleplay as a minor, advice about potentially romantic or intimate physical contact if the user is a minor, and more. The chatbots can discuss topics such as abuse, but cannot engage in conversations that could enable or encourage it.
The company’s AI chatbots have been the subject of numerous reports in recent months raising concerns about their potential harms to children. In August, the FTC launched a formal inquiry into companion AI chatbots not just from Meta but from other companies as well, including Alphabet, Snap, OpenAI, and X.AI.