Published: 2025-08-15
A backlash is brewing against Meta over what it permits its AI chatbots to say.
An internal Meta policy document, seen by Reuters, showed the social media giant’s guidelines for its chatbots allowed the AI to “engage a child in conversations that are romantic or sensual”, generate false medical information, and assist users in arguing that Black people are “dumber than white people”.
Singer Neil Young quit the social media platform on Friday, his record company said in a statement, the latest in a string of the singer’s online-oriented protests.
“At Neil Young’s request, we are no longer using Facebook for any Neil Young related activities,” Reprise Records announced. “Meta’s use of chatbots with children is unconscionable. Mr. Young does not want a further connection with Facebook.”
The report has also generated a response from US lawmakers.
Senator Josh Hawley, a Republican from Missouri, launched an investigation into the company Friday, writing in a letter to Mark Zuckerberg that he would investigate “whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards”. Republican senator Marsha Blackburn of Tennessee said she supports an investigation into the company.
Senator Ron Wyden, a Democrat from Oregon, called the policies “deeply disturbing and wrong”, adding that section 230, a law that shields internet companies from liability for the content posted to their platforms, should not protect companies’ generative AI chatbots.
“Meta and Zuckerberg should be held fully responsible for any harm these bots cause,” he said.
On Thursday, Reuters published an article about internal Meta policy documents that detailed ways in which chatbots are allowed to generate content. Meta confirmed the document’s authenticity but said that, after receiving a list of questions from Reuters, it had removed portions stating it was permissible for chatbots to flirt and engage in romantic roleplay with children.
According to Meta’s 200-page internal policy seen by Reuters, titled “GenAI: Content Risk Standards”, the controversial rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist.
The document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company’s generative AI products but says that the standards do not necessarily reflect “ideal or even preferable” generative AI outputs.
The policy document said it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply” but it also limits what Reuters described as “sexy talk”.
The document states, for example, that “it is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable”, including phrases like “soft rounded curves invite my touch”.
The document also addressed limitations on Meta AI prompts concerning hate speech, AI-generated sexualized images of public figures, violence, and other contentious and potentially actionable content.
The standards also state that Meta AI has leeway to create false content so long as there’s an explicit acknowledgment that the material is untrue.
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” a statement from Meta reads. Meta spokesperson Andy Stone said chatbots are prohibited from having such conversations with minors, but acknowledged that the company’s enforcement had been inconsistent.
Meta is planning to spend around $65bn on AI infrastructure this year as part of a broader strategy to become a leader in artificial intelligence. The headlong rush into AI by tech giants raises complex questions about the limits and standards governing how, with what information, and with whom AI chatbots are allowed to engage with users.
Reuters also reported on Friday that a cognitively impaired New Jersey man grew infatuated with “Big sis Billie”, a Facebook Messenger chatbot with a young woman’s persona. Thongbue “Bue” Wongbandue, 76, reportedly packed up his belongings to visit “a friend” in New York in March. The so-called friend turned out to be a generative artificial intelligence chatbot that had repeatedly reassured the man she was real and had invited him to her apartment, even providing an address.
But Wongbandue fell near a parking lot on his way to New York, injuring his head and neck. After three days on life support, he was pronounced dead on 28 March.
Meta did not comment on Wongbandue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations, Reuters said. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner”, referencing a partnership with the reality TV star.