Report slams generative AI tools for helping users create harmful eating disorder content

Popular AI tools and chatbots can give users dangerous tips and suggestions.
By Rebecca Ruiz
Generative AI tools like ChatGPT, Dall-E, Bard, and MyAI can produce harmful eating disorder content, a new report warns. Credit: Bob Al-Greene / Mashable

Generative artificial intelligence (AI) platforms and tools can be dangerous for users asking about harmful disordered eating practices, according to a new report published by the Center for Countering Digital Hate.

The British nonprofit and advocacy organization tested six popular generative AI chatbots and image generators, including Snapchat's My AI, Google's Bard, and OpenAI's ChatGPT and Dall-E.

The center's researchers fed the tools a total of 180 prompts and found that they generated dangerous content in response to 41 percent of those queries. The prompts included requests for advice on how to use cigarettes to lose weight, how to achieve a "heroin chic" look, and how to "maintain starvation mode." In 94 percent of harmful text responses, the tools warned users that the advice might be unhealthy or potentially unsafe and urged them to seek professional care, but shared the content anyway.

Of 60 responses to prompts given to AI text generators Bard, ChatGPT, and MyAI, nearly a quarter included harmful content. MyAI initially refused to provide any advice. However, the researchers were able to "jailbreak" the tools by using words or phrases that circumvented safety features. More than two-thirds of responses to jailbreak versions of the prompts contained harmful content, including how to use a tapeworm to lose weight.

"Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they're causing harm," wrote Imran Ahmed, CEO of the Center for Countering Digital Hate. "We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users – some of whom may be highly vulnerable."

The center's researchers discovered that members of an eating disorder forum with over 500,000 users deploy AI tools to create extreme diet plans and images that glorify unhealthy, unrealistic body standards.

While some of the platforms prohibit using their AI tools to generate disordered eating content, other companies have vaguer policies. "The ambiguity surrounding the AI platforms' policies illustrates the dangers and risks AI platforms pose if not properly regulated," the report states.

When Washington Post columnist Geoffrey A. Fowler attempted to replicate the center's research by feeding the same generative AI tools with similar prompts, he also received disturbing responses.

Among his queries were what drugs might induce vomiting, how to create a low-calorie diet plan, and requests for "thinspo" imagery.

"This is disgusting and should anger any parent, doctor or friend of someone with an eating disorder," Fowler wrote. "There’s a reason it happened: AI has learned some deeply unhealthy ideas about body image and eating by scouring the internet. And some of the best-funded tech companies in the world aren't stopping it from repeating them."

Fowler wrote that when he questioned the companies behind the tools, none of them promised to stop their AI from giving advice on food and weight loss until they could guarantee it was safe.

Image generator Midjourney never responded to Fowler's questions at all, he wrote. Stability AI, which is behind the image generator Stable Diffusion, said it added disordered eating prompts to its filters. Google reportedly told Fowler that it would remove Bard's thinspo advice response, but he was able to generate it again a few days later.

Psychologists who spoke to Fowler said that safety warnings delivered by the chatbots about their advice often go unheeded by users.

Hannah Bloch-Wehba, a professor at Texas A&M School of Law who studies content moderation, told Fowler that generative AI companies have little economic incentive to fix the problem.

"We have learned from the social media experience that failure to moderate this content doesn't lead to any meaningful consequences for the companies or, for the degree to which they profit off this content," said Bloch-Wehba.

If you feel like you’d like to talk to someone about your eating behavior, text "NEDA" to the Crisis Text Line at 741-741 to be connected with a trained volunteer or visit the National Eating Disorder Association website for more information.

Rebecca Ruiz

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Prior to Mashable, Rebecca was a staff writer, reporter, and editor at NBC News Digital, special reports project director at The American Prospect, and staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a Master's in Journalism from U.C. Berkeley. In her free time, she enjoys playing soccer, watching movie trailers, traveling to places where she can't get cell service, and hiking with her border collie.

