Hugging Face empowers users with deepfake detection tools

Explore the collection of tools to detect AI-generated content.
By Cecily Mauran
Hugging Face is looking out for the creatives. Credit: Collage by We Are / Getty Images

Hugging Face wants to help users fight back against AI deepfakes.

The company, which develops machine learning tools and hosts AI projects, also offers resources for the ethical development of AI. That now includes a collection called "Provenance, Watermarking and Deepfake Detection," featuring tools for embedding watermarks in audio files, LLMs, and images, as well as tools for detecting deepfakes.

The widespread availability of generative AI technology has led to a proliferation of audio, video, and image deepfakes. Not only does the deepfake phenomenon contribute to the spread of misinformation, it also enables plagiarism and copyright infringement of creative works. Deepfakes have become such a threat that President Biden's AI executive order specifically mandated the watermarking of AI-generated content. Google and OpenAI have recently launched tools for embedding watermarks in images created by their generative AI models.

The resources were announced by Margaret Mitchell, researcher and chief ethics scientist at Hugging Face and a former Google employee. Mitchell and others focusing on social impact created a collection of what she called pieces of "state-of-the-art technology" to address "the rise of AI-generated 'fake' human content."

Some of the tools in the collection help photographers and designers protect their work from being used to train AI models. Fawkes, for example, "poisons" publicly available photos so that facial recognition software can't reliably use them. Other tools like WaveMark, Truepic, Photoguard, and Imatag protect against unauthorized use of audio or visual works by embedding watermarks that can be detected by certain software. A specific Photoguard tool in the collection makes an image "immune" to generative AI editing.

Adding watermarks to media created by generative AI is becoming critical for protecting creative works and identifying misleading information, but it's not foolproof. Watermarks embedded in a file's metadata are often stripped automatically when the file is uploaded to a third-party site like a social network, and nefarious users can sidestep watermarks entirely by taking a screenshot of a watermarked image.
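To see why watermarks are fragile, here's a toy sketch (not any of the tools above) of a least-significant-bit watermark, a simple embedding scheme. All names and values are illustrative: a lossy re-encoding, which is roughly what a screenshot or a social media upload pipeline does to an image, wipes the hidden bits out.

```python
# Toy least-significant-bit (LSB) watermark. This is an illustrative sketch,
# not how WaveMark, Truepic, Photoguard, or Imatag actually work: it shows
# why naive pixel-level marks don't survive lossy re-encoding.

def embed(pixels, bits):
    """Overwrite the LSB of each pixel value with one watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Read the watermark back out of the first n pixel LSBs."""
    return [p & 1 for p in pixels[:n]]

watermark = [1, 0, 1, 1, 0, 0, 1, 0]
image = [120, 37, 200, 90, 14, 255, 8, 63]  # made-up grayscale pixels

marked = embed(image, watermark)
print(extract(marked, 8))  # a lossless copy preserves the mark

# Simulate lossy re-encoding: quantize pixels to multiples of 4, a crude
# stand-in for JPEG compression or screen-capture rescaling.
lossy = [(p // 4) * 4 for p in marked]
print(extract(lossy, 8))  # the LSBs are gone; the watermark no longer matches
```

Robust watermarking schemes spread the signal across many pixels or frequency coefficients precisely to survive this kind of degradation, but as the article notes, even those can be defeated.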

Nonetheless, free and available tools like the ones Hugging Face shared are way better than nothing.

Cecily Mauran

Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on Twitter at @cecily_mauran.

