OpenAI will use tamper-resistant watermarking to help users identify deepfakes and AI-generated content

What you need to know

  • OpenAI recently announced plans to develop new tools, including tamper-resistant watermarking, to help users identify content generated with its products.
  • The ChatGPT maker is teaming up with Microsoft to launch a $2 million societal resilience fund to help drive the adoption and understanding of provenance standards.
  • Applications for early access to OpenAI’s image detection classifier are open to a first group of testers through its Researcher Access Program.

As a seasoned observer of the digital landscape, I must admit that the rapid advancement of AI technologies, and collaborative efforts like those between OpenAI and Microsoft, are nothing short of astonishing. Over my years online I’ve seen more than my fair share of deepfakes and misinformation, so any move toward transparency and authenticity in digital content is a welcome development.

As advanced AI tools such as Image Creator by Designer (previously Bing Image Creator), Midjourney, and ChatGPT become more common, it’s becoming increasingly challenging to discern real content from AI-created content. Pioneering tech companies like OpenAI and Microsoft are working hard to develop ways that help users easily identify AI-generated material.

In their latest move, OpenAI has implemented watermarking on images produced by DALL-E 3 and ChatGPT, though the company acknowledges this is not a definitive solution for establishing authenticity. With the U.S. Presidential elections approaching, AI-generated deepfakes and misinformation continue to flood the internet.

Lately, the creators of ChatGPT have emphasized two strategies for tackling the problems that come with widespread access to generative AI. One approach is building new tools, such as tamper-resistant watermarking, that help users distinguish AI-generated content. Additionally, they are incorporating audio watermarking into their Voice Engine to make synthetic speech easier to identify.

The company says it also aims to implement and advance an open standard that lets people verify which tools were used to generate or modify digital content.

More recently, the company behind ChatGPT has joined the Steering Committee of the C2PA, short for Coalition for Content Provenance and Authenticity. The C2PA maintains an open standard for certifying the origin and edit history of digital content, helping users quickly determine whether content is AI-generated.

OpenAI already embeds C2PA metadata into every image generated by DALL-E 3 and ChatGPT, and it plans to add the same feature to Sora, its flagship video-generation tool, upon its public release. Bad actors can still strip the metadata or use AI tools that never add it, but the information itself is difficult to forge or tamper with.
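To give a concrete sense of how such metadata can be inspected: in JPEG files, C2PA manifests are carried in APP11 marker segments as JUMBF boxes. The sketch below is a rough heuristic of my own (the function name is hypothetical, and it is not a full C2PA/JUMBF parser); real verification should rely on official tooling such as the C2PA project's c2patool.

```python
import struct

def find_c2pa_segments(jpeg_bytes: bytes) -> list:
    """Heuristic sketch: collect JPEG APP11 (0xFFEB) payloads
    that mention the "c2pa" label. C2PA manifests travel in
    APP11 segments as JUMBF boxes; this does NOT validate them.
    """
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG"
    found = []
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break                      # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:             # start of scan: entropy-coded data follows
            break
        # segment length is big-endian and includes the two length bytes
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:   # APP11 carrying a C2PA label
            found.append(payload)
        i += 2 + length
    return found
```

A reader could feed this the raw bytes of a downloaded image to see whether any provenance payload survived re-encoding; an empty result only means no label was found, not that the image is authentic.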

With more widespread use of the standard, such data can travel with the content throughout its journey of dissemination, editing, and recycling. In the long run, we anticipate that this type of metadata will become commonplace, addressing a vital void in the trustworthiness of digital content.

Microsoft, in collaboration with OpenAI, plans to establish a $2 million societal resilience fund. The initiative aims to promote adoption and understanding of provenance standards such as C2PA.

Lastly, OpenAI is now accepting applications from an initial pool of testers for early access to its image detection classifier via the Researcher Access Program. The tool predicts the likelihood that a given image was generated by DALL-E 3.


2024-08-05 14:39