ChatGPT developer OpenAI has started to include watermarks on images generated by its DALL·E 3 artificial intelligence model, according to a company update.
OpenAI is using the Coalition for Content Provenance and Authenticity (C2PA) technical standard on images generated via its website and application programming interfaces (APIs). The standard allows publishers, companies and others to embed metadata in media to verify its origin and related information.
The change will roll out to all mobile users by 12 February.
The decision comes after Facebook parent Meta Platforms Inc (NASDAQ:META, ETR:FB2A, SWX:FB) introduced a new flagging system that will label photorealistic AI-generated images with a disclosure that they were “imagined with AI”.
Meta is implementing the flagging system in response to the proliferation of AI-generated images and videos designed to manipulate users.
The issue came to a head recently when fake, compromising images of pop megastar Taylor Swift began spreading across the Twitter/X social media platform.
Earlier this year, more than 100 deepfake videos making false claims about UK Prime Minister Rishi Sunak were uncovered on Facebook.
Meta’s president of global affairs Nick Clegg disclosed that Meta can detect invisible watermarks added by developers on AI-generated content.
However, OpenAI admitted that C2PA “is not a silver bullet to address issues of provenance”.
“It can easily be removed either accidentally or intentionally. For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it.
“Therefore, an image lacking this metadata may or may not have been generated with ChatGPT or our API.”
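The fragility OpenAI describes follows from how image metadata is stored: it lives in sections of the file that are separate from the pixel data, so any tool that re-encodes only the pixels discards it. As a rough illustration only (C2PA does not use plain PNG text chunks; its Content Credentials are a richer, cryptographically signed manifest, and the chunk name and value below are invented for the example), this sketch hand-builds a minimal PNG, tags it with a `tEXt` chunk, and shows that rebuilding the file from the image data alone yields a valid image with the tag gone:

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# A minimal 1x1 8-bit grayscale PNG assembled from scratch.
signature = b"\x89PNG\r\n\x1a\n"
ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
iend = png_chunk(b"IEND", b"")

# Attach a provenance note as an ancillary tEXt chunk -- a toy stand-in
# for C2PA's signed manifest (key and value are made up for this example).
text = png_chunk(b"tEXt", b"provenance\x00generated-by-ai")
tagged = signature + ihdr + text + idat + iend

# Re-encoding only the image data, as many platforms do on upload,
# produces a perfectly valid PNG with the provenance chunk silently gone.
stripped = signature + ihdr + idat + iend

assert b"generated-by-ai" in tagged
assert b"generated-by-ai" not in stripped
```

The same effect explains why a screenshot removes the metadata entirely: the screenshot tool never sees the original file's metadata sections, only the rendered pixels.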
The C2PA standard grew out of the Content Authenticity Initiative, founded by Adobe (NASDAQ:ADBE), the New York Times and Twitter in 2019 to promote an industry standard for provenance metadata.