For years, webmasters have used a robots.txt file to specify which crawlers may access their sites. Adobe aims to establish a similar convention for image content with a tool built into its content credentials system. The initiative is designed to give creators more control over how their images are used, particularly for training AI models.
Content credentials are metadata embedded within media files that identify their authenticity and ownership. The approach follows the work of the Coalition for Content Provenance and Authenticity (C2PA), an industry body that develops an open standard for content provenance. Adobe's newly launched web tool allows creators to attach these credentials to their images, even if the images were not created or edited with Adobe's own software.
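To make the idea concrete, here is a minimal sketch of what a content-credential manifest might look like as structured metadata. The field names loosely follow the C2PA manifest layout (a claim generator plus a list of assertions, with authorship expressed as a schema.org CreativeWork), but this is an illustrative simplification, not a spec-complete implementation; the tool name and author details are invented for the example.

```python
import json

def build_manifest(author_name: str, social_profile: str) -> dict:
    """Build a minimal, illustrative content-credential manifest.

    Simplified C2PA-style layout; real manifests are signed and
    serialized per the C2PA specification, not stored as plain JSON.
    """
    return {
        "claim_generator": "example-app/1.0",  # hypothetical tool name
        "assertions": [
            {
                "label": "stds.schema-org.CreativeWork",
                "data": {
                    "@context": "https://schema.org",
                    "@type": "CreativeWork",
                    "author": [
                        {
                            "@type": "Person",
                            "name": author_name,
                            "identifier": social_profile,
                        }
                    ],
                },
            }
        ],
    }

# Example: credit an image to a creator with a linked social profile.
manifest = build_manifest("Jane Doe", "https://www.linkedin.com/in/janedoe")
print(json.dumps(manifest, indent=2))
```

In the real system, this structure is cryptographically signed and bound to the image file, which is what lets verifiers detect tampering.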
The newly introduced Adobe Content Authenticity App lets users attach their credentials, such as their names and social media accounts, to image files. The app supports bulk uploads, allowing users to attach credentials to as many as 50 JPG or PNG files at once. Through a partnership with LinkedIn, the app ties into that platform's verification program, so the name attached to a credential matches a verified LinkedIn identity. Users can also link their Instagram or X profiles, but those platforms offer no comparable verification integration.
A standout feature of the Adobe Content Authenticity App is the option for users to indicate that their images should not be used for AI model training. While this feature is present in the app and reflected in the image’s metadata, Adobe has yet to formalize any agreements with AI model developers to adopt this standard. The company is currently in discussions with leading AI firms to encourage adherence to this initiative.
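The "do not train" preference lives in the image's metadata as a machine-readable assertion. The sketch below shows roughly how such a flag could be represented and checked; the `c2pa.training-mining` label and its entry names follow the C2PA specification's training and data-mining assertion as I understand it, but the exact identifiers should be verified against the current spec, and the checker function is an invented helper for illustration.

```python
# Hedged sketch of a C2PA-style training-and-data-mining assertion.
# Entry labels are assumed from the published C2PA spec; verify before use.
DO_NOT_TRAIN_ASSERTION = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}

def generative_training_allowed(assertion: dict) -> bool:
    """Hypothetical check a model maker's pipeline might run:
    return False if the creator has opted out of generative-AI training."""
    entries = assertion.get("data", {}).get("entries", {})
    return entries.get("c2pa.ai_generative_training", {}).get("use") != "notAllowed"

print(generative_training_allowed(DO_NOT_TRAIN_ASSERTION))  # prints False
```

Note that, as the article explains, nothing currently obliges an AI company to run such a check; the flag is a signal, not an enforcement mechanism.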
Adobe's effort to create a recognizable signal for model makers about AI training data is commendable, but its success hinges on AI companies' willingness to respect the standard. The challenge is underscored by past incidents, such as Meta's auto-tagging of images with a "Made with AI" label, which drew backlash from photographers.
According to Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe, the app was developed with input from content creators. He noted that regulations around copyright and AI training data remain diverse and fragmented globally, so Adobe aims to give creators a straightforward way to express their preferences to AI platforms. "Content creators want a simple way to indicate that they don't want their content to be used for generative AI training," Parsons said.
In addition to the app, Adobe is launching a Chrome extension that lets users identify images carrying content credentials. The extension relies on a combination of digital fingerprinting, open-source watermarking, and cryptographic metadata, so credentials can be recovered even if the image is altered or its embedded metadata is stripped. Images with content credentials attached display a small "CR" symbol.
With the debate around AI and art ongoing, Parsons asserts that the C2PA does not aim to define what constitutes art. Instead, he believes content credentials can serve as a meaningful marker of ownership. "There is a grey area of when an image is edited using AI, but it is not 100% AI-generated. We are advocating for artists and creators to sign their work and claim attribution," he remarked. While this does not guarantee intellectual property legitimacy or copyrightability, it does affirm that someone crafted the content.
Looking ahead, Adobe plans to expand its tool's capabilities beyond images to include video and audio files, thus broadening the scope of content credentials in the digital landscape.