Not only are artificial intelligence (AI)-generated images depicting the ongoing Israel-Hamas conflict available in Adobe’s stock image library, but some news outlets are also acquiring and using these images in their articles as if they were authentic.
Adobe’s Stock library accepts images produced with machine learning tools, a policy the Photoshop giant announced last year. Under Adobe’s 33 percent revenue-sharing model, contributors earn a third of each sale, which can amount to as much as $26.40 per image license.
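For a sense of the arithmetic, a contributor’s royalty is simply 33 percent of the license price. The short Python sketch below illustrates this; the price points are hypothetical examples rather than Adobe’s actual price list, though the quoted $26.40 ceiling does work out to an $80.00 license at 33 percent.

```python
# Illustrative royalty math under Adobe Stock's 33 percent revenue share.
# The license prices below are hypothetical examples, not Adobe's price
# list; $26.40 corresponds to an $80.00 license at a 33 percent share.
ROYALTY_SHARE = 0.33

for license_price in (9.99, 49.99, 80.00):   # hypothetical price points
    royalty = round(license_price * ROYALTY_SHARE, 2)
    print(f"${license_price:.2f} license -> contributor earns ${royalty:.2f}")
# $80.00 license -> contributor earns $26.40
```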
The advancement of generative AI has made the technology far easier to use. Text-to-image tools now let anyone produce realistic images with little effort, and some individuals have used them to fabricate images depicting the conflict between Israel and Hamas in Gaza, which are then sold on Adobe Stock.
While these images are appropriately labeled “AI-generated” in the stock collection, that disclosure is often dropped when they appear elsewhere online, including in news articles published by less prominent websites that fail to indicate the visuals are synthetic. The trend raises concerns about the spread of misleading information.
Crikey, an Australian news outlet, highlighted an Adobe Stock image titled “Conflict between Israel and Palestine generative AI,” which depicts ominous black smoke billowing from buildings and has been featured in numerous online articles as if it were an authentic photograph.
A cursory search on Adobe Stock turns up more AI-generated images portraying bombings, burning vehicles, and destroyed buildings in Gaza, many of them strikingly photorealistic.
As increasingly realistic yet fabricated visual content proliferates online, authorities have warned about the misuse of AI to spread false narratives, which are difficult to detect and contain. Various organizations are exploring ways to watermark AI-generated content so that online users can distinguish genuine material from fakes.
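To make the watermarking idea concrete (without reference to any particular vendor’s scheme), the toy Python sketch below hides an “AI-GENERATED” tag in the least-significant bits of an image’s blue channel using Pillow. Production provenance watermarks are far more sophisticated and are designed to survive compression, resizing, and cropping; this is only the simplest possible illustration of the concept.

```python
# Toy invisible watermark: hide a short tag in the least-significant bit
# of the blue channel. Real AI-provenance watermarks are far more robust;
# this is only a sketch of the underlying idea.
from PIL import Image

TAG = "AI-GENERATED"

def embed_tag(src: str, dst: str, tag: str = TAG) -> None:
    img = Image.open(src).convert("RGB")
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    pixels = img.load()
    w, h = img.size
    assert len(bits) <= w * h, "image too small for tag"
    for idx, bit in enumerate(bits):
        x, y = idx % w, idx // w
        r, g, b = pixels[x, y]
        pixels[x, y] = (r, g, (b & ~1) | bit)   # overwrite blue-channel LSB
    img.save(dst, "PNG")                        # lossless, preserves LSBs

def read_tag(path: str, n_chars: int = len(TAG)) -> str:
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    w, _ = img.size
    bits = [pixels[idx % w, idx // w][2] & 1 for idx in range(n_chars * 8)]
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode(errors="replace")
```

Saving as PNG matters here: a lossy format like JPEG would destroy the least-significant bits on export, which is precisely why real-world watermarking schemes are so much more elaborate.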
Leading tech and media organizations such as Adobe, Microsoft, the BBC, and the New York Times are collaborating under the Content Authenticity Initiative to implement and advocate for Content Credentials, a system that uses embedded metadata to trace the origin of an image, whether human-created or AI-generated.
While Content Credentials have been specified, practical deployment at scale is still pending, and the system will only be effective with collective buy-in from social media platforms, publishers, artists, developers, and AI practitioners.
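As a rough illustration of what machine-readable provenance enables, the Python sketch below scans a JPEG for an embedded C2PA manifest, which the spec stores in APP11 segments as JUMBF boxes labeled “c2pa.” This only detects the manifest’s presence as a heuristic; it is not a parser or signature verifier, and real tooling should use the Content Authenticity Initiative’s open-source SDKs.

```python
# Heuristic sketch: detect whether a JPEG carries an embedded C2PA
# ("Content Credentials") manifest. Per the C2PA spec, manifests live in
# APP11 (0xFFEB) segments as JUMBF boxes; this only checks for presence.
import struct

def has_content_credentials(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":              # not a JPEG (no SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                  # lost sync with marker stream
            break
        marker = data[i + 1]
        if marker == 0xFF:                   # padding/fill byte
            i += 1
            continue
        if marker == 0x01 or 0xD0 <= marker <= 0xD9:
            i += 2                           # standalone marker, no length
            continue
        if marker == 0xDA:                   # start of scan: APPn all seen
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + length]
        # C2PA manifests sit in APP11 JUMBF boxes labeled "c2pa";
        # finding that label in the payload is our heuristic.
        if marker == 0xEB and b"c2pa" in payload:
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    import sys
    print(has_content_credentials(sys.argv[1]))
```

A positive result only means provenance data is present; determining who signed it, and whether the image was declared AI-generated, requires full manifest validation.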
A representative from Adobe told PetaPixel that all generative AI content submitted for licensing on Adobe Stock must be labeled as such, and that these specific images were categorized as generative AI in compliance with that requirement. The spokesperson stressed that it is important for consumers to know when generative AI tools have been used to create Adobe Stock images.
In an effort to combat misinformation, Adobe is working with publishers, camera manufacturers, and other stakeholders to promote adoption of Content Credentials and is integrating the feature into its own products. The aim is to give users essential insight into how online content was created and edited, including disclosure of any AI tools used in the process.