Google Is Testing a Digital Watermark to Detect AI-Created Images in the Fight Against Misinformation
In an effort to combat disinformation, Google is experimenting with a digital watermarking system designed to identify images produced by artificial intelligence (AI). The technology, called SynthID and developed by DeepMind, Google’s AI subsidiary, embeds watermarks in machine-generated images and can later detect them.
The method involves subtly altering individual pixels within images, rendering the watermarks invisible to the human eye while remaining detectable by computer algorithms. However, DeepMind acknowledges that the system is not entirely immune to extreme forms of image manipulation.
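DeepMind has not published SynthID’s exact technique, but the general idea of hiding a signal in pixel values can be illustrated with a deliberately simple, classic method: least-significant-bit (LSB) embedding. The sketch below is a toy assumption-laden stand-in, not SynthID itself; it shows how flipping only the lowest bit of each pixel changes its value by at most 1 out of 255, invisible to the eye yet trivially readable by software.

```python
# Toy illustration only: SynthID's real method is not public and is far
# more robust than this. LSB embedding hides one watermark bit in each
# pixel's least significant bit, shifting the pixel value by at most 1.

def embed_watermark(pixels, bits):
    """Set the least significant bit of each pixel to the watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels):
    """Read the watermark back out of the pixels' lowest bits."""
    return [p & 1 for p in pixels]

image = [200, 13, 77, 145, 90, 34, 250, 8]   # toy grayscale pixels (0-255)
mark  = [1, 0, 1, 1, 0, 0, 1, 0]             # watermark bits to hide

stamped = embed_watermark(image, mark)
assert extract_watermark(stamped) == mark
# Every pixel moved by at most 1, so the change is imperceptible.
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

Unlike this toy scheme, which a single re-save or crop would destroy, SynthID is designed to survive common edits, which is precisely what makes it harder to strip out.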
As AI-generated images continue to gain prominence, distinguishing authentic photographs from artificially crafted visuals has become increasingly difficult, as demonstrated by quizzes like BBC Bitesize’s AI or Real.
Popular tools like Midjourney, boasting over 14.5 million users, have made AI image generation mainstream. These tools allow individuals to swiftly generate images using simple text instructions, sparking debates on global copyright and ownership concerns.
Google has its own image generation tool known as Imagen, and the watermarking system it is developing will apply only to images generated with that platform. Traditional watermarks consist of logos or text overlaid on images to indicate ownership and deter unauthorized use.
Presently, the BBC News website employs copyright watermarks in its images, usually positioned in the bottom-left corner. However, these traditional watermarks prove ineffective at identifying AI-generated content, as they can be effortlessly edited or removed.
Tech giants employ hashing, a technique that generates digital “fingerprints” for known instances of abusive videos. This helps identify and swiftly remove such content from online platforms. Nonetheless, these fingerprints can be compromised if the video is altered or cropped.
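The article does not name a specific hashing scheme, and production systems (such as Microsoft’s PhotoDNA) are considerably more sophisticated, but the basic idea of a perceptual fingerprint can be sketched with a toy "average hash": compare each pixel to the image’s mean brightness to get a compact bit string. The example also shows the fragility the article mentions, since cropping the image changes the fingerprint.

```python
# Toy "average hash" fingerprint -- a simplified stand-in for the
# perceptual hashing tech companies use to match known abusive content.

def average_hash(pixels):
    """Fingerprint an image: 1 for each pixel brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

original = [10, 200, 30, 220, 15, 240, 25, 210]  # toy grayscale pixels
fingerprint = average_hash(original)              # "01010101"

# Cropping removes pixels, so the stored fingerprint no longer matches:
cropped = original[2:]
assert average_hash(cropped) != fingerprint
```

This fragility under cropping and re-editing is exactly the weakness that a watermark persisting through edits, as Google claims for SynthID, is meant to avoid.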
Google’s system introduces an essentially imperceptible watermark, enabling users of its software to instantly determine whether an image is authentic or AI-generated. According to Pushmeet Kohli, head of research at DeepMind, the modifications made to the images are so subtle that they remain invisible to the human eye.
Kohli explains that unlike hashing, the watermark persists even after subsequent edits or cropping, allowing the software to continue identifying the image’s origin. He emphasizes that this launch is an experimental phase and that user engagement will be crucial in determining the system’s resilience.