Google has developed SynthID Detector, a tool that lets users check whether images, videos, audio, or text were made with Google’s AI technology. According to TechCrunch, the tool verifies media files by finding hidden watermarks that Google embeds to help people determine whether content was generated by AI.
The tool is meant to fight the growing problem of fake media online. Between 2019 and 2024, the number of deepfake videos increased by an alarming 550%. Last year, four of the top 20 most-viewed posts on Facebook in the U.S. were created by AI.
Google first launched its SynthID technology in 2023. It was developed as a way to add invisible watermarks to AI-generated images without lowering their quality. Over time, Google expanded this system to cover other types of media, including audio, video, and text. Today, Google says more than 10 billion pieces of content have been marked using SynthID.
The tool doesn’t just confirm whether content is AI-generated; it also highlights the parts of the media most likely to have been created by AI, making it easier for users to see which sections of a file may not be human-made. After a file is uploaded, the tool searches for the special SynthID watermark hidden during creation by Google AI systems such as Gemini, Imagen, Veo, or Lyria.
In Google’s words, “Today we’re announcing SynthID Detector, a verification portal to quickly and efficiently identify AI-generated content made with Google AI.” The goal is to support media professionals, researchers, and anyone else looking to verify the authenticity of digital content.
“Content transparency remains a complex challenge. To continue to inform and empower people engaging with AI-generated content, we believe it’s vital to continue collaborating with the AI community and broaden access to transparency tools,” Google stated.
Though it’s currently in early testing and not yet available to everyone, the SynthID Detector represents an important step forward in managing AI’s impact on digital content. By helping users detect AI-generated material, it supports a more transparent and trustworthy internet.
At this point, SynthID Detector only works on content produced by Google’s own AI models. Firms such as Microsoft, Meta, and OpenAI use their own forms of watermarking, which this tool cannot detect.
Nevertheless, Google is finding ways to extend the reach of its watermarking technology. The company has decided to open-source SynthID’s watermarking method so that other developers can integrate it into their own AI systems. By doing this, the industry could adopt watermarking more widely and consistently.
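To make the general idea concrete, here is a minimal, purely illustrative sketch of statistical text watermarking of the broad kind described above. This is not Google’s actual SynthID algorithm or API; the vocabulary, the hashing scheme, and all function names here are hypothetical simplifications. The idea: at each step, the previous token seeds a pseudorandom "green list" of allowed tokens, and a biased generator favors that list; a detector that knows the seeding scheme can then measure how often consecutive tokens land in the green list.

```python
import hashlib
import random

# Hypothetical toy vocabulary standing in for a real tokenizer's vocab.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_set(prev_token, fraction=0.5):
    """Derive a reproducible 'green list' from the previous token.

    Seeding a PRNG with a hash of the previous token means the detector
    can recompute the exact same list later without any shared secret text.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * fraction)
    return set(rng.sample(VOCAB, k))

def generate_watermarked(length=200, seed=0):
    """Toy 'generator' that always picks from the green list.

    A real model would only *bias* its next-token probabilities toward
    green tokens, preserving quality; here we exaggerate for clarity.
    """
    rng = random.Random(seed)
    tokens = ["tok0"]
    for _ in range(length):
        greens = sorted(green_set(tokens[-1]))
        tokens.append(rng.choice(greens))
    return tokens

def green_fraction(tokens):
    """Detection statistic: fraction of tokens that fall in the green list
    derived from their predecessor. Unwatermarked text hovers near the
    base rate (0.5 here); watermarked text scores far above it."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_set(prev))
    return hits / (len(tokens) - 1)
```

Running the detector on watermarked output yields a green fraction near 1.0, while random or human-written token streams score near the base rate of 0.5, which is what makes the watermark statistically detectable without being visible in the text itself.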
Google is also forming partnerships to grow the SynthID ecosystem. For instance, earlier this year, the company worked with NVIDIA to watermark AI-generated videos produced by NVIDIA’s Cosmos system. The partnership ensures that even content made outside Google can carry SynthID marks. Additionally, Google has partnered with GetReal Security, a top company in content verification, to help others detect these marks more easily.
“To help grow a trusted ecosystem, we’ve already open-sourced SynthID text watermarking,” Google said. The company encourages developers and other tech companies to collaborate and integrate these tools for greater content transparency.