Microsoft Launches Video Authenticator for Detecting Deepfakes Ahead of US election
Microsoft has launched Video Authenticator, a tool that analyzes videos and still photos to generate a manipulation score. This tool is an addition to the pile of technologies that can spot synthetic media or deepfakes.
According to Microsoft, the tool provides a percentage chance, or confidence score, that the media has been artificially manipulated.
In a blog post, Microsoft writes, “In the case of a video, it can provide this percentage in real-time on each frame as the video plays.” It adds, “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
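To make the per-frame scoring idea concrete, here is a minimal sketch in Python. Note that Video Authenticator's internals and API are not public; `score_frame` below is a hypothetical placeholder, not Microsoft's actual detector.

```python
def score_frame(frame):
    """Hypothetical detector: returns a 0-100 manipulation confidence.

    A real model would analyze blending boundaries and subtle
    fading/greyscale artifacts; this placeholder simply returns 0.0.
    """
    return 0.0

def score_video(frames):
    # Emit one confidence score per frame, as the video plays.
    return [score_frame(f) for f in frames]

# Example with three dummy "frames" (any objects suffice for this sketch).
scores = score_video(["frame0", "frame1", "frame2"])
print(scores)  # one score per frame
```

The key design point the article describes is that scoring happens per frame rather than once per video, so viewers can see confidence change as the video plays.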
Deepfakes are created with a range of intentions and can easily deceive unsuspecting viewers, which makes identifying visual disinformation a hard problem.
This summer, Facebook ran a competition to develop a deepfake detector whose results were better than guessing.
The company is partnering with the AI Foundation, based in San Francisco, to make the tool available to organizations involved in the democratic process.
Microsoft adds “Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here.”
“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods.”