
‘Universal’ detector spots AI deepfake videos with record accuracy

A deepfake video of Australian prime minister Anthony Albanese on a smartphone

Australian Associated Press/Alamy

A universal deepfake detector has achieved the best accuracy yet in spotting multiple types of videos manipulated or completely generated by artificial intelligence. The technology may help flag non-consensual AI-generated pornography, deepfake scams or election misinformation videos.

The widespread availability of cheap AI-powered deepfake creation tools has fuelled the out-of-control online spread of synthetic videos. Many depict women – including celebrities and even schoolgirls – in non-consensual pornography. Deepfakes have also been used to influence political elections and to enhance financial scams targeting both ordinary consumers and company executives.

But most AI models trained to detect synthetic video focus on faces – which means they are most effective at spotting one specific type of deepfake, where a real person’s face is swapped into an existing video. “We need one model that will be able to detect face-manipulated videos as well as background-manipulated or fully AI-generated videos,” says Rohit Kundu at the University of California, Riverside. “Our model addresses exactly that concern – we assume that the entire video may be generated synthetically.”

Kundu and his colleagues trained their AI-powered universal detector to monitor multiple background elements of videos, as well as people’s faces. It can spot subtle signs of spatial and temporal inconsistencies in deepfakes. As a result, it can detect inconsistent lighting conditions on people who were artificially inserted into face-swap videos, discrepancies in the background details of completely AI-generated videos and even signs of AI manipulation in synthetic videos that don’t contain any human faces. The detector also flags realistic-looking scenes from video games, such as Grand Theft Auto V, that are not necessarily generated by AI.
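One of the signals mentioned above, temporal inconsistency, can be illustrated with a toy example. This is a minimal sketch, not the Riverside team's model: it assumes only that real footage tends to change smoothly from frame to frame, while crudely generated video can flicker. The function name and threshold-free comparison are hypothetical, for illustration.

```python
# Toy illustration of a temporal-consistency cue (NOT the published detector).
# Real detectors learn such cues; here we hand-compute one: the mean absolute
# pixel change between consecutive frames.
import numpy as np

def temporal_flicker_score(frames: np.ndarray) -> float:
    """Mean absolute pixel change between consecutive frames.

    frames: array of shape (T, H, W), grayscale values in [0, 1].
    Higher scores suggest frame-to-frame flicker.
    """
    diffs = np.abs(np.diff(frames, axis=0))  # shape (T-1, H, W)
    return float(diffs.mean())

# A smoothly brightening "real" clip vs. a clip with random per-frame noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16)
smooth = np.stack([np.full((8, 8), v) for v in t])  # gradual change
flicker = rng.random((16, 8, 8))                    # uncorrelated frames

assert temporal_flicker_score(smooth) < temporal_flicker_score(flicker)
```

A learned detector would combine many such spatial and temporal cues over faces and backgrounds alike, rather than relying on a single hand-crafted statistic.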

“Most existing methods handle AI-generated face videos – such as face-swaps, lip-syncing videos or face reenactments that animate a face from a single image,” says Siwei Lyu at the University at Buffalo in New York. “This method has a broader applicability range.”

The universal detector achieved between 95 per cent and 99 per cent accuracy at identifying four sets of test videos involving face-manipulated deepfakes, better than any other published method for this type of deepfake. It also outperformed every previously evaluated detector on completely synthetic videos. The researchers presented their work at the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) in Nashville, Tennessee, on 15 June.

Several Google researchers also participated in developing the new detector. Google did not respond to questions about whether this detection method could help spot deepfakes on its platforms, such as YouTube. But the company is among those supporting watermarking tools that make it easier to identify content generated by their own AI systems.

The universal detector could also be improved in the future. For instance, it would be helpful if it could detect deepfakes deployed during live video conferencing calls, a trick some scammers have already begun using.

“How do you know that the person on the other side is authentic, or is it a deepfake generated video, and can this be determined even as the video travels over a network and is affected by the network’s characteristics, such as available bandwidth?” says Amit Roy-Chowdhury at the University of California, Riverside. “That’s another direction we are looking at in our lab.”


By uttu

