A realistic video of a news anchor reporting on wildfires across central Canada circulated on social media last week, complete with synchronized lip movements and natural breathing sounds. The footage was entirely fabricated by Google's new artificial intelligence tool, Veo 3, in what experts call a watershed moment for synthetic media.
The emergence of AI video generators that create convincing fake content with unprecedented ease has intensified concerns about misinformation on social platforms, as their output becomes virtually indistinguishable from authentic footage.
Google's Veo 3, launched in late May, represents a leap forward in AI-generated video quality. Unlike previous tools, it produces content with dialogue, soundtracks and sound effects while largely following the rules of physics [1]. A TIME investigation found the platform could generate misleading clips about news events, including fake footage of Pakistani crowds setting fire to a Hindu temple and election workers shredding ballots [2].
"What makes deepfakes especially dangerous is how easily cybercriminals can replicate anyone, making them do anything, and make it appear real," according to researchers at the University of Maryland3.
The technology has democratized sophisticated video manipulation. One researcher created a deepfake lecture video in eight minutes for $11 using commercially available AI platforms [4].
The proliferation of synthetic content has begun undermining public confidence in visual media. Research from Toronto Metropolitan University's Social Media Lab found that 59 percent of the 1,500 Canadians surveyed had lost trust in online political news over fears of fabrication or manipulation [1].
"Even as we enhance our critical thinking skills and prioritize truth over sensationalism, we may find ourselves in a situation where trust is elusive," said Angela Misri, a journalism professor at Toronto Metropolitan University who studies AI and ethics1.
Nina Brown, a Syracuse University professor specializing in media law and technology, identified the erosion of collective online trust as the most concerning development. "There are smaller harms that cumulatively have this effect of, 'can anybody trust what they see?'" she told TIME. "That's the biggest danger" [2].
Meta removed hundreds of advertisements promoting "nudify" deepfake tools from its platforms following a CBS News investigation, highlighting the ongoing challenge social media companies face in combating synthetic content [1].
"Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated," said Julia Smakman, a researcher at the Ada Lovelace Institute2.
The technology's rapid advancement has outpaced detection methods, creating what cybersecurity experts describe as an escalating arms race between creators of synthetic content and those working to identify it.