AI and Deepfakes

Will We Believe Our Eyes in 2025?

Frontiers

AI and Media Radar Report

By Jon Dakss, Product & Technology Lead · 15.01.25

As AI continues to reshape media at warp speed in 2025, I’m excited to bring you the first in a series of AI and Media Radar Reports from Frontiers, our newly launched AI Product Studio.
In 2024 the pace of Generative AI progress was truly breathtaking; milestones and new product releases arrived weekly, if not daily.
 But the AI coin had a darker flip side: bad actors created and spread increasingly realistic AI-generated or manipulated audio, photos and videos, further eroding public trust in news sources.
 Who will play the larger role in curbing this in 2025: governments and legislatures, Big Tech, or news organizations with reputations on the line?
 2024 began with a series of scandals centered on media manipulation. In January, viral AI-generated pornographic images of Taylor Swift circulated on X (formerly Twitter), receiving millions of views before the source account was suspended for violating platform policy. In March, public trust in Britain’s royal family was shaken when a number of news organizations retracted official family photos found to have been intentionally manipulated by the source.
 As summer approached, we spoke with a number of executives at news organizations who were bracing for AI and deepfakes - originating both domestically and from foreign actors - to significantly affect the US presidential election and other elections abroad.
 But as 2024 continued, the feared wave of deceptive, targeted deepfakes never quite materialized. Instead, the most visible use of AI in many countries was the creation of memes and content whose artificial origins weren't disguised. Often this content was openly shared on social media by politicians and their supporters: Donald Trump shared deepfake images of Taylor Swift seemingly announcing her support for his presidential run, and Elon Musk liked and reposted doctored or deepfake videos of Kamala Harris. Did this content swing the election? Most likely not.

 However, deepfakes and manipulated content not only harm their subjects; they also feed societal distrust of news media, making fake content increasingly difficult to regulate and combat. Beyond directly deceiving the public, AI-generated content blurs the line between reality and fiction, enabling a phenomenon known as the "liar's dividend." Law professors Bobby Chesney and Danielle Citron, who coined the term, posit that liars aiming to avoid accountability become more believable as the public grows more aware of deepfakes. As increasingly realistic deepfakes circulate unchecked, false claims that real content is AI-generated become more persuasive as well.
 Will governments and Big Tech introduce significant measures to combat deepfakery in 2025? As of January 2025, 20 US states have enacted laws regulating AI deepfakes; in September 2024, California passed eight new laws aimed at addressing various harms caused by AI deepfakes, including election interference and sexual exploitation.
 With platforms like X and Facebook recently replacing content moderation teams with community commentary, and Google allowing creators to self-label AI-generated video shared on YouTube, Big Tech appears to be moving away from centralized moderation toward the honor system and community policing. The onus of maintaining credibility for factual news content will fall squarely on credible news organizations themselves.
 At the start of 2025 we are already seeing new initiatives - and innovative new technologies - designed to assert the integrity of reputably sourced audio, imagery and video and to sound the alarm on deepfakes and AI-manipulated material. Smart newsrooms are already building layered defenses against AI deception: trust measures at the top, organizational safeguards in the middle, and technical protections as the foundation. We envision brand-new user experiences that enhance how content is consumed, explicitly designed to foster trust and understanding, backed by verifiable proof of media provenance. There is a real opportunity in 2025 for credible news organizations with central editorial oversight and reputations for journalistic integrity to stand in contrast to social networks as reliable sources of news and information.
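 To make the "technical protection as the foundation" layer concrete, here is a deliberately minimal sketch of tamper-evident media attestation. It is an illustration only, not any real newsroom's system: production provenance standards such as C2PA embed cryptographically signed manifests using X.509 certificates and asymmetric signatures, whereas this toy version uses a shared-secret HMAC over the file bytes (the key name and sample bytes are placeholders) purely to show the core idea — any edit to the media invalidates its provenance tag.

```python
# Toy sketch of tamper-evident media provenance (illustrative only).
# Real systems (e.g. C2PA) use signed manifests with X.509 certificates;
# here an HMAC over the raw bytes stands in for that signature.
import hashlib
import hmac

NEWSROOM_KEY = b"example-shared-secret"  # placeholder key, not a real credential

def attest(media_bytes: bytes) -> str:
    """Produce a provenance tag binding the newsroom key to these exact bytes."""
    return hmac.new(NEWSROOM_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag; any manipulation of the media makes it fail."""
    return hmac.compare_digest(attest(media_bytes), tag)

original = b"...photo bytes..."   # stand-in for an image file's contents
tag = attest(original)

assert verify(original, tag)              # untouched media verifies
assert not verify(original + b"x", tag)   # manipulated media fails
```

The design point is that verification depends only on the bytes and the published tag, so a reader-facing experience can check provenance without trusting the channel the media arrived through.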
 We at Frontiers welcome the opportunity to hear your take on AI and explore ways we can support your ambitions.

* Schedule a Strategy Session: Let's explore how we can accelerate your organization's AI initiatives
* Rate Your AI Readiness: Take our interactive AI Readiness Assessment to receive tailored recommendations for your organization
* Dive Deeper: Read our comprehensive report on "The State of AI and News"
