Chip Somodevilla/Getty Images(WASHINGTON) — Days after an artist uploaded a digitally manipulated video online appearing to show Facebook CEO Mark Zuckerberg reciting a sinister monologue about “control[ling] the future,” a real-life panel of experts sat before lawmakers on Thursday with a stark message: This is only the beginning of the problem of “deepfakes,” and there are no easy answers.
“There are a few phenomena that come together that make deepfakes particularly troubling when they’re provocative and destructive,” Danielle Citron, a law professor at the University of Maryland who has written about the dangers of deepfakes, told the House Intelligence Committee. “The first is that we know that as human beings, video and audio is so visceral, we tend to believe what our eyes and ears are telling us.”
The term deepfake refers to video or audio that has been altered with the aid of deep learning technology, usually to show a person doing something they never did or saying something they never said.
One way is by digitally stitching someone’s face onto another person’s body, as ABC News’ correspondent Kyra Phillips experienced firsthand in a recent “Nightline” report. Though media has been artificially manipulated for decades, faster computers and easy-to-use, publicly available technology make convincing fakes increasingly easy to produce and spread online, experts say.
The committee hearing dealt with potential solutions to combat deepfakes, from technological tools that automatically detect digital forgeries, to legal measures to punish creators, to potential regulations for social media companies. But it also grappled with a more nuanced problem: when manipulated or “synthetic” videos run headlong into questions of free speech under the First Amendment, questions with which social media companies are currently wrestling.
For instance, most of the expert witnesses agreed that Facebook was correct when it decided not to remove the fake Zuckerberg video.
“I think that’s a perfect example where given the context that’s satire and parody that is really healthy for conversation,” Citron said.
The UK artists who posted the video, Bill Posters and Daniel Howe, also posted to Instagram other obviously doctored videos of President Donald Trump and celebrities Morgan Freeman and Kim Kardashian, in which the figures attributed their success to a fictional organization called SPECTRE.
“In response to the recent global scandals concerning data, democracy, privacy and digital surveillance, we wanted to tear open the ‘black box’ of the digital influence industry and reveal to others what it is really like,” Posters said in a statement.
Facebook, which owns Instagram, declined to remove the fake videos, saying that the company would treat them the same way it handles all misinformation on Instagram. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages,” a Facebook spokesperson told ABC News.
Citron said the fake video also helped to spark a conversation about another manipulated video recently in the news — one of House Speaker Nancy Pelosi that had been subtly altered in a way that made her appear impaired, an example of what experts called a “cheapfake.”
The Pelosi case exposed a split among the major digital platforms: YouTube took the video down, while Facebook left it up but downranked it to slow its social spread. Twitter also left the video online, suggesting it did not violate its policies.
Citron suggested it was right to take action against such a video.
“For something like a video where it’s clearly doctored and an impersonation, not satire, not parody — there are wonderful uses for deepfakes that are art, historical, sort of rejuvenating for people to create them about themselves…” she said.
Fellow witness Clint Watts, a senior fellow at the Center for Cyber and Homeland Security at George Washington University, said that the comparison between the Zuckerberg and Pelosi videos was instructive in that it showed how much context mattered when making those kinds of calls.
“No one really believes Mark Zuckerberg can control the future,” Watts said. Watts commended Facebook for sticking to its policies as they stand but said he believes the company is looking to the political leadership for input on how those policies may need to evolve.
Rep. Jim Himes, D-Conn., said that as much as he’s concerned about deepfakes, he’s also concerned about the methods of policing them.
“I do want to have this conversation because as awful as I think we all thought that Pelosi video was, there’s got to be a difference if the Russians put that up, which is one thing, versus if Mad Magazine does that as a satire,” he said. “Some of the language you’ve used here today makes me worry about First Amendment equities, free expression, centuries-long tradition of satirizing people like us who richly deserve being satirized …”
The expert panel suggested a number of methods to help counter the proliferation of deepfake videos. Citron proposed a legal amendment that would make social media platforms more responsible for the content they host. Currently these companies are immune from liability for posts or videos hosted on their platforms because they don’t generate or co-create the content. Citron suggested that this immunity should be conditioned on what she called “reasonable content moderation practices,” putting the onus on the platforms to put rigorous vetting processes in place.
Twitter says that this immunity, cited by Citron, which is provided by Section 230 of the Communications Decency Act, is vital to protecting speech on the platform. “Promoting and advancing healthy public conversations is our singular objective as a company. CDA 230 protects this mission — it also strengthens our policy enforcement and moderation capabilities,” the company said.
David Doermann, a former program manager at DARPA (the Defense Advanced Research Projects Agency), advocated for a delay to be implemented so that some initial verification can be done before videos are published to social media. “There’s no reason why these things have to be instantaneous,” he told the committee. “We’ve done it for child pornography, we’ve done it for human trafficking. They’re serious about those things. This is another area a little bit more in the middle, but I think they can make the same effort in these areas to do that type of triage.”
According to Twitter, triaging each Tweet posted is neither scalable nor realistic. Twitter says that while it will remove content that violates its rules, it is not in the business of deciding what is true online. Therefore many manipulated videos won’t fall afoul of Twitter’s rules.
All of the experts expressed an urgent need to work on the problem, as the technology used to create these deepfakes is now widely available. Doermann said that even a high school student with a good computer could download software openly over the internet and create a deepfake video overnight. “It’s not something that you have to be an A.I. expert to run, a novice can run these type of things,” he said.
Facebook said in a statement that it has invested heavily and is working closely with experts in the field to help combat deepfakes and misinformation: “We continue to look at how we can improve our approach and the systems we’ve built. Part of that includes getting outside feedback from academics, experts and policymakers.”
Citron arguably summed up the panel’s suggestions best when she said that tackling the problem of deepfakes requires a multi-pronged approach. “There’s no silver bullet. We need a combination of law, markets and really societal resistance.”
Copyright © 2019, ABC Radio. All rights reserved.