Move over, Nancy Pelosi.
A “deep-fake” video featuring the likeness of Facebook CEO Mark Zuckerberg declaring “whoever controls the data, controls the future” has surfaced, triggering a new round of questions (and smirks) about how to deal with the rise of doctored videos on the eve of a Congressional hearing on the matter.
Facebook was hit with harsh criticism last month when it refused to pull from its platform a crudely altered video of House Speaker Nancy Pelosi that made her appear to be drunkenly stumbling over her words. President Donald Trump shared the clip on Twitter with the caption, “PELOSI STAMMERS THROUGH NEWS CONFERENCE.”
The Pelosi video is likely to get plenty of attention at tomorrow’s hearing convened by the House Intelligence Committee “on the national security challenges of artificial intelligence (AI), manipulated media, and ‘deepfake’ technology.” The House committee is concerned not only with the national security implications, but, it said in a statement, with “democratic governance, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens.”
At the rate the technology is progressing, it won’t be long before boardrooms take up the matter too. Adobe Research and others demonstrated just this week a deep-fake tool powered by text-to-speech machine learning algorithms that can literally put words in the mouth of whoever appears in a video.
The digitally altered Zuckerberg video, which runs less than 20 seconds, appeared on Instagram over the weekend. To make the fake, two British artists used AI tools developed by Canny AI, an Israeli digital media company whose homepage carries the prominent tagline “Storytelling without Barriers.” The video languished in obscurity at first, but went viral in recent hours, collecting more than 30,000 views and counting. By this morning, discussion of the Zuckerberg deep-fake was trending on Twitter.
In the video, Zuckerberg appears to be speaking to CBS News. A banner saying, “Zuckerberg: Announces New Measures to ‘Protect Elections’” appears at the bottom of the screen. But his words tell a different story. “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures,” he begins.
The video is unlikely to hoodwink anyone, but it’s hardly a flattering look for Facebook and Zuckerberg. Deep-fake Zuck, as the AI-powered character is being called, “looks quite a bit like a Weekend At Bernie’s-style corpse-marionette,” Gizmodo quips.
In a statement emailed to tech journalists, a spokesperson for Instagram said they will leave the fake video up—for now. “We will treat this content the same way we treat all misinformation on Instagram. If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”
The video emerged just before the release of a new report on deepfakes and synthetic media by Witness, a human rights organization that has been organizing training sessions with journalists and social justice activists on how to use the latest technologies to safely report on abuses of power. With deep-fakes, the concern among activists and journalists is that the technology will be used to discredit the authenticity of their work and even attack them personally, said Sam Gregory, the program director at Witness. “This is an example of an emerging threat” to spread misinformation and doubts about human rights work, he said.
One of the problems Gregory sees around deep-fakes is what he calls “the tools gap.” The technology to build deep-fakes is ramping up quickly; you don’t even need to be technically savvy to use the tools. But there are far fewer resources available to detect the fakes once they’re in the wild. Witness has discussed with technology firms the importance of sharing the underlying training data that goes into their deep-fake algorithms. “As companies release products that enable creation, they should release products that enable detection as well,” Gregory says.
The role technology companies play in the proliferation of this technology will be a big talking point at tomorrow’s hearing on the Hill. Scheduled to testify are law and IT professors, plus a policy advisor at OpenAI, an AI think tank funded by Reid Hoffman’s charitable foundation and Khosla Ventures.
Facebook, which has been embroiled in the deep-fakes controversy since the emergence of the Pelosi video, is not scheduled to send anybody to participate in the hearings.