Congress Raises Questions On How Deep Fake Technologies Could Affect 2020 Campaign

Jun 13, 2019
Originally published on June 13, 2019 4:10 pm
Copyright 2019 NPR. To see more, visit https://www.npr.org.

AUDIE CORNISH, HOST:

With every passing day, deep fake technologies are becoming more sophisticated. These technologies involve using artificial intelligence to modify familiar faces and voices in order to create convincing - and false - video and audio. The emergence of this technology has serious implications for how we view truth and reality in our society.

NPR's Tim Mak has more on how Congress is raising questions about how this all could affect the 2020 campaign, businesses and our national security.

TIM MAK, BYLINE: What you will hear next was never actually said. It looks like a Facebook office. It looks like Mark Zuckerberg giving an interview. It looks like his mouth is moving along with the words.

(SOUNDBITE OF ARCHIVED RECORDING)

UNIDENTIFIED PERSON: Imagine this for a second. One man with total control of billions of people's stolen data, all their secrets, their lives, their futures.

MAK: But that wasn't the founder of Facebook. It was instead a deep fake created by an artist in the U.K. and posted on Instagram. This technology is quickly becoming a national security concern. The House Intelligence Committee held a hearing Thursday warning of the threat.

(SOUNDBITE OF ARCHIVED RECORDING)

ADAM SCHIFF: With sufficient training data, these powerful deep-fake-generating algorithms can portray a real person doing something they never did or saying words they never uttered.

MAK: That's committee Chairman Adam Schiff. The disaster scenarios are obvious - a deep fake showing a 2020 candidate inciting violence against an ethnic group or false audio of a military official giving orders to mass troops at a border. There is no easy solution. Danielle Citron, a law professor at the University of Maryland, explained the challenge of prosecuting those who might create deep fakes.

DANIELLE CITRON: I would imagine that authenticating and figuring out who the person is who created the deep fake would be hard. And even if you could figure out who that person is, the question is, do they live in the United States? Can we get jurisdiction over them?

MAK: Another route would be to hold social media organizations responsible for the spread of deep fakes, but many Republicans believe tech groups harbor an anti-conservative bias and can't be trusted as gatekeepers. Here's how the top Republican on the committee, Devin Nunes, described them.

(SOUNDBITE OF ARCHIVED RECORDING)

DEVIN NUNES: These tech oligarch companies - there's only a few of them. You know who they are.

MAK: Up until now, social media companies have had general legal immunity for content posted on their platforms, but that may be changing. Listen to this question from Schiff.

(SOUNDBITE OF ARCHIVED RECORDING)

SCHIFF: Is it time to do away with that immunity so that the platforms are required to maintain a certain standard of care?

MAK: Any effort addressing the threat of deep fakes involves increasing public awareness, but even this poses a problem. It's called the liar's dividend. Here's Citron explaining what that is.

CITRON: Once you get everyone all educated, then, you know, you have wrongdoers point to real video - genuine audio and video that show them doing something illegal or wrong, right? - and they get to say, oh, it's a deep fake. Pay no attention.

MAK: These technologies are evolving rapidly, to the point that almost anyone can use them. Here's Lindsay Gorman, the fellow for emerging technologies at the Alliance for Securing Democracy.

LINDSAY GORMAN: What used to be something that was available only to sophisticated machine learning experts is now becoming available to the general public.

MAK: So, as the witnesses at Thursday's committee hearing attested, it may only be a matter of time until deep fakes affect our politics and our national security. Tim Mak, NPR News, Washington. Transcript provided by NPR, Copyright NPR.