
Deepfakes: When Seeing May Not Be Believing

 

How deepfake videos can undermine our elections, the stock markets and our belief in truth itself

Last weekend, attendees at a conference in Las Vegas got quite a surprise. They were waiting for a presentation by Democratic National Committee Chairman Tom Perez when, instead, his image appeared on a big screen via Skype. During his brief video appearance, the chairman apologized that he was unable to be there in person. In fact, the voice coming out of Perez’s mouth belonged to the DNC’s chief security officer, Bob Lord, and the audience had just been treated to a deepfake video — video that has been manipulated to make it look like people said or did things they actually didn’t.

The video was shown in the AI Village at DEFCON — one of the world’s largest hacker conventions — to demonstrate the threat deepfakes pose to the 2020 election and beyond. It was made with the cooperation of the DNC; Artificial Intelligence (AI) experts altered Perez’s face to make it look as if he were apologizing, and Lord supplied his voice. 

Watch carefully as Bill Hader turns into Seth Rogen & Tom Cruise!

https://www.youtube.com/watch?v=VWrhRBb-1Ig


Remember when Forrest Gump shook hands and spoke with Presidents Kennedy and Nixon? That was an early example of manipulated video. A more recent, and less innocuous, example was a video of House Speaker Nancy Pelosi, altered to make it appear that she was drunk and slurring her words. The video went viral and was viewed more than 3 million times. For some viewers, it confirmed their disdain for Pelosi; for others, it simply confirmed that they can’t trust the other side of the political spectrum. It’s troubling that neither reaction is based in fact.

In the 25 years since Forrest Gump introduced manipulated video, sophisticated AI has been developed to create nearly undetectable deepfakes. Not only has the technology improved, but its nefarious uses have proliferated. Take deepfake porn, in which a victim’s face is superimposed on the body of a porn actor. It’s often used as a weapon against women. Actress Scarlett Johansson, whose face was grafted onto dozens of pornographic scenes using free online AI software, is one famous example, but it happens to ordinary women, too. Just for a second, imagine your daughter or son, husband or wife being targeted by deepfake porn to destroy their reputation or settle an old score. As the technology becomes less expensive and more widely available, that is what we face.

Until recently, the warnings about deepfakes in the U.S. have focused on political ramifications, most notably their expected use in the 2020 election. Imagine a doctored video of a candidate saying they’re changing their position on gun control, for example. In June, during a House Intelligence Committee hearing on the issue, experts warned that there are multiple risks, including to national security: What if our enemies post a video of President Trump saying he’s just launched nuclear weapons at Russia or North Korea? 

The business community and financial markets may also be targeted. A video of Jeff Bezos warning of low quarterly Amazon results could cause a sell-off of Amazon stock, for instance. Bezos has the platform to quickly respond, but by the time he’s corrected the record, real damage could be done. Similarly, CEOs of lesser-known companies could be targeted, say the night before an IPO or new product launch. 

There’s currently a kind of arms race occurring between those developing deepfake technology and those developing ways to detect the altered videos — and the good guys are losing. 

In an interview with The Washington Post in June, Hany Farid, a computer science professor and digital forensics expert at the University of California at Berkeley, said, “We are outgunned. The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”

Soon, we may all be awash in deepfake video, unable to distinguish truth from fiction, and this is perhaps the most worrying aspect. We already live in an age when more than half the U.S. population distrusts the media, the government and their neighbors, and belief in conspiracy theories is on the rise. A staggering one in 10 Americans doesn’t believe we landed on the moon — 18% of the nonbelievers are between the ages of 18 and 34 — and a 2018 poll found that one-third of Americans don’t believe that 6 million Jews were murdered in the Holocaust. That’s to say nothing of the people worldwide who deny the Holocaust ever happened.

Historically, the best way to refute conspiracy theorists has been video proof. The countless hours of footage of the Allies liberating concentration camps, the 9/11 attacks and that grainy film of Neil Armstrong planting the American flag on the moon couldn’t be denied. Until now. 

In a statement, the House Intelligence Committee said, “Deep fakes raise profound questions about national security and democratic governance, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens.”

In other words, we’re entering an age when seeing is no longer believing. 

This is all part of a larger movement in which technology is used to erode trust. In the hands of foreign enemies like Russia, it can and will be used to undermine our belief in the free press, in our leadership and in democracy. It is essentially the use of the First Amendment to undermine the First Amendment, and unethical corporations and cybercriminals will hop on board as soon as the AI technology is affordable to the mass market.

So where is the hope in all of this? Unlike our weak regulatory response to the malicious tools that came before — from viruses to spyware, botnets to ransomware — this time we must combine comprehensive legislative oversight and control with the ethical use of technology to proactively minimize the problem before it becomes mainstream. Our senators and representatives must take the lead in setting standards for how AI technology, including the technology used to produce deepfakes, is released, utilized and policed.

Bruce Schneier’s book, “Click Here to Kill Everybody,” includes an excellent primer on the regulatory framework that would start us down the path. We, as voters, must directly express our concern to congressional leadership and urge them to act before a proliferation of deepfake videos destroys reputations — along with our ability to believe our own eyes.


About Cybersecurity Keynote Speaker John Sileo

John Sileo is an award-winning author and keynote speaker on cybersecurity, identity theft and tech/life balance. He energizes conferences, corporate trainings and main-stage events by making security fun and engaging. His clients include the Pentagon, Schwab and organizations of all sizes. John got started in cybersecurity when he lost everything, including his $2 million business, to cybercrime. Since then, he has shared his experiences on 60 Minutes and Anderson Cooper, and even while cooking meatballs with Rachael Ray. Contact John directly to see how he can customize his presentations to your audience.
