How much of a danger to democracy are deepfake videos?
PUBLISHED: 07:00 09 November 2019 | UPDATED: 10:02 11 November 2019
Deepfake. It's a phenomenon that's got politicians hot under the collar and cyber security specialists on the back foot.
Deepfakes are videos which use artificial intelligence to produce lifelike audio and visuals of real people carrying out fictitious actions.
Deepfakes work by gathering hundreds or thousands of images of a person and using AI to stitch these images together to produce moving footage.
Combined with an audio track, the result is that these people - particularly well-photographed celebrities or public figures - are presented as doing or saying something they have not.
Perhaps the most widely shared deepfake video (a term first coined on social news site Reddit) was produced by BuzzFeed.
The video appeared to show Barack Obama making a public service announcement warning against deepfakes, before going on to make some controversial remarks.
It is revealed 40 seconds into the video that this is actually a deepfake video, with comedian Jordan Peele providing a voiceover.
Deepfakes first appeared on porn sites in late 2016 and early 2017, with celebrities' faces superimposed into the footage.
According to research from cyber security company Deeptrace, 96% of deepfakes are pornographic - and the number of clips has nearly doubled, from 7,964 in December 2018 to 14,698 this month.
Over the past few years they have slipped further into the mainstream, becoming both easier to create and easier to come across.
"Deepfake technology is like an arms race," said Dr Oli Buckley, a senior lecturer in cyber security at the University of East Anglia.
"Every time we find a way to identify a deepfake image the trolls move on and advance the technology to cover the trait. It's a vicious cycle because as soon as academics write a paper about how to spot a deepfake we're telling their creators what they're doing wrong."
He explained: "For a while we were able to use blinking as a tell of a deepfake. All of the images that were being used to create the video had the person's eyes open, so you could tell it was a deepfake if the person in the image didn't blink or blinked in a strange way."
However, the technology has since advanced to cover that tell.
"Now we can look at identifying deepfakes by other traits. Are the people in the video moving their hands around while they talk? Are their eyebrows and facial expressions moving as they do? These are the things people do in normal conversation, but you don't see them in deepfakes," Dr Buckley went on.
Altered videos are now becoming so convincing that even victims of these trolls are falling for them.
One example is a video shared by Donald Trump of Speaker of the House Nancy Pelosi supposedly slurring her words, which the president claimed showed alcoholism or mental health issues.
The video is not a deepfake, merely slowed-down footage, but it is a prime example of how doctored clips go viral.
As to why deepfake images spread particularly quickly on social media, Dr Buckley reasoned that this is down to a shared belief system. He explained: "On social media you've built your own community which largely share your beliefs. So they spread because deepfakes appear to reinforce the opinions you have."
With a general election in the UK around the corner and a US election scheduled for November next year, policy makers are ramping up pressure on social media platforms to cut down on "fake news".
Just last week, Alexandria Ocasio-Cortez grilled Mark Zuckerberg on Facebook's policies around the spread of fake information and images.
Mr Zuckerberg confirmed that following the Cambridge Analytica scandal of the last election, Facebook would be taking some measures to prevent the spread of fake news to the detriment of democracy.
"I don't know to what extent it's the platform's responsibility to monitor content," said Dr Buckley. "Obviously if it's clearly illegal in its content: it's racist, homophobic, sexist, and so on, it needs to be removed. The only way I think consumers can truly find out if what they're seeing is genuine is by looking at multiple sources. There are also websites like Snopes.com which have been set up to fact check internet sources."