If you’ve ever wanted to convince your friends you can dance like Ariana Grande or Drake, or fly like Superman, now you can. Just stick your face on the dancing superstar or aerobatic superhero and run it through deepfake software like FakeApp.

Deepfake is like Photoshop, except instead of letting you scrub pesky ex-boyfriends out of family photos, it lets anyone modify any video. It uses artificial intelligence, machine learning, facial mapping and other advanced technologies to seamlessly integrate or superimpose any image you wish – say, a person’s face – into a video.
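To make the idea a little more concrete, here’s a toy sketch in Python using OpenCV. It only does a crude face-region paste – the file names and the 70/30 blend are illustrative assumptions, and real tools like FakeApp swap a trained neural network in for that paste step – but the basic loop of finding the face in every frame, transforming it and blending it back in is the general shape of the pipeline.

```python
# Toy illustration of the face-mapping step, not how FakeApp actually works:
# detect a face in each frame of a video and blend a source face over it.
# Real deepfake tools replace this crude paste with a neural network trained
# on both faces; the file names and 70/30 blend below are made-up examples.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

source_face = cv2.imread("source_face.jpg")      # hypothetical input image
video_in = cv2.VideoCapture("target_video.mp4")  # hypothetical input video
fps = video_in.get(cv2.CAP_PROP_FPS)
width = int(video_in.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video_in.get(cv2.CAP_PROP_FRAME_HEIGHT))
video_out = cv2.VideoWriter("swapped.mp4",
                            cv2.VideoWriter_fourcc(*"mp4v"), fps,
                            (width, height))

while True:
    ok, frame = video_in.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        # Resize the source face to the detected region and blend it in.
        patch = cv2.resize(source_face, (w, h))
        frame[y:y + h, x:x + w] = cv2.addWeighted(
            frame[y:y + h, x:x + w], 0.3, patch, 0.7, 0)
    video_out.write(frame)

video_in.release()
video_out.release()
```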

Going over to the dark side

On the surface, all this sounds like a fun way for kids to waste a Sunday afternoon and have some fun on Instagram. But like any multimedia technology, it can be abused. And not just by kids.

For example, the technology has already been used to put celebrity faces on porn actors’ bodies, with the results passed off as real leaked footage. Indeed, so-called “deepfake porn” is gaining popularity in the Internet’s darkest corners, and the rise of super-realistic-but-faked videos only adds to worries that deepfakes will increasingly be used to commit fraud, extortion and other digital crimes.

Another worrisome factor is that we now live in the age of “fake news,” and deepfake stands poised to erode the credibility of video in journalism, courtrooms and politics. For as long as there’s been video, the assumption has been clear: If it’s on video, it must be true. We implicitly trusted what we saw onscreen because there was no easy or accessible way to modify the scene.

This newfound ability to modify videos at will means the integrity of the very medium is no longer sacrosanct. When we look at photos, we often ask ourselves whether they were Photoshopped. Thanks to deepfakes, we’ll now do the same thing with videos.

If you thought fake news was enough of an online scourge now, imagine what happens when you add deepfake video into the mix. A political operative with an axe to grind can easily edit and release something that isn’t quite what it seems. Any sufficiently viral deepfake can spark misguided online debate and media coverage – or, worse, violence – before anyone even realizes it’s been doctored. Even if a video is later shown to be doctored, the fake version is already out there and could be shared by millions.

Frightening political implications

Deepfakes can also literally put words in people’s mouths, as BuzzFeed proved last year when it released a deepfake video of former U.S. President Barack Obama saying things he never actually said. The quality of this proof-of-concept project was stunningly high – enough to initially convince some viewers it was real, unaltered footage.

As Canada counts down to its 2019 federal election, and the US to its 2020 presidential race, rest assured all federal parties are acutely aware of the potential of deepfakes. Count on compromised videos showing up in your feed soon, if they haven’t already begun appearing.

Indeed, the White House last November released a badly doctored video of a testy exchange between CNN reporter Jim Acosta and a government aide. As amateur as that effort was, we’d be naive to believe governments and political parties’ multimedia personnel aren’t already honing their video editing – and deepfake – skills.

So can we catch these fakesters in the act?

As with Photoshop, your mileage may vary. Doctored photos inevitably contain artifacts, or evidence, of tampering. It might be a stray pixel, a ragged cut line, or an unnatural shadow, light source, or area of coloring that allows eagle-eyed folks to spot where a photo was modified.
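One classic trick those eagle-eyed folks use on photos is error level analysis: re-save the JPEG and compare it with the original, since regions pasted in from another image often recompress differently and light up in the difference map. Here’s a minimal sketch using the Pillow library, where the file names and the 15x brightness boost are purely illustrative assumptions:

```python
# Minimal error level analysis (ELA) sketch: pasted-in or retouched regions
# often recompress differently from the rest of a JPEG, so they stand out
# in the difference between the original and a freshly re-saved copy.
from PIL import Image, ImageChops, ImageEnhance

original = Image.open("suspect_photo.jpg").convert("RGB")  # hypothetical input
original.save("resaved.jpg", quality=90)                   # force one more JPEG pass
resaved = Image.open("resaved.jpg")

ela = ImageChops.difference(original, resaved)   # mostly dark...
ela = ImageEnhance.Brightness(ela).enhance(15)   # ...so exaggerate it
ela.save("ela_map.png")                          # edited areas tend to glow
```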

Skilled Photoshoppers may be adept at covering their tracks, but no modification is ever absolutely perfect or undetectable. The same logic applies to deepfake videos: while artificial intelligence-based detection tools – such as those used by the Gfycat short-video sharing site – are still in the early stages of development, they will improve over time. The US Defense Advanced Research Projects Agency (DARPA) has launched its Media Forensics program to develop anti-deepfake tools, and a research team at the State University of New York at Albany has built a technique to identify deepfakes without requiring massive computational resources.
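The Albany team’s approach reportedly keys off how rarely early deepfakes blink, because the models behind them are trained mostly on photos of open eyes. Below is a very rough sketch of that blink-frequency idea using nothing fancier than OpenCV’s stock face and eye detectors; the file name and the 1% cutoff are illustrative assumptions, and the published method relies on facial landmarks and a trained classifier rather than this crude count.

```python
# Rough blink-frequency check: count how often the open-eye detector loses
# the eyes inside a detected face. Real people blink every few seconds, so a
# clip whose subject almost never "closes" its eyes is worth a second look.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

video = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
face_frames = 0
closed_eye_frames = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue
    face_frames += 1
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 10)
    if len(eyes) == 0:            # open-eye detector found nothing: likely a blink
        closed_eye_frames += 1

video.release()
if face_frames:
    blink_ratio = closed_eye_frames / face_frames
    print(f"Frames with eyes apparently closed: {blink_ratio:.1%}")
    if blink_ratio < 0.01:        # 1% threshold is purely illustrative
        print("Suspiciously little blinking -- worth a closer look.")
```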

Early results are promising, but it’ll take years to build truly comprehensive protective tools. Until then, check where a video came from and look for corroborating evidence from other sources. Don’t trust a clip simply because it showed up on your Facebook or Instagram feed. Deepfakers who spread misinformation are counting on your naivete. Don’t give them the satisfaction.
