AI and Deepfakes – Part 1

In a collaboration between Academy Award-winning writer Jordan Peele (with MonkeyPaw Productions) and Buzzfeed, a video was created as a PSA about deepfakes. Peele, who is known for his vocal and physical impersonation of Obama on his sketch comedy show Key and Peele, performs this with the sole motive of informing viewers about the ease of production, concerns about the tech, and how it can influence perception if used with malignant intent. 

In 2018, when this was produced, this was awesome! 

And scary. But mostly awesome. But also scary.

So what exactly is a deepfake?

A contraction of 'deep learning' and 'fake', deepfakes are video and audio content whose purpose is to look and sound like reality. Think of it like 'Photoshopping' (or, to use the actual term, doctoring photos) but for film and audio. 

Most of us already know face morphs through Instagram and Snapchat filters that make us look like an anthropomorphic dog with a rather long tongue. Although those are not technically 'deepfakes', they use the same underlying principle. 

Deepfakes are one of those things that can be used for fun, to drive a point home, to influence a change of power in a democracy, or simply to create something for laughs. They're like social media: great when you do it for fun – scary when governments or bullies get involved. 

To give you perspective on how they work, let's talk about GANs. In a previous essay, AI and Creativity, we mentioned that GANs, or Generative Adversarial Networks, are essentially two competing AI models – one creates data and the other critiques it, each improving against the other based on specific input. Although they are used for editing, data mapping and post-production, making deepfakes is one of their naughtier use cases. With the right input, a GAN can actually generate entirely new images of human faces – of people who don't exist. 
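To make the "competing models" idea concrete, here is a deliberately tiny, NumPy-only sketch of the adversarial loop. This is an illustration under heavy assumptions, not how real deepfake systems are built: the "real data" is just a one-dimensional bell curve, and both the generator and discriminator are single linear models so the back-and-forth is visible.

```python
import numpy as np

# Toy GAN sketch: a generator tries to mimic "real" data (numbers drawn
# around 4.0), while a discriminator learns to tell real from generated.
# All names and values here are illustrative assumptions.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * noise + g_b
d_w, d_b = 0.1, 0.0   # discriminator: D(x) = sigmoid(d_w * x + d_b)
lr = 0.01

for step in range(5000):
    # --- Discriminator turn: push D(real) up, D(fake) down ---
    real = rng.normal(4.0, 1.25, size=32)
    fake = g_w * rng.normal(size=32) + g_b
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    # gradient ascent on log D(real) + log(1 - D(fake))
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # --- Generator turn: adjust output to fool the discriminator ---
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    # gradient ascent on log D(fake) (non-saturating generator loss)
    g_w += lr * np.mean((1 - p_fake) * d_w * z)
    g_b += lr * np.mean((1 - p_fake) * d_w)

# After training, the generator's output centre (g_b) drifts toward
# the real data's mean of 4.0 without ever seeing it directly.
print(f"generated data is now centred near {g_b:.2f}")
```

The same tug-of-war, scaled up from one number to millions of pixels and from linear models to deep networks, is what lets a GAN dream up faces that don't exist.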

So can anyone make deepfakes?

Yes. 

Regardless of your understanding of AI, coding, or even computers in general, it has become shockingly easy for anybody to make basic deepfakes. There are tools available today that let you make them live on websites (cloud-based), through apps downloaded to your device (stored natively), or via multi-step pipelines that combine deep learning algorithms, digital photo editing, skilled digital video capture and, more often than not, the right face shape and voice. That last example matters because the long-form process can produce shockingly real results, like this one where a Tom Cruise impersonator was transformed into the actor.

There are dedicated YouTube channels for deepfake celebrity content: Keanu Reeves – beloved actor and internet meme – transformed into Ajay Devgan in Singham, or this one of Tom Cruise as Iron Man. 

P.S. Searching for “Keanu Reeves Singham” on YouTube is a strange feeling. Try it.  

In one of our sessions at SCoRe, we got a chance to test out Reface – a freemium app that lets anyone map selfies or a friend's photo onto a templated celebrity movie scene or montage. Results vary, but Reface made the technique mainstream. And once anything goes mainstream, costs drop significantly and the focus shifts from 'worth the money' to 'good enough'. This likely means copycats will flood the market with similarly named apps in the coming months. 

You thought the #10YearChallenge from Facebook stole your facial data. 

You ain’t seen nothing yet!

So all deepfakes are cool, right?

No.

There is a dark side.

There are 3 primary reasons deepfakes get a bad reputation. 

Impersonation: Bill Posters and Daniel Howe, 2 digital artists, created short clips of Kim Kardashian, Mr. Zuckerberg and the Donald as teasers for their art installation SPECTRE.

(I urge you, dear reader, if you had to click just 3 links today, click the names in the previous sentence. Trust me.)

Although their goal was to promote an art project that aims to spark conversation about data privacy and deepfakes, among other things, the principle is solid. Anybody with motivation, editing software and decent audio-morphing tech could potentially release damaging content as the opinion of anyone and influence their 'followers'. 

Update: There was a bit of fallout from the news division of CBS about the uncredited and unauthorised use of their logo in Zuck's video, but that was it. 

Pornography: In an article by MIT Tech Review, the author writes about a Reddit user creating nonconsensual fake porn by using AI to morph faces onto existing adult-film performers. Wired.com estimates that about 50,000 clips of this nature exist and roughly 1,000 pornographic deepfake clips are uploaded every month. The problem is made exponentially worse by the nature of digital media, which can be shared, copied and morphed without ever tracing the OP, or Original Poster. Actors across geographies – including the US, the UK, India, China, Japan and Australia – have been targeted. 

Spread of ideas: Like the Obama video at the beginning of this essay, deepfakes can be used to impersonate people and spread ideas based on their ideologies. Manoj Tiwari, famed actor and sitting MP from New Delhi, had an interesting 2020. In early January, he released a video about a citizen inclusion bill that passed the Rajya Sabha. Standard political affair – if anything, it was a decent example of a good propaganda strategy. Later that month, an eerily similar video was released that used deepfake tech to attack the ruling party in Delhi.

According to a VICE report:

these deepfakes were distributed across 5,800 WhatsApp groups in the Delhi and NCR region, reaching approximately 15 million people.

So deepfakes are everywhere. It sucks. 

Well, not exactly.

In part 2 of this essay, we'll discuss legal protections, the future of deepfakes, and how the average internet user should approach them.

Stay connected.


The views and opinions published here belong to the author and do not necessarily reflect the views and opinions of the publisher.

Jai Bahal - Co-Founder @ NAVIC
Founded by collaborators Jai Bahal (Adfactors Digital) and Nishant Patel (DigiOsmosis), NAVIC aims to educate, inform and train students, professionals and entrepreneurs about the future of communications. NAVIC has collaborated with SCoRe for its flagship course: EVOLVE – A first of its kind curriculum that discusses hyper-relevant subjects like Meme Marketing, Trolls and Bots, AI in communications and more
