Utopias, Dystopias and Today's Technology

Diving Deep into Deepfakes: excuse me, that’s my face!

March 08, 2023

At times chilling, at times inspiring, guest Rajendra Shroff leads host Johannes deep into the world of deepfakes and their potential for both political manipulation and artistic expression. They discuss the devastating case of journalist Rana Ayyub, who fell victim to a deepfake sex film that featured her face without her consent, as cited in Danielle Citron's riveting TED talk on this subject: https://www.youtube.com/watch?v=pg5WtBjox-Y&t=1s

They discuss the implications of the deepfake inflection point at which it becomes costless for everyone to make such videos of anyone else. 
 
The conversation then shifts to the potential of deepfakes beyond their negative implications, including the use of the technology to bring deceased actors back to life and augment the work of human artists. Shroff highlights the power of deepfakes and other forms of artificial intelligence as new tools in the artist's toolkit, tools that can help to create new forms of expression, which themselves are deeply rooted in human experience. Tune in to hear about the darker and more optimistic aspects of deepfakes and their implications for our world.

Prescient fictional accounts of deepfakes were briefly discussed: 
Futurama episode: https://www.youtube.com/watch?v=NVs8_DzaRrE&t=15s 
Twenty Minutes into the Future (Max Headroom): https://www.youtube.com/watch?v=aZY-yQYVf38

Johannes Castner:

Hello and welcome. My name is Johannes and I'm the host of this show. Today I am here with Raj Shroff, who works in both business and academia. He is an AI consultant at Blue Artificial Intelligence, a strategic consulting firm specializing in AI and digital services, and he's also a university lecturer on AI and emerging tech for business. And you do this in Hong Kong, is that right?

Rajendra Shroff:

Well, yeah. So I'm actually based primarily in Canada, in Toronto, but projects and teaching engagements take me all over the world. I do some of this in Hong Kong, where I actually grew up. I also teach in Thailand, and here and there in Canada as well.

Johannes Castner:

Fantastic. So you are an internationalist like myself. Let me just dive right into the topic today: we're going to try to dive deep into deepfakes, and, uh, pun intended. Let's start with the foundational question: what are deepfakes?

Rajendra Shroff:

Yeah, that's a great place to start, and, I may have missed this earlier, but thank you for having me on. This is going to be a lot of fun; fantastic topic. So what are deepfakes? An accessible explanation is this: "deepfake" is just a combination of two words, using deep learning to create some kind of fakery. To take this to the next level of detail, it's basically using machine learning techniques to digitally alter, or even fabricate, images, videos, and audio. Now, if we let that definition sink in, we start to think, oh man, what is this going to be used for? And I'm sure we'll get into that. In terms of the social and business impact, it could be fantastic or frightening, depending on how people use this kind of technology.

Johannes Castner:

I see, I see. So before we get into the, well, not necessarily obvious, let's say perplexing ethics of this problem, let's go one step deeper: could you just tell me how these deepfakes work?

Rajendra Shroff:

Yeah, so the techniques that generate deepfakes are actually quite new. I believe a gentleman by the name of Ian Goodfellow developed the algorithm that is foundational for most modern deepfakes, called generative adversarial networks; they're a class of neural networks. So how is this technique used to create a fake video that is very realistic, or an image that looks like a real human being? If you're familiar with a lie detector test, that's a good analogy here. On the one side, you have a human being trying to trick a machine into accepting a lie as a truth. The human being is one side of the coin. On the other side is the actual lie detector machine: it's fed some truths, it learns what is true based on heart rate and tone, and it also learns to detect what is a lie. So the lie detector and the human go back and forth. You can imagine the human tries to convince the lie detector that he or she is telling the truth, and the lie detector is trying to say, okay, how do I catch the specific falsehood this person is telling? Now, generative adversarial networks, or GANs, have a similar structure if you think about it. There are actually two separate neural networks. One neural net is called the generator, which is trained to generate images, and the other is called the discriminator, which looks at the output the generator is creating and tries to determine: is this digitally altered, or is this a genuine image? And these two neural nets, kind of like the human and the machine, go back and forth; they're competing. Through the training process they receive feedback saying, okay, this image was convincing, it fooled the discriminator, or, this image was not convincing, you have to do a better job, Mr. Generator. The feedback loops all the way through the system, and each of these neural nets gets better over time. The generator gets better at generating realistic images, the discriminator gets better at detecting fakes, and the end result is a system that can pass off a very realistic-looking image that is actually fake. So in a nutshell, that's how the technique called GANs actually works under the hood.
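
For readers who want to see the back-and-forth Raj describes expressed in code, here is a minimal sketch of a GAN training loop in PyTorch. The tiny fully connected networks, the noise size, and the learning rates are illustrative assumptions for a toy 28x28 image task, not the architecture of any real deepfake system.

```python
import torch
import torch.nn as nn

# Toy generator: maps random noise to a flattened 28x28 "image".
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Toy discriminator: scores an image as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One round of the 'liar vs. lie detector' game on a batch of real images."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator (the lie detector): reward it for calling
    #    real images real and generated images fake.
    noise = torch.randn(batch, 64)
    fake_images = generator(noise).detach()  # freeze the generator this step
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator (the liar): reward it when its fakes fool the
    #    discriminator into outputting "real".
    noise = torch.randn(batch, 64)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Run `train_step` over many batches and both networks improve together, which is exactly the arms race the conversation returns to later.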

Johannes Castner:

Yeah, that's quite incredible. Specifically, it seems that this is a very hard task even for a human, and I guess it's beating humans, because humans are in fact fooled by this technology, aren't they?

Rajendra Shroff:

Yeah. And the thing that makes this a little frightening is that most people, when they view something on social media, whether it's an image or a video, don't view it in great detail. They're not forensic accountants. They're not saying, I'm going to devote 100% of my attention to this video on Twitter or TikTok. They just view it in passing and say, okay, I'm going to process what that video told me and move on. So they never actually ask themselves: is this real or fake? Because of that, these deepfakes don't have to be 100% convincing; they just have to convince enough people to say, okay, that might have been real.

Johannes Castner:

Yeah. And if you couple that with the human tendency to believe what they want to believe, you don't need to fool them very much; they already want to believe it. And so you could have a story about a politician, or a public figure of any sort, an actor, say, about something they might have done: attacked someone, or even pornography. The idea of pornography, I'll talk about that in a second; there's a very important case there. But in general, that will have tremendous effects on politics, will it not?

Rajendra Shroff:

Yep. So the effect you're alluding to is actually called confirmation bias, which is a very well-established psychological concept. It basically says that when we see something, we're more likely to believe it if it already conforms to our existing views and existing biases, and that's actually quite dangerous. Confirmation bias is one hell of a drug, because you could feed somebody something that doesn't even have to be video or a deepfake; it could just be text, an article full of logical holes. But if that article is structured in a way that caters to the person's existing political views or existing biases, it's more likely to be believed. So bringing this back to deepfakes, let's take a hypothetical example involving our favorite celebrity businessman, Elon Musk. He has a history of saying some outlandish things. He appeared on Joe Rogan's podcast smoking marijuana and got in trouble for that. So in the public imagination, people think, oh, this guy is a little bit, shall we say, creative, and he can say things that are slightly outside the bounds of acceptability for someone of his stature. Now, as deepfake tech becomes not just easier but more convincing, it is quite possible for somebody to release a fake video that depicts Elon Musk saying, on a public platform or a podcast or an interview: by the way, Tesla's accounting is totally fake, we just make up the numbers, because Wall Street is a scam, a game we just have to play. Now, for those of you who know something about how financial markets operate, right before a company's earnings are announced there's a quiet period; management is not allowed to comment, or not too much. So if this deepfake is released around the time of Tesla's earnings announcement, what do you think is going to happen to the stock price? This is an isolated company in a specific industry, and you're not really affecting people's lives, but even in this case it can do a lot of financial damage. Now take this same kind of situation and apply it to somebody's life, where a deepfake video is intended to attack somebody personally. What's that going to do, both to the individual and to...

Johannes Castner:

Okay, so let's do this via the example of Rana Ayyub, an Indian journalist who was involved in uncovering a lot of human rights violations. She was also, one has to be explicit about this here, of the Muslim faith, and therefore a pornographic film featuring her has particular cultural implications. I cannot claim to know what all of those implications are, but I can imagine they are quite severe. So she was a journalist, perhaps exposing some government crimes, and suddenly there was this sex film made with her face, using her face. So the implication here is that you can take someone's face and then make them do anything you want, right? That is one of the implications. And so here was this career woman, someone who had a job as a journalist; she still does, by the way, so that's reassuring. But you can imagine the implications. So let's continue what you were starting to say, near this example. What kind of incredible implications does this have for, for example, authoritarian regimes? I'm not sure how authoritarian India is, but you can see this being used in Russia, in many places, against journalists, for example. It's disastrous. Do you want to say anything about that?

Rajendra Shroff:

Yeah, there are a bunch of ways we can tackle this question. Obviously, if you target somebody like a journalist, whose job it is to foster public conversation and uncover truths that are sometimes inconvenient, what does that do to journalistic discourse? Isn't it the same as targeting somebody physically, with threats of physical harm? Is this just a new type of offensive weapon against these people? So we can get into that conversation. But in the case of Rana Ayyub, because of what happened to her, let's think about this in a general sense. This could happen to absolutely anyone. They don't have to be famous. This could be a jilted boyfriend making these kinds of videos about his ex-girlfriend, somebody who does not have the fame and influence of Rana Ayyub. How do they handle it? So what kind of safeguards can we put into place as a society, as technology companies, even as policymakers and regulators? For me, this is the general case that we really have to spend time thinking about as human beings, because what if this happens to somebody in our family? That's the frame we should approach it with. What kind of tools and safeguards do we put in place? Very hard question. If I had the answers, I'd be on the policy board of some organization. I do not pretend to have them.

Johannes Castner:

And the other problem with that is the speed at which you can even start thinking about the problem versus the speed at which things evolve politically and technologically. There's a mismatch there, I think, and I've discussed this already with a previous guest, Olivia Gambelin, a few episodes back.

Rajendra Shroff:

Yeah. I mean, if we were to brainstorm solutions, keeping in mind that technology moves at a completely different speed relative to society and regulation, let's look at these different areas and how they could put in tools to prevent this kind of harm. From a technology platform standpoint, Facebook already has policies that say deepfake videos are taken down as soon as they're reported, unless they are specifically marked as deepfake or satire. I believe most platforms have similar rules, and if not, they will. So this is one level of safeguard.

Johannes Castner:

Yeah, but then their detectors probably have to be better than the generators. So, going back to the GANs, as you described how this works: Facebook's detector, the lie detector, has to be better than the faker, whoever produced the video. And this basically becomes an arms race then?

Rajendra Shroff:

Yes, you're getting to the crux of the difficulty here. But where Facebook can come out on top is by leveraging its user base. When people report a video as misleading or fake, that's already another trigger Facebook can look at and act on. And if you take this to the next level: can we perhaps digitally watermark videos, so that the original is easy to identify as original and a fake fails the test? So from a technology standpoint, there are a lot of tools coming up as part of that arms race.
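
To make the watermark idea concrete, here is a toy sketch of an "original passes, fake fails" check in Python. This byte-level HMAC is an illustrative assumption, not how any named platform actually does it; real provenance schemes (cryptographic signing along the lines of C2PA, for example) are designed to be far more robust. Note that this naive version would also flag legitimate edits, which is exactly the concern Johannes raises next.

```python
import hashlib
import hmac

# Hypothetical signing key held by the publisher or platform.
SIGNING_KEY = b"publisher-secret-key"

def watermark(video_bytes: bytes) -> str:
    """Produce a tag the publisher attaches to the original video."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()

def is_original(video_bytes: bytes, tag: str) -> bool:
    """Any change to the bytes changes the digest, so a tampered copy fails."""
    return hmac.compare_digest(watermark(video_bytes), tag)

original = b"...raw video bytes..."
tag = watermark(original)

print(is_original(original, tag))          # True: the original passes
print(is_original(original + b"xx", tag))  # False: the altered copy fails
```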

Johannes Castner:

But I have to push back on this a little bit. As I'm editing a lot of videos, I now know that, you know, they're not the original video files, but they are genuine videos, right? For example, this conversation here: I will have to put our faces sometimes next to each other, sometimes not, and so on. I have to do a lot of editing. So it could be detected as having been altered when in fact it was genuine, right? So even regular editing could be misleadingly flagged when you do these sorts of stamps. Because if we stamp this video, I'm not going to put it out exactly as it was filmed; music will be cut into the beginning, and there are things that have to be done to it. So will that then look like it's a fake?

Rajendra Shroff:

I hope not, because from what I've heard of these tools, and I haven't looked at the technical detail, they're looking for specific types of digital alteration, maybe a lot of alteration around the face area, because that's what deepfake videos tend to focus on: superimposing somebody's face onto an existing video. I would love to actually sit down and go down this rabbit hole: how do these detection technologies work? How do these tools work? What are their shortfalls? But hopefully they get better, in line with the ability of deepfake creators to create very convincing videos.

Johannes Castner:

Yeah. Wow. Yeah.

Rajendra Shroff:

So that's the tech side, but what can, let's say, lawmakers and regulators do? I've heard lawyers refer to the legal framework as a blunt instrument: you can apply it, but if you apply it too liberally or too forcefully, you really quash any kind of technological innovation. The instinct of some governments to ban certain technologies, such as end-to-end encryption in text messaging, is probably creating more problems than it solves. So what can regulators do? Honestly, the pace is slow and there are so many different ideas.

Johannes Castner:

And also, the problem is that there are many different types of regulators, right? It's not just one. The US is very different from Hong Kong, for example; I mean, Hong Kong is now part of China. The Chinese regulators might feel very differently, and in fact they might be much more radical about it in some ways. They might have much more power to issue decrees as to what can and cannot be built, and they might have a very specific philosophy. And I'm not actually sure about this, so, do you know something about that, since you spend time in Hong Kong? How does the Chinese government feel...

Rajendra Shroff:

About this specific area? I'm not entirely sure. And to your point, every legal framework has a different approach, but what most jurisdictions have in common are laws against identity theft. So it's feasible to apply these laws that protect against identity theft to the realm of deepfakes. At least then there's some kind of enforcement that is backed by existing legal frameworks.

Johannes Castner:

Well, as long as the government itself isn't doing it. What if the government is? I can see, for example, the Chinese government wanting to discredit someone who is speaking out, whistleblowers and so on; then no one is going to stop the Chinese government from putting out propaganda, essentially.

Rajendra Shroff:

Well, nothing can stop any government from putting out propaganda. So then we can look at the third pillar: what kind of agency can be given to individuals to really take control of their image online? Currently, if somebody uploads a video of you to a platform, you have to go to the platform, contact them, and request that the video be taken down. This can be quite a difficult process. Sometimes a platform can say no, especially if it's not legally required to comply. So, bringing this back to the regulation aspect, it makes sense for platforms in major jurisdictions to be compelled to at least look into these things. When a user approaches, let's say, YouTube saying, hey, there's a video of me posted online, it's my identity, here's proof that it's actually me, I want this taken down: if there's a clear framework and a clear process to handle these kinds of takedowns before the video spreads too far, that's already a step in the right direction. So we have legal frameworks, we have technology tools that detect deepfakes, and we also give some power to the users. It may not prevent deepfakes from being made, especially malicious ones, but it could prevent their spread, and it could give some power and some relief to the victims: yes, I can at least go and do something.

Johannes Castner:

But in authoritarian regimes that are big enough and technologically advanced enough, such as China, they have their own social media platforms, perhaps at least partially controlled by the government. In those scenarios, it seems there is no recourse at all, and propaganda can be taken to a level that has never been seen before.

Rajendra Shroff:

Well, that's already happening in many countries through conventional media. If somebody writes a hit piece about somebody, even if it's proven to be, not fabricated, but at least exaggerated, these tools are already being used outside of deepfake tech. So we still have to rely on existing social tools to figure out how we deal with this kind of information when we're hit with it. And in terms of how people in specific countries can deal with it, that's a tough question. I don't really have an answer. If we look at...

Johannes Castner:

Then, okay, the big elephant in the room: we're talking about YouTube and so on, but there are these pornography platforms. I have to talk about this, because, I think it was a digital tracing service, I'll have a link to it in the description, that found that 96% of deepfakes currently are pornography, sex films basically. And these involve celebrities, people who have actually put a lot of value on their faces. These are actors and actresses, particularly actresses, I would say, in a lot of cases, whose face has a lot of value to them, because it's their recognition, it's their brand, a particularly important element of their career. And their face is now taken and put into a sex film that they never made or consented to. And this is 96% of the action at the moment when we're talking about deepfakes. So this is a huge elephant. And this is not YouTube and Facebook, the big corporations that are not in the porn business; most of it is on companies like, I think it's called PornHub. They basically don't have these rules, because in those cases you can find these videos of celebrities.

Rajendra Shroff:

Okay, so how do we combat this? I spent some time thinking about this, because I know we had this conversation even before the recording, but let's look back a few years: how did the movie industry and the music industry combat piracy? Because when we think about somebody's face, although it's a very personal, intimate topic to that person, it's basically protected by intellectual property, by trademark. Celebrities' faces are protected, valuable assets. So is music. So are movies. What did the entertainment industry do? In the case of Napster, first they went after the platform, and it worked; they shut it down. In the case of people torrenting movies on the Pirate Bay, for instance, they went after the specific people who were illegally uploading the content. So when it comes to deepfake pornography, the celebrities and the companies that back them can maybe take a similar approach: first approach the platform and say, take this down, this is a violation of intellectual property, a violation of a trademark asset, which is this person's face; this is actually identity theft. If you do not comply, we take enforcement action against the platform, and at the same time go after the user who uploaded the video in the first place. So the playbook, in my mind, was actually established when people were downloading music and movies. Are people going to approach malicious or pornographic deepfakes in the same way? It's possible. If they take another approach, it will also be interesting to see how it plays out. But nobody's going to take this sitting down, least of all the victims who are put into these movies without their consent.

Johannes Castner:

Yeah. Wow. But then, in a globalized world, you could be on some island, in some place where the jurisdictions can't get you, and maybe that's where they'll all be coming from. And because of the speed at which you can create them, one person can create massive amounts of them with these tools, is that not right? So it's a very, very tricky problem.

Rajendra Shroff:

I mean, I wish I could just come in and say, here's a solution. But there isn't one, because, like you said, the tech itself is accessible and it's getting better. So does law enforcement rely on a few high-profile cases and busts? How do you minimize the real psychological pain from this going forward? Not a lot of answers, but my view, just going back to it: you need laws, the legal framework, working together with improving tech tools for detection, and you need to give people the power to take control of their own image online, giving them the tools and the legal process to say, hey, this is my face, I want sole control of this, please take it down.

Johannes Castner:

Yeah. But then, okay, now we have to... again, on this show we've talked a lot about the blockchain and decentralization, and one of the objections to using the blockchain, by Tim Berners-Lee, was, for one, privacy, and the other is the permanency of your data. So if you have a blockchain and you have these kinds of pornographic fake movies, you can combine the two technologies to create something that can never be taken down. So there is this interaction between these technologies that we should also think about. I mean, the problems multiply once you look at them; you get a sort of sense of vertigo about where we are and what is happening technologically. But let's also talk a bit about what can be done with these technologies that is not destructive, because it is true that there is a sensationalism about this as well. We should also talk about the opportunities. Are there any opportunities in this? Can we do something good with deepfakes?

Rajendra Shroff:

We definitely can, and to an extent we already are. If we think about the positive applications of deepfakes, of digitally altered and digitally created images and videos, there's actually a lot of potential just in entertainment. You could have, let's say, a celebrity who is perhaps aging but wants to lend his image and his reputation to a new project; it could be as simple as filming a commercial. Or an actor who has recently died can reappear in a movie, maybe not for the entire movie, but as a guest piece. So there's a lot of potential in the entertainment sector, and it's just fun to create videos that you can share with your friends. It's playful, it's social, it's genuinely creative content. If you look up examples on YouTube, there are so many. There was a YouTube short with a deepfake Tom Cruise announcing his new relationship with Paris Hilton. Obviously the two of them were not dating; it would be fantastic entertainment for the tabloids if they were. But this kind of video generates a lot of interest, and it's good for the careers of these two people as well. If you watch America's Got Talent, the singing and music show, they actually brought Elvis back to life and had him do a live performance on the show. And the tech was brilliant: you had some guy singing in the style of Elvis, and they were filming him, casting his image on the big screen, and digitally altering it in real time to look like a younger Elvis. It was fantastic entertainment. A huge number of applications. Another area I'm quite bullish about is marketing. Say you're starting a smaller enterprise; I've worked with quite a few small businesses, and marketing is expensive. But with deepfakes, what you have are automatically generated models that look human, that can be used in ads, for advertising, for marketing campaigns, at the touch of a button. For something like $20 a month I can subscribe to a service that allows me to generate as many faces as I would like and use them pretty much as I see fit. Now, these faces are not of real people; that's the distinction here, so there are very few ethical issues. But think about what generated images and videos can do for a small or medium-sized company trying to market itself against bigger companies that have much bigger budgets and much bigger global reach and awareness. It kind of evens the playing field. For content creators, whether you're a marketer or a YouTuber, it just boosts your creativity by like 1000%. With deepfakes you can generate many variations of a piece of art, or a piece of content, an image or a video, and then just fine-tune it. So the time you spend goes down, because you leverage these technology tools, and the money you spend also goes down. So overall, from a business standpoint, I'm actually quite bullish on deepfakes being applied in a good way.

Johannes Castner:

That's interesting. It also has some implications for what art will be, in a way, and that probably hasn't been discussed very often yet. The way I think of art is as a collaboration between different humans, and now you can create movies where you don't need humans, and that may take something out. There is a lot of psychology in art, right? I see it that way; art is a human thing, and it starts becoming more and more cyborg, if you will. Could you speak a bit about that? Why should we still think of it as art? But I also see, I think, a missing piece here that a lot of people who speak about this topic are not considering, and that is that it is still the human being who is the director, so to speak, and maybe there are multiple of them as well. So it's not exclusive, right? Having a bunch of deepfakes doesn't preclude you from also having real actors. Is that right?

Rajendra Shroff:

Yeah, absolutely. There has to be a balance, because as consumers of art, whether it's movies or actual artwork, it depends on what people find compelling, what people find attractive. Would people pay top dollar to watch an entire movie that is deepfake-generated or completely CGI? Perhaps they would, but would it attract everybody? Especially when we look at something like art, I think the cat's out of the bag in terms of image generation.

Johannes Castner:

Yeah, you already have that. But there's a dimension to art... the way you just described it is a sort of market-based analysis, but there's another dimension to art, to me personally. There's a lot of music that I love that I think is of high artistic value, maybe not of high popular value. For example, I myself am not a fan of Britney Spears. I don't dislike her as a person, I don't want to say anything negative about her, but I don't like the music; I cannot even see what attracts people to it. And of course many more people feel the opposite of how I do. But I think you could say that other music has a high artistic quality, and people who know a bit about this will agree, yet it sells less. So I feel it's not really true to say that good art is the kind of art that people validate; popularity is not a good validator for what is good art, in essence. And so I'm wondering, can you make something of high artistic value that involves deepfakes? Has something like that already been done, where artists are using them in new ways to express something new, or to express maybe the spirit of our times?

Rajendra Shroff:

Yeah, let's talk about an example that's quite recent. There's a well-documented case of a digital art show; it's a contest, so you create a piece of digital art, you enter it, and the best piece wins. Pretty straightforward; they've had this show for many years. It turns out one of the digital artists used a tool called Midjourney to create a piece of digital art, and it won first prize. And this upset many people within the community; it actually upset everyone. If we dig into what actually happened: sure, the piece looked great, but the artist himself used Midjourney with a text-based prompt, and the tool spat out a digital piece of art. What the human artist did after that was upscale the image and edit it on his own. But still, even if you grant that the human did some work, the genesis of the piece was a machine. So in my view, when it comes to digital art, the cat's out of the bag. There's still a place in the market for human artists, because if you know something is created by...

Johannes Castner:

Yeah, of course.

Rajendra Shroff:

...a digital tool, you can't really connect to it emotionally. You might say, hey, that looks fantastic, and it might, but there's no emotional...

Johannes Castner:

No, because it isn't an expression, right? Because it isn't an expression of experience. Art to me is always associated with an expression of a human being, of human experience, and that's the whole point of it, right? If you take that out... for example, AI music is absolutely unattractive to me, I have to say. I've listened to a bunch of it; at some point I was interested in how good it is at mimicking, and it isn't quite that good yet. Maybe they can perfect it at some point, and it'll really sound like a new Mozart composition or something like that; it's possible. But it will not have the same effect, simply because it isn't a true expression from a human being. But can you, as a human being, express yourself in new ways? That's really the question I'm after: can you have human expressions that are truly human but that make use of these deepfakes as a new type of brush? Can you go that far? Can you say, I'm looking at this thing, and at the heart of it, it's a human expression, but it uses these tools as a part, not necessarily as the whole, of it? So there's one question of generating the whole thing using AI, and another of using AI as part of your toolkit, if you will, to create something completely new that is still a human expression.

Rajendra Shroff:

Now, this is definitely going to happen, and it happened even before this whole deepfake thing took off. I remember in 2019 I went to a digital art show in New York City, and it was basically a room, floor to ceiling, with monitors, and there was very abstract art in moving form generated on screen, a bunch of images mishmashed together, a lot of flowing colors. And it turned out that the piece was created in part using machine learning tools, but really the conductor, the architect of the whole thing, was still a human being. It was a fun piece, it was a good experience, and everybody was aware that it was part tech, mostly human. This kind of thing is just going to keep going. If we think about the way music has evolved, I'm sure a purist would have been very much against the electric guitar. A purist jazz musician: what is this electric thing? What is this pedal that goes blah, blah, blah? This makes no sense.

Johannes Castner:

They gave, they gave him hell for ruining...

Rajendra Shroff:

...the music, but...

Johannes Castner:

Still, yeah, exactly. Well, I mean, they gave Herbie Hancock a lot of hell for doing something funky. To have a funky beat as part of jazz was considered not jazz; it can't be jazz. So you can see... yeah, exactly, that's the sort of thing I'm getting at. Is there something you've already seen where that sort of spirit you just described has been applied, where deepfakes in particular have been used in this sort of, you could say, avant-garde way?

Rajendra Shroff:

There are plenty of examples of digital art, but that perfect balance you're talking about, I have not come across a famous example yet. This is coming, though, as the tech becomes more accessible and more people learn how to use it. An existing artist, maybe an established one, might have this aha moment: okay, I can use some of this, but still retain my style and still add something on top of it that I have never done before.

Johannes Castner:

Well, I challenge every viewer who sees this video and who is an artist to take us up on this. It doesn't seem to exist yet, so here's your Andy Warhol moment, I suppose. It's like, well, he broke the rules, right? He made art in a way that... exactly, he redefined what art can be, and in a way that could be done here. I think there's a huge opportunity with deepfakes to basically prove wrong the people who say you can't use AI to make real art. And so, to go back to the business opportunities: with all of these business opportunities you described, the ability to make things cheaper. Let's just go with the cheaper marketing one. I surely could use it, you could use it, we could use it and benefit from it. But at the same time, it does cut out jobs on the other side, of an actor, right? So you're taking from one person and giving to another. Isn't that what's happening there? Or is it really a...

Rajendra Shroff:

In an ideal world, no, because if we look at the short and medium term, the biggest users of deepfake tech for generating marketing material are companies and even entrepreneurs who cannot afford to hire actors anyway. So you're actually addressing a new part of the market that could not afford a human model. And even when we look at big companies, the argument is, hey, if they can save money they'll do anything: they'll offshore, they'll cut costs, so won't they stop hiring actors? But if we go back to the human emotion, the human connection part of it: when we look at a great commercial or billboard, if there's a real human being there, we know it's real, it's moving like a real person, maybe it's even a celebrity we recognize. That connection you make to your customer is, at the moment, not possible using generated images and videos.

Johannes Castner:

And is this boundary, the key phrase being "at the moment", going to shift very quickly?

Rajendra Shroff:

Yes, yes. So I guess in the long run, these deepfake generators for marketing materials are just going to be another part of the marketer's toolkit, something they can use to prototype a bunch of different ideas and designs. We're actually going to see product managers use this as well, to create new styles for chairs, for example, instead of designing each specific iteration from scratch. It's going to be part of the toolkit; it's going to turbocharge creativity. But with every new tech advancement that has come out until now, whether it's the internet or social media, we've always said it's going to take the humanity out of society. So far, that has not really happened, and I'm...

Johannes Castner:

I'm not sure; I'm skeptical about whether or not it has. I think it has in some ways already, because I think we have a quite sick society when it comes to our relationship to nature, for example. Or, you know, just the way you go to an airport: you can see the way the airport looks, the way it's designed. Is that for humans?

Rajendra Shroff:

The evolution of architecture, from your classic Baroque kinds of designs to the streamlined, almost futuristic designs of today, that's a fantastic topic. I mean, there's definitely a point you're making there, and I actually take the point. But if you look at how humanity has evolved, it has always been through human connection. So can technology change that, or completely sever that connection, in one or two generations? I'm not sure. I certainly hope not. But then...

Johannes Castner:

Well, you have deepfakes you can interact with, and on top of that you have the metaverse. You can imagine you are interacting with all your favorite fake people, and that's potentially all your interaction, at the end of it. That would be a possible implication, quite a dystopian one.

Rajendra Shroff:

Yeah, if we think about the dystopian aspect of it. Actually, in one of your previous episodes you likened existence in the metaverse to living in a digital dollhouse, and I remember that term because I absolutely love it. If you look at deepfakes and project them into their most fantastic, or even dystopian, future, people can generate very personalized entertainment for themselves. It could be in the form of movies or TV shows. Now add the layer of virtual reality: as headsets become more affordable, you can exist in your own world where you control the environment, where everything is hyper-personalized to you, and where a real-world relationship or a real-world interaction is both unenjoyable and stressful, because you have this other digital existence that is just comfortable. As we move towards the technical capability where this is possible at scale, we're going to have this conversation about what we have unleashed. But thankfully, we're not there yet. And I guess it's good that deepfakes are getting a lot of bad press in terms of their malicious applications, because in a way I think it's good that we're overreacting. It's actually...

Johannes Castner:

...good, because it should speed up the process of slowing things down. I think a little bit of a freakout is warranted here; I am absolutely for the freakout, because it is disturbing. And also, if you combine the metaverse, as you just said, with deepfakes, and you consider that a lot of young men might have problems with dating young women, there's this discussion we have to address: now let's say they can have something that feels like a real romantic and sexual relationship with a completely fake person of their choice. What kind of results will that have? That kind of consideration gives me vertigo. What will that do to society? I think pornography has already done a lot of damage, and now we're talking about the metaverse, where you can maybe have sexual experiences in a depersonalized way. And now we're adding the dimension that you can combine the capabilities of deepfakes with ChatGPT, for example, and the generation of voices, anything you want, from whatever a ChatGPT-like system generates; it probably won't be ChatGPT itself, but in any case, you combine these two ideas, and suddenly you can have a relationship with a completely made-up person that is exactly how you want them to be, that has all the qualities you want them to have. And, I mean, of course I don't subscribe to this; I don't think I would fall prey to it, but I think a lot of people would. And suddenly you don't have the need for personal relationships anymore. Human relationships.

Rajendra Shroff:

Yep.

Johannes Castner:

And that frightens me. So I think, when you think of the implications of that, and you follow them to their logical conclusion, yeah, I think there is some reason to be...

Rajendra Shroff:

It's even more frightening, because I'm one of the people who holds the contrarian view that we're actually in a population crisis globally, with aging societies and falling birth rates; we shouldn't be worried about overpopulation. I mean, specific regions are overpopulated, but as a whole, we're heading to a place where we're not creating enough new people to replace the existing ones. And what does that do to human productivity? What does that do to economic productivity? Countries like Japan are already grappling with this right now. So now add this utopian digital existence where you can exist on your own terms, where you don't even have to go through the adversity of making it in life.

Johannes Castner:

I find it dystopian, frankly. Yes, maybe some people think of it as a utopia.

Rajendra Shroff:

You're describing a Ready Player One kind of scenario, where you're completely entertained online. This is actually horrible. So how do we as a society push back against this? We haven't had this conversation enough, and I believe that especially as VR becomes more accessible, we're going to have this conversation.

Johannes Castner:

Yes. Well, I hope we see... it would be good to have it before this becomes accessible, because once it becomes accessible, we will be running behind one problem after another, and the problems we see arise will have enormous costs. So we should avoid them ahead of time if we can, think about it now, and start the conversation now, which is what I'm trying to do here. And thank you very much for taking me up on this as well. I think what we're doing here is something a little bit prescient; I looked on YouTube and I couldn't find this kind of discussion yet. And I think it is very important that we have it now. The implications are really tremendous, and the solutions are hard, because the sheer amount of money that goes into developing more tech, which in a way makes these problems come faster, is spent much faster than the money being spent on trying to solve the problems that will come from it. Is that not also a dimension that you see? I mean, the money all goes into the things you said, marketing improvements; it comes from these areas. But let's say we do a utilitarian calculation and we want to maximize the benefits and minimize the costs of this. Then I would say that currently, in terms of the money we're throwing at the problem, and I'm talking about we as a collective, we're focusing on maximizing the benefits, not on minimizing the costs. And I'm generally in favor of this, because I'm a technologically positive person; I believe we should do good things in the world and maximize the good we can create. But in this particular case, I have the feeling that the costs are pretty tremendous, and we haven't found a financial way to address them; basically, we haven't spent the money on minimizing them. What do you say to that?

Rajendra Shroff:

I say that you raise a really valid point. We should think about this not only from a business and financial cost standpoint, but also in terms of social cost. How do we develop these systems and tools so that they ultimately improve human lives? Right now we're creating and investing in these tools to maximize engagement online, to maximize usage, and to maximize the sensational aspect of these things, the entertainment. Can this be used to actually improve our daily existence? It's hard to put a dollar value on that, but in terms of the future of at least our children's generation, it's something we really must think about.

Johannes Castner:

Yeah, I agree, I agree. I mean, I even think that, you know, our children... by the time my children are grown, this problem will... I don't know, it's moving so fast, that's the thing. I think it might even affect my generation still, and yours; it's not even that far in the future, as I'm seeing it coming. Isn't that right? So how fast do you think all of this... let's call it an inflection point, as some people do; there's this term, the deepfake inflection point. Maybe you want to explain that term for a second. Do you know this term, the deepfake inflection point?

Rajendra Shroff:

Go for it. Let's talk about it. What do you think it is?

Johannes Castner:

So the inflection point of deepfakes is the point at which it is as easy to make a video of a celebrity as it is to make a video of yourself on your cell phone. You can imagine you just point the cell phone at yourself and replace your face with that of a celebrity. Then you can take, well, I guess now we call them selfies; I don't know what we will call them then. But you can project any other person, not just celebrities. Now, this has enormous consequences, right? We have the case of Rana Ayyub, which, by the way, I want to cite where I know this from: a TED talk by Danielle Citron. It was a very riveting one; I will have it in the description. But to come back to it: anyone can now do this to anyone. So one implication of this: we had the case of powerful people applying it to someone who is less powerful but has a profession, like journalism, where a lot rests on your ability to be taken seriously, and if there's a sex tape and so on, it can have huge consequences. We have that case, and now we can also think of a case where maybe a disgruntled employee applies this to their employer. It can go both ways, so you could say that in some ways it levels the playing field, and it's going to be extremely easy to do these things. I don't even know that the term revenge porn is the right one, because it's not necessarily revenge; it can also be suppressive, and it can be used by authoritarian regimes to suppress from the top down. So you could have a hierarchical element to it in places like China, where they might make it impossible for you to do it, but the government will do it all the time. And in our society it will be from the bottom up: anyone can do it against anyone. And then, of course, the logical conclusion is that once we all know this, maybe the entire internet will be completely worthless from the perspective of information sharing. If you consider this whole chain of effects, you can see that at the end of it, the internet, which was built to inform us, will suddenly be completely worthless from that perspective. It will be an entertainment place, but not one of valid information, because you cannot distinguish between the real and the fake.

Rajendra Shroff:

Okay, so you raised some interesting points. One is deluging the internet with generated content that, despite being high quality, has very little informational relevance; that's an important point. And you're also talking about essentially stealing people's identities to produce content. Now, when it comes to flooding the airwaves, or the internet, with nonsense, we already see that today in the form of spam emails and bot-generated social media posts; this is just the next generation of that. So what are we doing today to deal with the current problems? The tech tools can ring-fence spam, because spam detection models are becoming very good; we actually have machine learning driving spam detection models and training them to get better and better over time. When it comes to bot-generated social media posts or comments, at the end of the day, if something cannot captivate human attention repeatedly for long periods of time, it fades into irrelevance. Now, there's an issue: even if you have a Twitter account, they say that if you have fewer than 1,000 followers, you're just screaming into the void; nobody's looking at you or paying attention to you. And even in the current world, where there are so many independent content creators, you still see that the established creators, the reputable voices, hold a lot of sway; they command a lot of attention. So in an optimistic scenario, even when deepfakes can generate so much content, much faster than human beings, and can generate a lot of fakery, hopefully the human brain filter that we have, which I like to optimistically call the constant search for relevant information, the constant search for something meaningful, could quite organically filter out all of this content and focus on what essentially makes the world better, makes our lives more meaningful. So that's my optimistic take on it.
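
For readers curious what machine-learning-driven spam detection looks like at its simplest, here is a minimal sketch of a text classifier in Python with scikit-learn. The four training messages are made up for illustration; a real spam filter trains on millions of labeled messages and far richer signals than word frequencies.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: 1 = spam, 0 = legitimate.
messages = [
    "Win a free iPhone now, click here",
    "Limited offer!!! Claim your prize",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review my draft before Friday?",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each message into word-weight vectors; logistic regression
# learns which words push a message toward "spam".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["Claim your free prize now"]))  # likely [1], i.e. spam
```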

Johannes Castner:

I'm a little bit of a pessimist there; I'm very pessimistic on that, I have to say. I mean, there's a comedian in America called Bill Maher, and he has pointed out the number of followers that Greta Thunberg, for example, has, compared to the number of followers of some model, I don't remember her name, who travels around the world in a private jet and so on. And this goes on: we've seen it in the election of Donald Trump and of various, let's say, not-so-democratic politicians around the world. People have been duped quite a lot already, without deepfakes. And, as we were saying before, there's the confirmation bias that we have. So I'm worried that confirmation bias is a stronger force than the filter, for many people; not for everyone, I think there's diversity in that. But there's the visceral experience of a video: I feel you can already feel quite immersed in that, and then you can add the third dimension in the metaverse and feel even more immersed. This visceral thing seems to be instinctual, something that evolved in an environment where what you can see, what you can feel, what you can experience is not fakeable. I often see it from this evolutionary perspective. In the end, we are animals.

Rajendra Shroff:

So fortunately or unfortunately, deepfake videos are not at that level yet. But as they slowly get to that stage, we're going to see these small, almost public experiments in how captivated people can be by this content. Since we don't have the answer now, it's fairly important that we pay attention to how we react to it as a society when this does start to happen. Could there be a specific, very well-timed deepfake, released as a lie or as disinformation, that could flip an election, the right video released at the right time during a close election? It's possible. I suspect that in terms of how regulation and enforcement respond to this, that response will only happen after the event, whatever the event may be. So it's going to be quite interesting to see how this plays out, and at what speed, because you mentioned the inflection point at which it becomes very easy to create this content. Will society react before this inflection point, or after, when it's essentially playing catch-up?

Johannes Castner:

Yeah, exactly. And then, you know, this inflection point doesn't only have these grand political implications, potentially helping the process of making contemporary political systems more and more irrelevant in some ways, and also dangerously incredible, not credible. That's already a very big one, I think. But aside from this very big one, there is this other big one, and maybe you could speak to that also: just the peer-to-peer influence, if you will. Think about your friends: if they have an argument with you, who knows, they might overreact, particularly if they're in their teens or something, and they might regret it later, but it's on the blockchain. The information is stuck there now. So there's this bottom-up level, if you will: what everyday people can do to each other now with, you know, these sex films, for example. And then we add to that certain religious or other sentiments that make sex films particularly taboo and particularly dramatic. If we add all of this up, and we can all do it to each other, the first implication is that we will do it to each other, maybe in selective ways, and a lot of things can break, right? And then there might be this next step, where we make the entire informational internet and all of that completely irrelevant. Those are separate steps. I'm wondering if you could speak in turn to each.

Rajendra Shroff:

Yeah, so there were a couple of points there I can pick up on. In terms of it becoming a free-for-all where everybody publicly damages everybody: this has been possible for quite some time, even before the internet, where if someone tarnished your reputation, you may not have had the agency to fix it, because you did not have the communication capability that the person hurting you did. There's actually a very interesting, fantastic book called Heart of Darkness, by, I believe, Joseph Conrad, where he talks about how the veneer of civilization that keeps us from turning on each other and becoming animals is kept in place because we are still worried about the opinions our neighbors have of us. And that book was written over a century ago. This worry that people have, that they don't want to be judged poorly by their neighbors or their peers, still generally holds in a digital age, because how many people are truly anonymous online? Very few, actually. And nobody wants that coming back to harm them. So there might still be this social mechanism that keeps social decorum in place. Now, you also mentioned the blockchain, where stuff can be written quite permanently; it's stored even if you regret it five minutes later. Yeah.

Johannes Castner:

And how do you take it down? Exactly. And people have hot heads, you know? You can't, right? It's there forever; you can't take it down. So the thing is, you see, when you're speaking of these long-term sentiments, I agree with you that people probably behave that way when they think about it. But in the heat of the moment, somebody said something that enraged you, and you went and made a sex film with them in it and put it on the blockchain, thinking, ah, that will get them. In that very moment, we are also impulsive. Humans have impulsive instincts, and we do some things out of impulse, not out of deliberation. There's actually an excellent book on that by Daniel Kahneman, Thinking, Fast and Slow, that addresses some of this and describes our psychology in some ways. And it will be so easy to do, it will be very fast and quick, and then you can upload it to a place where you can never take it down. And of course these are the things you will do in a rage. That's what I fear.

Rajendra Shroff:

So the reason I don't worry so much about the blockchain aspect of things is because, if you look, very few blockchains are actually decentralized. There's a large element of centralization to the large blockchains that are trying to gain a lot of users and a lot of utility. This includes blockchains that are trying to build out file sharing or file storage. And if we think about it, with torrents we already had decentralized file storage well before blockchain: you could upload a video of anything, or any content, and it's stored by whoever downloads it, and they can choose to share it. How do you prevent the worst excesses from occurring there? Law enforcement can go after the individual people that are sharing this content. With a blockchain, especially if it's centralized, you can go after whatever that central organization is. So if we ever have true decentralization of content sharing, then what you're saying becomes an issue. But in the world we have today, it's very hard. We talked about the blockchain trilemma: blockchains can be decentralized, scalable, and secure; you can have two of those three, but you can't have all of them. In order to be scalable and attract a lot of users, throughput, and transaction capacity, most blockchains give up the decentralization aspect to some degree, which is kind of what we're seeing play out. Really, the only large blockchain that is arguably decentralized is Bitcoin, and we're definitely not using that to upload videos or audio. So it's definitely something to worry about, but social mechanisms count for something, and technological limitations may prevent some of the really bad things from happening.
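To make the torrent comparison concrete, here is a minimal sketch of content addressing, the mechanism that makes decentralized file sharing so hard to police. It loosely mirrors how BitTorrent derives an identifier from piece hashes; the chunk size and the "video" bytes are illustrative stand-ins, and real torrents use a bencoded info dictionary rather than this simplified scheme.

```python
# Sketch of content addressing: a file is identified by a hash of its bytes,
# so every peer that downloads it can re-serve it under the same identifier.
# Removal means chasing each individual holder, as discussed above.
import hashlib

def content_id(data: bytes, chunk_size: int = 256 * 1024) -> str:
    """Hash fixed-size chunks, then hash the chunk hashes (a simplified
    version of how BitTorrent builds piece hashes into an info hash)."""
    piece_hashes = [
        hashlib.sha256(data[i:i + chunk_size]).digest()
        for i in range(0, len(data), chunk_size)
    ]
    return hashlib.sha256(b"".join(piece_hashes)).hexdigest()

video = b"...pretend these are the bytes of an uploaded video..."
print(content_id(video))
# Any peer holding identical bytes derives the identical ID: the content
# itself, not any server, is the address, which is why takedown is so hard.
```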

Johannes Castner:

Okay, well, that's reassuring. Then what about the military? It's impossible to ignore this element as well. There is already a deepfake of Zelensky asking his Ukrainian soldiers to surrender and lay down their weapons. And he was able to get out of this; he was able to go on his own network, his own Telegram, and straighten that out. But to what degree it affected them, in the day or so that it was up, I'm not sure. You can see that this is already a problem; fake news and all of this is a matter of warfare already. And also, you know, foreign countries trying to affect elections in other countries. It started with smaller countries; at first it was even part of British military operations, in fact, to affect elections, and then it was essentially turned on our own electorates. There's a film about this. What is it? Now I can't think of it; The Great Hack, I believe, the movie about the Cambridge Analytica case. Basically, it was explaining that a lot of their methodology and a lot of their tricks had come out of the British military complex. So it was first used against small nations where the British Army, not the British people, wanted certain electoral outcomes; it was used against those smaller countries. And then it was turned inward. It was potentially applied in Brexit; whether that is true or not is not clear. But certainly it was applied in the election of Donald Trump, and it probably put him over the top. People still say, oh, it didn't happen, nobody influenced the elections, and so on. But there is evidence that Cambridge Analytica, potentially with Russian money, who knows, did in fact sway an election; there's some evidence to that effect, and they certainly felt like they did it. They said, well, we won this election, we put him over the top. And they did very careful analysis of Facebook data and so on. So now you add this deepfake technology to the mix, and you give anyone, including, say, internet troll farms funded by Russia with maybe 300 people working there, the ability to create all sorts of American characters, potentially based on people who are actually living in America, and make them say all sorts of things, right? So it could become this sort of mass cyber warfare; that is certainly a big application here. And that's also frightening, because again, each side can do the same, and what we're doing is basically destroying each other's societies and their political systems. Even the division in America, the polarization, can be seen as an effect of previous generations of this kind of technology, when deepfakes were not yet available. But if you add this deepfake dimension to it and you make it very easy, is that right?

Rajendra Shroff:

Yeah. So information warfare has been around forever; it's been evolving, with new tools being added to that toolkit, and deepfake tech could just be the latest tool in it. Now, is there going to be pushback against this? Are people going to say we need to do something about this, as opposed to the Cambridge Analytica episode? Because videos are so relatable, everybody tends to trust what they see. One way to look at this: if there's a high-profile deepfake video that eventually turns out to be fake, that erosion of trust, that generation of anger, because nobody likes to be duped, might create enough of a blowback, especially for the platforms, to get better at identifying and removing this kind of content. When the public found out about Cambridge Analytica, first of all, most of them didn't understand it. It's very hard to think about algorithms and data collection; it's complicated, it's abstract. So the visceral reaction of, hey, I've been scammed, my data has been stolen, it's been used against me, I've been made a tool, perhaps wasn't there to the same extent. If instead Cambridge Analytica had used deepfake videos, people would have been even angrier, saying, how dare you? You're messing with my brain, you're messing with my mind. And if there's enough of a public blowback, lawmakers eventually have to say, hey, we've got to do something, and tech platforms will say, hey, we've got to do something, because nobody wants to be in the position Facebook was in with Cambridge Analytica, multiplied by 10,000, with everybody angry. So maybe just the fact that videos are so convincing, that deepfakes are so convincing, could hopefully lead to a limitation on how they can be used in this kind of information warfare. Again, here's me being optimistic; you're probably on the more critical side of these things. But that's kind of how I think about this.
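What "getting better at identifying and removing this kind of content" can look like in code: a simplified, frame-level deepfake classifier, sketched under the assumption of PyTorch and torchvision. The backbone choice, the FaceForensics++ training data mentioned in the comments, and the thresholding step are all illustrative assumptions; the detectors platforms actually run are far more elaborate and largely undisclosed.

```python
# Simplified sketch of frame-level deepfake detection (assumes PyTorch +
# torchvision). Real detectors add face cropping, temporal cues across
# frames, and much larger models.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Start from a generic pretrained image backbone and replace its head with
# a single logit: the probability that a frame is fake.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_frames(frames) -> float:
    """Average fake-probability over sampled video frames (PIL images).
    In a real pipeline the head would first be fine-tuned on labeled
    real/fake video data such as FaceForensics++ (an assumption here)."""
    backbone.eval()
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])
        probs = torch.sigmoid(backbone(batch)).squeeze(1)
    return probs.mean().item()

# A video would be flagged for review when the average score crosses a
# threshold chosen to balance false takedowns against missed fakes.
```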

Johannes Castner:

On this side? Yeah, in this area, a little bit. And then there's this other thing we should talk about, directly related to what you just said: what about the case where something is actually real? Some politician got caught doing something, and now they can cast doubt on it by saying, no, that's a fake, that's a deepfake. So what about that side? And now Google has to decide whether to take a video down because it could be a deepfake, and it's very hard to detect, especially as the fakes get better, right? There's this element: what about that?

Rajendra Shroff:

So that's when you have, basically, liars getting away with it, because they can just say, hey, it's fake. How do you prove that it's actually not fake? And by the time you have that conversation and the analysis, in people's minds they already have the sound bite that says, hey, my favorite guy says that's fake. So what does it do for public trust? These are questions that have not been answered. People who have been caught in compromising situations, let's say with audio clips, their first response is to say that audio is fake. Somebody posts something really bad on social media, and their first public reaction is to say, hey, my account was hacked, that wasn't me, I would never do something like that. When it comes to deepfakes, they can just point to that elephant in the room and say, hey, it was that tech, not me. What does this do for public trust? These are questions we have not really looked at, and we really have to.

Johannes Castner:

No, we do have to look at that. It's critical, and we have to look at it fast. I think there was one YouTube video, which will be linked in the description if I can find it again, that was discussing this exact topic. But other than the few conversations I've seen, discussion on this topic is rather lacking. So for everyone who's watching this, for everyone who can help get a conversation on this going: bring in legal scholars, and, you know, I'm happy to talk to everyone as well, so you know that I'm out here trying to produce and help with this conversation. I think it's critical information. I mean, we speak all day long about particular politicians; you can find troves of videos on particular politicians doing this or that. But you cannot find very many discussions on these incredible technological changes that we are in the midst of, changes that are altering the entire equation. So I think we need to start talking about it.

Rajendra Shroff:

Absolutely. And I guess that's what podcasts are for; podcasts are a great way to build awareness. I personally believe that in building up this kind of awareness, university education has a part to play. So whenever I teach, we talk about these topics, even in a business-focused course. We talk about AI's effect on society, and as people experience this tech for themselves, having also learned about it in a structured, academic environment, hopefully when they go out into the world and make business decisions, or in some cases policy decisions, that mental framework is already in place. So I'm quite hopeful that education is going to be our way to creating more of a utopia than a true dystopia.

Johannes Castner:

Yeah. Well, lastly, before we end this: we had a conversation earlier, offline, about this, and I don't remember the details. Could you tell us about that?
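An aside on the "how do you prove it's actually not fake" question raised above: one proposed technical answer is provenance, signing footage at capture time so that authentic video can be verified later (the idea behind industry efforts such as C2PA). A minimal sketch, assuming the third-party Python 'cryptography' package; the camera key and the video bytes here are invented for illustration.

```python
# Sketch of content provenance signing. A camera (or newsroom) with a key
# baked into trusted hardware signs the hash of the footage at capture time;
# anyone can later verify a clip against the matching public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import hashlib

camera_key = Ed25519PrivateKey.generate()          # stand-in for a device key
video = b"...raw footage bytes..."                 # invented placeholder
digest = hashlib.sha256(video).digest()
signature = camera_key.sign(digest)

# Verification: a deepfake, or any edited copy, changes the hash and fails.
public_key = camera_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(video).digest())
    print("Footage matches the original capture")
except InvalidSignature:
    print("No valid provenance: treat authenticity as unproven")
```

Note the limitation: this proves a clip is unaltered since capture, but unsigned footage is not thereby fake, so it narrows rather than eliminates the liar's escape route.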

Rajendra Shroff:

Yeah, so we were talking about how we think, oh, deepfakes are going to ruin human relationships. But if you look at just the world of cartoons, they predicted this in the early two thousands. I'm not sure if the people listening have heard of a TV show called Futurama, a futuristic cartoon set in the 31st century, created by the same creators behind The Simpsons. In an episode they put out in 2001, one of the characters in the show, whose name is Fry, fell in love with a digital version of the actress Lucy Liu, who had survived 800 years as a robot, and started a relationship with her. The entire episode was about how everybody else was disgusted by human-robot relations: it doesn't lead to children; it's terrible. And they go through a conversation very similar to the one many people today are having: hey, if people can exist in digital-only worlds, they'll completely forget about the physical world; what does that mean for society? So even cartoons, probably one of the purest art forms out there, controversial view, have predicted this phenomenon. And once you rewatch that episode, you're like, oh my gosh.

Johannes Castner:

Yeah, interesting. And I also told you about a movie I saw in my teenage years called, I believe, Max Headroom: 20 Minutes into the Future. It's about a character who then becomes Max Headroom, a really fantastic creation of independent filmmakers in London. Very interesting; I will put links in the description. If you're interested in what we just talked about, this will be interesting to you. It has to do with a journalist, again, a journalist getting at the truth of something really evil that his own organization, his own broadcasting corporation, is doing, an American company doing something in London; I don't remember the exact context, but it's fantastic. So they try to kill him, but they already have a digital twin of him so that his show can keep going, and they can essentially control him. But then he survives the attack, and now there are two versions of him, and one of them gets kidnapped by completely rogue punk rockers. A very cool movie; you might enjoy it. It's related to this topic; I just wanted to mention it.

Rajendra Shroff:

Next time I'm on a plane, I'm definitely loading it up.

Johannes Castner:

Great, great. I would recommend it, if they have it; most airlines probably don't, it's true. It was the Austrian Broadcasting Corporation that gave me this little goody, but it's an English film. So, to close this out: this is one of those topics that we will hopefully revisit in another episode in the future, and maybe we will even have a panel discussion on it with a few different views. That is my dream, and I hope it will happen. But at the moment, I think we're running into the territory where we should probably end the show, because an episode is meant to be about an hour's worth of commute for you, the listener; that's how I think of it, and we've now gone quite a bit over. It has been a fantastic, fascinating discussion, so I hope it's the beginning of one and not the end. Thank you very much, Raj.

Rajendra Shroff:

Oh, thank you. This was a lot of fun. So hopefully people listen and just start thinking about what's coming.

Johannes Castner:

Yeah. So now, as I always do, I want to give you an opportunity to say anything you'd like to the audience, and also to tell them how they can stay in touch with you, how they can learn more from you, and what you're doing and what you're up to, and so on.

Rajendra Shroff:

Yeah, sure. So what I tell my clients, and also my students in classes, is that at the end of the day, AI is just a tool to achieve an objective. What that objective is, is what you define. You can use it for great good, or you can use it for mischief. You can use it just to maximize profit, or you can use it to build tools that really help people live better lives. So if anybody that's watching this wants to invest in companies, or start companies, that build out this potential digital future, hopefully they keep this in mind and do something that makes not only their own lives but also future lives better. Now, I actually don't use social media, so I'm not really alive online, but I do have a blog on Medium. You can find me at raj.r.shroff. I post every now and then, so if you want to follow me there, I would love to have you along.

Johannes Castner:

This show is published every Wednesday at 5:00 AM in New York City and Washington, DC; 2:00 AM in Los Angeles; and 10:00 AM in London, in the United Kingdom. It is published on YouTube in video format, and in audio format on Spotify, Google Podcasts, Amazon Music, and many more, including your favorite place, wherever it is you get your podcasts. Please let us know what you like and what you don't like about the show. If there's something you enjoy, please give us a thumbs up; if there's something you don't enjoy so much, please give us a thumbs down, together with a message in the comment section as to why. Next week I will be meeting with Richard Thickpenny, who will present his own unique digital twinning solution to help refugees have more meaningful lives and careers in Europe and in the United Kingdom.

Richard Thickpenny:

The digital twin approach, in the context I'm using it, is actually a new way of engaging with refugees and the knowledge that they have.