Utopias, Dystopias and Today's Technology

AI, ESG and Startups

July 26, 2023 · Johannes Castner · Season 2

Join us in this episode of "Utopias, Dystopias, and Today's Technology" as host Johannes sits down with Christian Lazopoulos, a seasoned expert in the FinTech and ESG sectors. We step into the world of AI, exploring its rise, potential, and the ethical implications that come with it.

Is AI on the brink of self-awareness? What would that mean for the rights of AI entities? How is AI reshaping the startup landscape and its ecosystem? We tackle these questions and more, offering a comprehensive look at the intersection of AI, ethics, and consciousness.

But that's not all. We also discuss the role of technology in shaping our future, the challenges startups face, and the potential of AI in connecting the dots in various fields, including medicine. We explore the influence of AI on human behavior and its role in democratization.

This episode is a must-watch for anyone interested in AI, startups, and the future of technology. Tune in, gain insights, and join the conversation. Don't forget to like, share, and subscribe for more deep dives into the world of technology.


Johannes Castner:

Hello and welcome. My name is Johannes and I'm the host of this show. Today I'm here with Christian Lazopoulos, and we will be talking about AI, specifically generative AI, and ethics, as well as practical questions around his practice. Christian is a veteran in the realms of marketing and innovation. He has rich experience with startups in the FinTech and ESG sectors, and can highlight the unique strategies employed and lessons learned there. He has also served as head of innovation at Explain, where he unveiled a proprietary AI- and big-data-powered solution for brands that invest in purposeful marketing. Additionally, he has been a member of the Censydiam Institute, the Centre for Systematic Diagnostics in Marketing based in Antwerp, Belgium, and of Ghent University, where he gained great experience as well as training in the field of consumer psychology. This is quite related to my own job, actually, because I recently joined Kingston Business School in the area of organizational psychology and behavioral sciences, so there is something to talk about there as well. But let's start with the ESG and ethics part. Can you tell me a bit about that? I've read that you also have a new ethics pamphlet coming out. Can you tell us a little about your involvement there?

Christian Lazopoulos:

Yeah, sure. Thanks for having me. First of all, I'm sure it's going to be very pleasurable and I'll learn a lot from this discussion myself. So, AI is not something new. It has simply risen to the mass awareness of everyone in the last year or year and a half, probably because of the advent of easy-to-use, publicly available solutions and platforms such as ChatGPT, or, on the visual generative AI side, things like DALL-E or Midjourney. However, this is not a new discussion by any means, especially among people at the core of specific disciplines, whether futurists or future technologists; depending on the decade, or even the century, the name changes a bit, but it is always people, usually with some sort of scientific background, who have been tasked to think ahead. And AI is not a new concept. It has been around since probably ancient times, because there are actual myths about automatons and robots. Being Greek, of course, I have to cite some of these examples, like Talos, the AI-driven robot that used to guard the island of Crete by running around it twice a day or so. And it's not just Greek mythology. It has always been a major human dilemma, as well as a major ambition, to see whether we ourselves can be creators in general. We had, and still have, these discussions on a biological level, and AI is just a dimension of that same drive we have as a species, but on an artificial level. You can see this throughout popular and even apocryphal science, but also science fiction. It started with robots, and then, as the cyber element grew from the mid-seventies onwards, with the internet becoming part of everyone's life, we moved to artificial intelligence in terms of a digital entity, not necessarily having a physical body, synthetic or not. At the core of all these discussions there are always philosophical matters. And that's where ethics comes into play, because ethics is basically applied thinking as to what should be right and what should be wrong based on whatever the context at the time is, sometimes on a universal level, but sometimes based on different cultures. In this case, however, this being an international thing, AI is not geographically contained in any way. So ethics is going to be at the center of the discussion, I believe, for the next few years, before we see even more adoption of it as a solution. Should we, shouldn't we, how far, why, why not? These are all questions we have been discussing for a long time, but now they are becoming more tangible because many more people, as I said, are thinking about it, and they are either optimistic, pessimistic, or wary, or a combination of all three. That's why ethics is becoming, I think, a core element of the discussion, at least for the next decade.

Johannes Castner:

What is your stance with respect to ethics? Do you have a particular school of thought that you advocate, such as, for example, human rights or a utilitarian approach?

Christian Lazopoulos:

Well, it's a good question. One easy way of not having an opinion is to keep looking into things and doing a lot of research. But I believe that once I understand a bit more, for my own understanding, what an AI entity would be like, meaning something we would give rights to because it would be self-aware, then I won't be a human-rights person; I'll be a universal-life-rights person, because at the end of the day that's where things are leading. I believe at some point we'll have that huge paradigm shift where an AI entity will be very hard or almost impossible to distinguish from a human person when it comes to thinking, self-awareness, and all the things we hold as elements and, if you like, filters of what it is to be self-conscious and self-aware. If you think about it, we give more and more rights, quote unquote, to animals the closer they are to us, which is a bit biased, going back to ethics, but still: the closer they are to us, the more rights we give them. If that happens with animals, why shouldn't it happen with an artificial intelligence once it becomes self-aware? How close are we to that? I think we're closer than most people believe; perhaps 8 to 10 years away, if you believe most of what you read about the major jumps ahead. So there's still some time to debate and to put some context and a framework around it, but it's not so far away that we can be very relaxed about it either.

Johannes Castner:

But this question really does require opening up the bigger can of worms of consciousness again, I'm afraid, which is something I've come back to on this show quite a bit. If we don't understand how consciousness is produced in the human brain, which would be the critique of John Searle, and I very much agree with Searle's point of view here, then I don't think we can treat it as similar to thinking; I think consciousness is a different dimension. The intuition I come to is this: if you ask someone who has meditated a lot, for example a Buddhist monk, which is almost the stereotype, they will tell you that you become more conscious, that you gain more consciousness, the less you think. The more you turn off the monkey mind, as they say. And the monkey mind, I think, is what we have implemented at this point, to a remarkable degree. It can chat; it's even called a chatbot, or ChatGPT. It can produce utterances that appear conscious, but when it says something like, and this is an example I always come back to, "I can feel the wind in my hair," it cannot really feel any wind in any hair. It doesn't even have any hair, and in a way it cannot feel at all, or at least that's my understanding. But I've also been shaken in this, because it is not a hundred percent clear to me either whether something like a robot couldn't become conscious on its own somehow. I find it unlikely, because when it comes to intelligence, we have to build it from the ground up, with a lot of experimentation and so on. Why wouldn't that also be true of consciousness? And we have hardly the slightest idea how it works; we have some really interesting speculative theories as to how this person gets into that head. In other words, why is there a Christian who feels, who has experiences, who can go through the world and experience it in real ways? It's not just that you have vision implemented, that you can tell the difference between a cow and a camel or something like that; consciousness is much more than that. What is your view on this? Is consciousness necessary for this sort of ethical personhood? I think it is.

Christian Lazopoulos:

It is and it isn't; you have to deal with it and keep asking yourself. For example, if you take a sensorial approach to consciousness, like feeling the wind in my hair, there are people who suffer from conditions where they cannot feel anything, and who were born that way, not people who lost sensation later. Do we believe they are not human beings with a consciousness? Be mindful also, with psychology being both of our backgrounds, that we do have people who are detached from feelings. They cannot have empathy, for example; they do not recognize right from wrong, and this is what we consider an illness and an aberration. But would you take a very small percentage of the population, separate it, and use it as a paradigm, in the sense of trying to understand a new way of being? Because in my head it's not something separate or different; it's just a new way of being. And of course, on the original point you made, that we don't understand how our brain works: we really don't. Honestly, we don't know anything, really. If you think about all the stuff we don't know, we're not even scratching the surface; we're at the very first microscopic scratch, and I'm talking about even the basic stuff. We know more about the space between the Earth and the planets than about our own oceans, for example, which is pretty basic: we haven't completely explored the physical world we live in down to the last nook and cranny, and then you want to understand how and why the brain works. We are only now starting to understand the quantum aspects and elements of our thinking when it comes to our brains, but the deeper we go, the more questions are generated rather than answers provided. So maybe I'm more practical or empirical, as you mentioned: if something works to the level where you can interact with it and feel the same way as you would interacting with someone you know is a person, what's the difference?

Johannes Castner:

Well, in terms of rights: if you give something a right, it must be able to enjoy that right. Why would you give something a right if it's indifferent to it? At this moment, a machine, a hammer or something like that, doesn't have much of a right; it's a matter of serving our needs, and it is built for our needs. And this is actually a question I have for you: doesn't ethics turn on exactly that question? Why does a dog have rights now? Because it has feelings, it experiences the world, it can suffer, it can be happy, it can enjoy the world; it enjoys the rights it may have. But if an algorithm doesn't really enjoy rights, why should we give them to it? It seems contrived in that sense. But if it's real, if it is conscious, then yes, of course.

Christian Lazopoulos:

We're not there. I'm not saying that whatever has been achieved at the moment, at the neural network, machine learning level, amounts to what you mentioned; I don't think anyone has achieved self-awareness or the ability to feel, whatever the feeling might be. At the moment it's mimicry. I'm not disputing that.

Johannes Castner:

Yes, yes. But how do you know the difference? And when do you know you have crossed the line from mimicry to the real thing? This is actually a tricky question, right?

Christian Lazopoulos:

It is. And as I said, it's not even a matter of applying the same tests you would undergo with a human person, because you would have to devise a whole new set of tests, questions, and experiments to make sure that this is beyond mimicry after a while. However, if you make the analogy of how we came to be, if we believe in evolution the way we understand it, from a monocellular organism to this complicated thing we call a body, and that quantum mechanism made of spongy, meaty stuff in our brain, why couldn't you achieve that with electrons? Why couldn't you achieve it with whatever it is that makes up the algorithms and the mechanics of a neural network? I think it's...

Johannes Castner:

I'm not doubting that it could be done, but I don't think we can do it if we don't understand the mechanism. When you were saying that we don't understand the human brain and how it works, my point is even more specific than that: we don't understand how this inner movie, or whatever we call consciousness, this experience of existing as a continuous entity, is produced. Once we understand how it is produced, I am almost certain we will be able to build it somehow. But I think the reason we can't do it now is that we don't understand how it works. Maybe.

Christian Lazopoulos:

Yes, as long as you want to build something that's exactly the same as our brain. It could be that the mechanics and the mechanisms end up completely different from how our brain works. And I'm afraid I know it's an easy way out to say anything is possible, because in essence that's what I'm saying. It doesn't have to replicate a brain to be conscious. All I'm saying is that the only way we have as human beings, as a species, right now to potentially recognize whether there is consciousness or not is, unfortunately, through our own understanding of what that means. For example, you mentioned dogs. Why do dogs have rights? It's not just because they feel; it's because we weren't mature enough to give them rights until the last few years, compared to all the history we have with them. And there are still a lot of places around the world where dogs are just tools; there's no more sentiment about them than about a hammer, and they are treated in a way we feel is completely bad, without empathy. So it's a matter of our maturation as well, of potentially broadening our scope of understanding about the physical world and beyond. I'm afraid I still have more questions than answers today. All I'm saying is that, in general, not just with AI, I'm an optimist. I truly believe that if we are to have a future beyond the next 60 or 70 years, given all the problems we have on the planet right now, and most of them are proven, technology is going to help us resolve them more than anything else. Because we don't have the time to change our behavior globally, on a big social level. So the only thing that is going to preserve any possibility and any chance for the future, I think, is a great catalyst, and the biggest catalyst we have had throughout our history has been technology.

Johannes Castner:

I agree with that. I think human thinking is also necessary, though, specifically with respect to what it is we want. So we have to shift; I recently had a conversation about exactly this. I think we have to shift our thinking away from how to achieve something, because now we get AI to help us do that. For example, gradient descent or linear algebra: we don't have to think too much about that anymore, because it has become, as they call it, democratized. I'm not sure that's a good term, but it's the one being used, meaning that it's becoming simpler to operate and to build things. But with this simplicity we shouldn't stop thinking. Now that we can easily implement a bunch of things, what we need to do, I think, is reallocate our thinking to puzzles of an ethical nature: what is it that we should be building, what should it do, how should its output look, how do we detect whether the output is actually what we want, and how do we quickly change course if we learn that we're producing something we don't want to produce? And we should further democratize it so that more people have a say in the algorithms we're exposed to. Right now Facebook, for example, or Google are dominating; there are a few hundred, maybe a few thousand, engineers working on the algorithms that expose us to various stimuli, if you will. Instead, we should all work on this. I think that's also what democratization is about: this easier access, this easier ability to produce results with AI, or to use AI for your own purposes, also gives us the responsibility to use it in wise ways and to think about what could go wrong. If our consciousness of how we are being affected, and of how we should be affected, grows along with our technology, I think we're actually in really good shape. But building technology very fast doesn't by itself guarantee that it will save us. There's an interaction that is required.
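To make the gradient-descent point concrete, here is a minimal sketch, assuming Python with NumPy and scikit-learn: the same linear fit done once by hand-rolled gradient descent and once by the single library call that now hides all of that machinery.

```python
# The same linear fit, first by hand-rolled gradient descent,
# then by the one-line library call that abstracts it away.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=100)

# The "old" way: implement gradient descent on mean squared error yourself.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * X[:, 0] + b - y
    w -= lr * 2 * (err @ X[:, 0]) / len(y)  # dMSE/dw
    b -= lr * 2 * err.mean()                # dMSE/db

# The "democratized" way: the same optimization behind one call.
model = LinearRegression().fit(X, y)
print(w, b)                                 # both roughly 3.0 and 1.0
print(model.coef_[0], model.intercept_)
```

The point is not the library itself but that the optimization machinery has moved below the waterline, freeing attention for the question of what to optimize.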

Christian Lazopoulos:

Yes. Being practical about that, however, I would say that in order for this to happen, so that everyone can catch up, have a say, and be a shaping factor, we would have to really slow down how fast things are going at the moment. Which is not necessarily a bad thing; I'm just asking how practical it is. Most of the obvious progress has been made by the private sector, and it is very hard for the private sector to take a breather and stop, because they always feel there's someone who might get ahead of them, who might get first to some new channel of revenue or some new application that's going to increase someone's share, or anything like that. If we could achieve that level of understanding and collaboration across everyone who is now a player in this, to take a breather and start having a more rational, more efficient way of including everyone in the conversation, that would be ideal. But historically that has never happened, even when technology was not moving as fast as it is today; and now it accelerates every day. If you asked me how I would go about it, I would probably need a month or so just to start giving you ideas on how to get all the big players to sit at the same table and agree, with representation of everyone who is not the private sector, meaning ordinary people. And there's always going to be someone who defects: even if 90% agree, there's always a 10% that goes off on its own tangent, because historically that's what has happened with any controversial technological progress, whether it's been cloning...

Johannes Castner:

I would say it has gone back and forth as well. Because, for example, when the Internet was first developed, the purpose of it was to bring us... I think the purpose was truly an idealistic one: to bring us all together, to make us think more collectively, to connect with each other, and to be exposed to different ways of thinking.

Christian Lazopoulos:

Not the internet, right?

Johannes Castner:

Yeah.

Christian Lazopoulos:

Because the original...

Johannes Castner:

Yeah, yeah, yeah. You're right.

Christian Lazopoulos:

It was a military, anti-nuclear, fail-proof system of communicating. That's what I mean.

Johannes Castner:

You're right, you're right. The World Wide Web, then: Sir Tim Berners-Lee and others started working on it early on, and I think their purpose was really to democratize access to information. Then clearly it was taken over. It was also sponsored by all of us, in a way, because it was paid for mostly by tax money; so in a way it was done for the people, by the people, and then it was hijacked. There's a whole history as to how it was, in a way, hijacked. But I don't want to get too far off track. On this note: you've been active in the startup space, and in the ESG startup space in particular. How can startups play a role in making this whole system a bit more democratic, and more oriented toward consumers, users, and the people who are affected by it?

Christian Lazopoulos:

Well, I think that's what they're doing already. The problem, again, is the current startup ecosystem. Starting from your original question: startups are not anything new, but in the last five years, and especially since COVID, a much larger proportion of startups are ESG-oriented. There's a whole new generation of people, and I'm talking mostly about westernized thinking, I'm sorry it has to be like that, but that's how it is, not necessarily geographically speaking but in process and approach, a whole group of people around 30 years old and younger, generalizing just for the economy of the conversation, who are much more aware that we need to do better. So there are a lot more startups that are ESG-oriented. Although when I say ESG, in my experience it's mostly E, then S, and a lot less G.

Johannes Castner:

Could you break that down a bit more? What do these letters stand for?

Christian Lazopoulos:

So, E is environment-related. There are a lot more startups trying to find ways to help with reversing or dealing with the damage we've done, reclaiming nature, or even finding ways to buy us time. That's been a big focus for the last two or three years. After COVID there has also been a surge of the S, which is society: how do we help people lead better lives and optimize their well-being, including mental health, not just physical health, and financial health? There's also a resurgence, which we haven't seen on a thinking level since the late eighties and early nineties, of connecting all these things, physical and mental health and financial health, into one notion of well-being, because all of these affect each other. And there's a lot less G because, and this is my personal experience, the G is the least intuitive, since it's very, very specialized. When you go into governance, which is what the G stands for, you again obviously have to find solutions to specific problems, but these are much more specific than me leading my life or looking at the environment around me and coming up with new ideas as a startup. You have to go into infrastructure, into governmental projects and bureaucracy, into inefficiencies and things like that. So, with the exception of, for example, smart cities, which do have governance as a de facto building element, not a lot of people go into it. First of all, it's not as attractive, to be perfectly honest; it doesn't get your creative juices flowing to be handed 500 manuals about the public transportation system and told to find the problems and resolve them. That takes a lot more effort. However, governance startups usually have a higher rate of success, because they are very targeted, and there is already someone sitting in a seat who will immediately see the benefits, the ROI, or whatever KPI you want to use, if you are successful in your endeavors. Still, most people go after the E and the S: first, because it's easier, because "I'm living it," whether it's the environment or the social aspect; it's more inspirational for the average person who is going to found a startup; and I think it's a better cause in most people's minds. We need all three, don't get me wrong, that's why we have ESG and not something else, but from my experience E and S are at the moment more in vogue.
That said, the way things have been going, and I'm sure you've seen it yourself going through the rounds of securing financing and financial support, it has become, I'm afraid, a thing on its own. It's becoming too formulaic; that's the word I was looking for. I've seen a lot of startups succeed in moving forward because of the way they presented the idea rather than because of the idea itself. Like anything that becomes more mass and more populated, things settle into structures, patterns, even mannerisms, to make them easier for the people involved. And a lot of the funds and investors I've encountered, unless they have been in the game for a long time, which is a shrinking percentage because many more people have gone into startup investing over the last few years who are investors in other things as well, not just technology, are easier to convince with the lingo you use and how glossy your presentation is than with the actual feasibility or potential impact of the startup. That's one major aspect. The other is that with so many startups now, we've been talking for a while, I don't know how many minutes, and I'm sure 20 startups have just been organized and come into being in the meantime, there's a lot of overlap. A lot of overlap, which is very frustrating and very hard to untangle: a startup might overlap with another 20, not in its entirety, but one will cover some of the same things, two more will share some elements, et cetera. That's why I keep saying governance is easier, as long as you're willing to do the boring research first. And if this continues, well, a lot of people already think of startups as a sort of industry, which is crazy. It shouldn't be something like that, because those are conflicting worlds. But a lot of people already think of startups as an industry, and you see more and more of the same people winning and going to the next round, and not necessarily the ones that would have more impact on things.

Johannes Castner:

I'm wondering if there isn't a technological fix for that as well: to create, exactly, a sort of search engine over the space of potential products and services, where, if you have an idea, you can situate it in that space. We could even visualize it and say: here I am at this point, there are some things to my left and right, in front of me and behind me, but I am actually filling this hole that isn't really occupied yet. There should be tools that let you navigate this space in an intelligent way, even in a visual way. Is there anything like that at all?
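One way such a tool could work, sketched minimally here in Python with scikit-learn; the idea descriptions are invented, and TF-IDF is only a stand-in for a real semantic embedding model:

```python
# Minimal sketch of an "idea-space" search: embed startup descriptions,
# then score a new idea by how crowded its neighborhood already is.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing_ideas = [
    "carbon offset marketplace for small businesses",
    "app tracking household food waste",
    "satellite monitoring of deforestation",
    "mental health chatbot for remote workers",
]
new_idea = ["peer-to-peer rooftop solar energy trading"]

vectorizer = TfidfVectorizer().fit(existing_ideas + new_idea)
space = vectorizer.transform(existing_ideas)
point = vectorizer.transform(new_idea)

# High maximum similarity = crowded neighborhood; low = a potential gap.
similarities = cosine_similarity(point, space)[0]
for idea, score in zip(existing_ideas, similarities):
    print(f"{score:.2f}  {idea}")
print("crowdedness:", similarities.max())
```

A low maximum similarity would mark the "hole" described above; a visual version would project the same vectors down to two dimensions for the map Johannes imagines.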

Christian Lazopoulos:

Not in an organized way yet. The thing is, anything surfaced at a level where you or I could go online, look for it, and find it doesn't negate the industry aspect of things; it actually aggravates it. These tools would have to be so specific that they would immediately be B2B rather than B2-everyone. I mean it in the sense of: here's a serious investor group or a fund; you go to them and say, listen, you already show a pattern, because at the end of the day everyone is human, with their own biases and their own patterns of thinking. Here is the pattern of what you've done so far, what has worked for you and what hasn't; please use our solution for a more objective, quote unquote, way of discerning and qualifying further opportunities. That said, and I might go into philosophy again, the problem, especially with the E, is that at the end of the day startups need to be successful because someone is interested in them, and ESG has now almost become a hygiene factor for most corporations, businesses, and brands filling the gap. I'm not sure, especially when it comes to the E, and this is a huge conversation that has been going on for years now, that we have the right KPIs for what is good, so that we can then know which startup is the most beneficial. For example, everybody has been talking about carbon offsets for ages now, and it has been a major KPI. But more and more we see it's not the right KPI, or at least not as universally right as people would like it to be. Why? Because it is probably even being misused: as a huge corporation, you just buy carbon offset units from the market, the stock exchange of carbon offsetting, and then you continue doing what you're doing, or greatly delay changing the core reasons you need carbon offsets to begin with. So when those KPIs are not fundamentally right, but startups still use them to decide what they will offer back to the planet, to society, et cetera, what's the point? Crap in, crap out, to oversimplify again. So I would love to see startups that tackle those fundamental issues: let's discover what the true KPIs are when it comes to the betterment of our situation, on an E, S, and G level, before...

Johannes Castner:

Isn't that a question for science, or rather for social science, more than for startups? I agree with you: we absolutely need good measurements, and good targets, which should in some way be separate things, because of Goodhart's law: if you use a target as a measure, there will clearly be incentives to manipulate it. So we have to have both: independent measures of something, and targets. But shouldn't those be given by social science, or by science in general, or maybe by a panel of scientists and social scientists, rather than by startups?

Christian Lazopoulos:

Well, a lot of startups are academically and scientifically driven, right? They're not all private citizens or fresh graduates; most actually successful, or significantly impactful, startups come from those arenas. It's a question of how fast you want things to go, because the one thing about startups is that they have momentum, an acceleration, because everybody wants to see results. So if we could have startups within the scientific disciplines that need to come up with these KPIs, I'd be happy, because we would arrive at more significant results faster; as I said, there is a timer running on the E. In that sense, yes, if we could do it on a purely academic level and a government or authority level, whatever that means, because it could be something like the UN or similar bodies, fine. But unfortunately those are very slow-moving organizations.

Johannes Castner:

But how do you make money by creating, say, metrics or KPIs? The question for a startup, generally, is that it has to make money, whereas academic institutions, or the World Health Organization, the UN and so on, don't necessarily have to; they get money through donors or through other means. They don't generally have to be in the business of selling something. So is the creation of KPIs, or metrics if you will, itself a lucrative enterprise? Can you make money creating new KPIs?

Christian Lazopoulos:

Well, yes, and there are two elements to this answer. One: if you take data analytics for any other purpose, like business purposes, there's a lot of money in that. Of course, that business model is going to weaken a bit because of AI, because it's going to get more automated than before, but still, being able to measure as accurately as possible, and even to have predictive analytics and then prescriptive analytics, which is even further down the line of the conversation, there is a lot of money to be had. When it comes specifically to KPIs that are not necessarily business-related, as we discussed, I think it's a matter of connecting the dots, because if you have true KPIs for the environment and for society, you will ultimately find the benefits in the economy, because all of these things are completely interrelated. The second part of the answer is that universities, academia, and even governmental organizations might not make money, but they do fight for money: they fight over acquiring more and more funding and bigger budgets. The model there, okay, we can replace the word "business," but there's a funding model with a completely different yet parallel dimension to the business model: the more money I have, the more research I can do, the more famous my institution, myself, my unit become. It is, however, something that equally requires money at the end of the day.

Johannes Castner:

Yeah, but it seems to me that it's about papers, right? Publishing research, being accepted by Nature, that sort of thing will get you more funding. A publication in Nature, or in other very prestigious scientific journals, is an output that is in itself justifiable at an academic institution, but maybe not so much at a startup. So if a startup publishes papers on what makes for good environmental or social measurements, ESG measurements, then these measurements also have to be somehow enforced, right? Because right now the things that save you money are to have lower net emissions or something like that, because then you're avoiding some tax categories, or, as a business, you're playing to what the government requires of you: compliance.

Christian Lazopoulos:

Sure, but the people who define what is compliant or not can really make a lot of money. It depends on whether you're in it for the money or the fame, or both, because a lot of people from academia, once they found something actually useful, either had a startup in parallel or changed course completely in their personal lives, and became millionaires. One doesn't exclude the other. Again, it's what drives you: if you're driven by money or fame or both, you can find it, as long as you also focus some energy into redesigning a more efficient set of key performance indexes on all these levels. Because I don't think we have them: every time we make something big and say "this is going to be great for the environment," after a while there are definite signs that it's actually not as straightforward and easy. The biggest example could be...

Johannes Castner:

Okay, so just to play devil's advocate here: maybe the problem is the incentive structure, right? Because if you can make a lot of money by designing "better" metrics or KPIs, what counts as better becomes a matter of who pays you. I hate to put it that way, but what does it mean to be better for the environment? That in itself is something that has to be measured again. It's very difficult to say: here is a metric, I can measure it, and I claim that it's better for the environment; because that claim itself requires another measure, or something else, to then...

Christian Lazopoulos:

To say it, yes. Again, oversimplifying: the question is who will be able to connect all the dots of the total journey of something being made, or being regenerated, for the environment. It's not as straightforward as it looks, because it depends on how deep you go into the whole process. Take the normal internal combustion engine versus an electric one: yes, tailpipe emissions are obviously zero versus even the most modern internal combustion engine today. But if you look at what it took to create the batteries, the minerals behind them, the workforce and where it was, not to mention the recycling issue 20 years from now, when batteries will be even more prevalent, then you realize that electric vehicles are actually not necessarily that much better for the environment. And again, I keep coming back to AI, but as with other major questions where the answer might already be out there and we just haven't connected the dots: AI is by default the best solution humankind has so far found for pattern recognition. It might really help people, maybe new startups, to say, okay, let's connect all the dots possible now. Let's do a much better job of identifying all of these points, in production, in cost-effective or non-cost-effective materials, in trials; take in all of the factors, because that's the problem, the complexity of it, and try to understand what will be better for the environment, beyond just not using cars, and even that is too simple.
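The dot-connecting Christian describes can be made concrete with a back-of-the-envelope lifecycle calculation; every number below is hypothetical, chosen only to show the structure of the trade-off, not taken from the episode or any study:

```python
# Illustrative lifecycle comparison with invented numbers: the EV starts
# with an upfront manufacturing "debt" and pays it back per kilometre.
battery_manufacturing_kg_co2 = 8000   # extra upfront emissions for the EV
ice_per_km_kg_co2 = 0.20              # combustion car, fuel chain included
ev_per_km_kg_co2 = 0.06               # EV on a relatively clean grid

# Distance at which the EV's per-km savings repay its upfront debt.
break_even_km = battery_manufacturing_kg_co2 / (ice_per_km_kg_co2 - ev_per_km_kg_co2)
print(f"clean grid: break-even after ~{break_even_km:,.0f} km")   # ~57,143 km

# On a coal-heavy grid the EV's per-km figure rises and the verdict recedes.
coal_grid_ev_per_km = 0.18
coal_break_even = battery_manufacturing_kg_co2 / (ice_per_km_kg_co2 - coal_grid_ev_per_km)
print(f"coal grid:  break-even after ~{coal_break_even:,.0f} km")  # ~400,000 km
```

The structure, an upfront manufacturing debt repaid by per-kilometre savings that depend on the electricity mix, is exactly why no single KPI settles the question.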

Johannes Castner:

But there will be some trade-offs, right? Ultimately, in most cases, it will not be perfect; you have to make trade-offs. For example, wind energy is probably better overall than coal, but maybe nuclear energy could be better still. I don't want to upset the Germans, but they banned nuclear across the board, and what ends up happening is that the load falls onto coal, which I think is worse in the short run at least; in the long run we don't know, it could be worse there too. And wind isn't entirely perfect either, because it does have some footprint. Every way of getting energy out of the environment will essentially require some footprint, and in effect it's better if we could cut down on energy usage overall. So you have to make these careful trade-offs, and then the question is how to make them, and how we weigh different things. For example, what do we value more: some amount of CO2 output, or the destruction of a coral reef? We have to actually ask ourselves what we value more on the margin. And I think ultimately that will require a human to make a judgment, because judgment is not something AI can do. AI can only optimize some metric that we already give it. So we have to have a metric, and I feel that this one ultimately has to be built by humans at this point, because it requires ethical thinking, normative thinking, and I think AI is not able to do that.
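A minimal sketch of the division of labor Johannes argues for; the energy options, impact figures, and weights below are all invented for illustration. The weights encode the human, normative judgment; the "AI" part is only the mechanical minimization of the resulting metric:

```python
# Humans supply the weights (the normative part); the optimizer only
# minimizes the scalar metric it is handed. All numbers are illustrative.
options = {
    "wind":    {"co2_per_kwh": 0.011, "reef_damage": 0.2},
    "coal":    {"co2_per_kwh": 0.820, "reef_damage": 0.1},
    "nuclear": {"co2_per_kwh": 0.012, "reef_damage": 0.0},
}

# A human judgment call, not an AI output: how many units of CO2
# is one unit of reef damage "worth" on the margin?
weights = {"co2_per_kwh": 1.0, "reef_damage": 50.0}

def score(impacts: dict) -> float:
    """Weighted sum the optimizer would minimize."""
    return sum(weights[k] * v for k, v in impacts.items())

best = min(options, key=lambda name: score(options[name]))
print(best)  # the winner depends entirely on the chosen weights
```

Change the weights and the winner changes, which is the point: the optimizer cannot supply them.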

Christian Lazopoulos:

AI will give you, or at least can potentially give you, all the necessary data or scenarios in order to make an educated judgment, ethical or unethical, that's a completely different point, but a judgment, as you said. I'm not waiting for, and hopefully it won't be, an AI overlord that simply rules us, however benevolent or not it may be. All I'm saying is that at least we'll be able to connect dots that so far we haven't been able to connect. It won't be complete; it's never going to be 100%, there's no perfect in this world, but hopefully it will be a huge step or jump ahead, because we need it. At the moment, I think, we're trying to fix things based on a very imperfect framework. We need to improve that framework; otherwise it's very inefficient, and it's a shame. All this effort, all this money, all this brainpower: if you could make the whole thing even 10% more efficient, it would be a tremendous boon for our wellbeing, as a society, as a planet, as everything.

Johannes Castner:

So when you talk about efficiency, you mean with respect to the environment?

Christian Lazopoulos:

With respect to the environment, with respect to society, with respect to everything. In terms of the S, for example: imagine startups that fall under medical purposes. I truly believe that all the knowledge is already out there, somewhere around the planet, online, for a lot of ailments we could find a cure for; we just haven't connected it. That's again where I say AI will help, the same exact thing as before, but based on medical knowledge. There's so much research going on, and so much old research that we've forgotten about and are not taking into consideration. In the field of medicine, and in a lot of fields, we really do re-discover things because we forgot they had already been discovered and archived somewhere, inefficiently. It's true. And the more we connect things, the more we're going to find: oh, actually, by taking this part of that research and those parts from this other dozen studies, we can cure this cancer. It was all there; it's just that no one connected the dots. This is where we can advance in leaps and bounds, just by taking advantage of what we have already amassed.
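A classic concrete version of this dot-connecting is Swanson's "ABC" literature-based discovery, which famously linked fish oil to Raynaud's syndrome via the shared intermediate of blood viscosity; the toy "papers" below are invented, and the sketch assumes plain Python:

```python
# Swanson-style "ABC" discovery: if papers link A with B, and other papers
# link B with C, but no paper links A with C directly, A-C is a candidate
# hidden connection. The toy "papers" are invented for illustration.
from itertools import combinations
from collections import defaultdict

papers = [
    {"fish oil", "blood viscosity"},
    {"blood viscosity", "raynaud's syndrome"},
    {"magnesium", "migraine"},
    {"stress", "migraine"},
]

linked = defaultdict(set)
for terms in papers:
    for a, b in combinations(terms, 2):
        linked[a].add(b)
        linked[b].add(a)

# Candidate A-C pairs: share a bridge term B but are never co-mentioned.
for a in linked:
    for b in linked[a]:
        for c in linked[b]:
            if c != a and c not in linked[a] and a < c:
                print(f"{a} --[{b}]--> {c}")
# prints: fish oil --[blood viscosity]--> raynaud's syndrome, among others
```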

Johannes Castner:

It kind of relates to the question of who's going to read all the work we've produced, right? Humans might not be able to read it all and make sense of it. If you're a doctor somewhere in New York, you might not know what research has been done in Mumbai or somewhere else. And you're saying AI basically has this capability of searching through this massive space of, let's say, PhD dissertations and work that has been done all over the world and is just sitting there. That's actually a really good point; I find it really interesting. Before we go, within this conversation, I also want to make sure I ask you about the specifics of a few projects you have done, or are doing right now, at the Newtons Laboratory. What is that like? What sort of work do you do, and could you give us a couple of examples of actual practical problems you're working on?

Christian Lazopoulos:

Of course. First of all, you have to understand that the Newtons Laboratory is an advertising agency at heart, which makes it very interesting: we have a huge spectrum of clients across all the categories you can imagine, and that is both great and not so great, in the sense that you can go in any direction possible. My department, we call it Business Creativity and Innovation, has a dual role: to consult clients on innovation, and I'll define exactly how we mean innovation in a second, and also to help the agency itself evolve with and through innovation. Now, innovation doesn't necessarily mean technology. It has a tremendous amount to do with technology, but it could just be applying a new model of thinking, identifying inefficiencies and taking them out of the equation. It doesn't always have to be about technology. We use a lot of technology ourselves as tools for coming up with ideas, and sometimes for implementation with clients and internally, but it's not a technology department in that sense. And I keep stressing that because it makes a tremendous difference. Sometimes you present just thinking, an idea, and people go: yeah, but where's the high tech? Where's the AI? It doesn't always have to be there, as long as you can make things better or bring in something new, that old golden nugget called added value. Let me give you the simplest example. The national lottery here is a client. It's a very broad organization, not just the lottery; it has the full spectrum of gaming and gambling, et cetera, but it's also one of the main contributors to society when it comes to giving back, because by law a big percentage of its income goes back to society: to charity, to infrastructure, to things like that. On a day-to-day basis we can think up a simple addition to their application that promotes loyalty just by including a small timer, because they have a lot of actual physical shops. The application is beacon-activated, so the system knows when you're in the shop and when you're not, and it can run a timer: say, for argument's sake, every seven minutes the timer hits zero and gives you a small incentive or a small loyalty bonus. If you're playing for something, it gives you an extra 50 cents to play, or an extra participation in some of the games, et cetera. How does that help? If you were about to leave and you see that your timer is only a minute and a half from zero, you'll remain in the shop a minute and a half longer, and chances are you might play more. Incrementally, that is a revenue boost for the organization: even if a small percentage of the people who would otherwise have left during that time frame play just 0.10 or 0.20 more, it translates into a lot of money annually.
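A hypothetical reconstruction of the dwell-timer mechanic, sketched in Python; the class, method names, and constants are invented for illustration and are not the agency's actual implementation:

```python
# Hypothetical sketch of the beacon dwell-timer loyalty mechanic.
import time

REWARD_INTERVAL_S = 7 * 60   # "every seven minutes" from the example
REWARD_CENTS = 50            # "an extra 50 cents to play"

class DwellTimer:
    def __init__(self):
        self.entered_at = None
        self.rewards_granted = 0

    def on_beacon_enter(self):   # phone detects the in-shop beacon
        self.entered_at = time.monotonic()
        self.rewards_granted = 0

    def on_beacon_exit(self):
        self.entered_at = None

    def seconds_to_next_reward(self) -> float:
        """Countdown shown in the app's UI; this is the nudge to stay."""
        if self.entered_at is None:
            return float("inf")
        elapsed = time.monotonic() - self.entered_at
        return REWARD_INTERVAL_S - (elapsed % REWARD_INTERVAL_S)

    def poll(self):
        """Call periodically; grants one bonus per full interval in-shop."""
        if self.entered_at is None:
            return
        earned = int((time.monotonic() - self.entered_at) // REWARD_INTERVAL_S)
        while self.rewards_granted < earned:
            self.rewards_granted += 1
            print(f"+{REWARD_CENTS} cents loyalty credit")
```

Surfacing seconds_to_next_reward in the app is what creates the "a minute and a half from zero" nudge Christian describes.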

Johannes Castner:

So you're using a kind of behavioral science, behavioral design?

Christian Lazopoulos:

It's all behavioral. I should have said something at the start, when you mentioned my background, because my background is behavioral sciences and psychology. Let me put it this way: I don't think there is any project that doesn't have a target audience at the end of the day. Whether it's a consumer, the client themselves, or a general external audience, there is always a human target audience that whatever you're doing, whichever project or idea you have, must satisfy, must somehow make their lives better. Is it going to be through more entertainment? Through more rewards? Through better health? At the end of the day, there's always a human target audience. For me, probably because that was my original starting point and I cannot escape it, there's always a behavioral aspect to things. Otherwise I cannot function; I don't know where to start my thinking from.

Johannes Castner:

Are you familiar with Shoshana Zuboff's work on surveillance capitalism? You could see it as somewhat of an abuse of behavioral science, or of nudge theory. For example, what Facebook has done, which has in a way destroyed parts of our consciousness, or part of our focus: it distracted us and even made us angry. If you look at the research that has been leaked, the so-called Facebook Files, it's quite interesting. But this seems to have actually started with Google, so I don't want to give them an easy out either. Essentially, they weren't making any money, the investors were knocking at their door asking what they were going to do for revenue, and they realized they had all of this behavioral data sitting around from people searching things. So they monetized it, and they sold it to other companies as well, for example predictive data: what are you going to do next, what is going to make you click on which ad, and so on. And it turns out that suddenly you have lost complete control of your data and of your digital life because of this kind of event. What is your thinking there? And in terms of ethics, do you address that in your pamphlet as well?

Christian Lazopoulos:

Well, the AI ethics, uh, no, it's much more focused on, uh, um, the specific, uh, you know, uh, technology. But, um, like, you know, with like any tool or any application of any science or discipline that has some science behind it, you can abuse it. Or not, uh, depends on the people behind it. Right. So, um, we, we had, you know, a lot of, uh, people in the medical field that would, you know, for their own understanding and progress in their minds would do atrocities to, to further their, uh, you know, what they thought would benefit others. So it's not necessarily just the technology and data that this is human nature. Um, in that sense, that's why you need a very strict. Uh, legal frameworks, you need all the checks in place, so abuses are not happening, or at least when they happen, to be fast enough to dynamically upgrade, shift, change, whatever, you know, evolve all these, uh, all these, um, systems in place that would protect. People, right? The problem is, um, with data and, um, whether you lose your data or not the control of your data or not, it is, uh, Lee has become legal in the sense of it's regional. So it's very completely different to the U. S. As you are aware, versus the Europe. Um, some might say, however, in Europe, GDP are so strict that it is not, it goes to the other side of not being. Very productive. There isn't one version of the truth in that sense, but there's definitely a whole conversation about your digital self, obviously. When I was, uh, uh, years ago, uh, in Cognizant, the big technology company, we used to, uh, um, call our digital services CodeHalos. So, all of the things that make up for your digital footprint, we used to call that CodeHalo. And we actually were trying to find ways of, first of all, um, Protecting it, but equally finding ways and business models of, uh, a lot of, uh, big companies that wanted to evolve into becoming digital, uh, yellow, um, gatekeepers. So hopefully I'm not, uh, I won't say the name because I'm not sure whether it's classified or not a huge technology company that is, um, that started from and still is heavily involved in telecommunications. Their 50 year plan is to completely abandon telecommunications and become digital self gatekeepers. So, being the ones that will say you're a digital self, all of the, everything that your data, uh, all data about you are protected by us. So, um, and this is a business model that a lot of big companies, including Google, including Meta are looking into because, um, I mean, it makes sense for that because they started the problem to begin with a lot of them, but others that haven't started the problem, they realized how easy it is. And we'll be to completely, um, uh, have people in there, you know, um, um, prisoners of the data. If that's the case, it's happening all over the every day. Whether it's scams or phishing or whether it's data host, uh, being held as hostage, right? And it happens with individuals right now of high worth because that's where the money is in the sense or big organizations. I'm sure you've heard a lot of, uh, examples of, uh, Universities or hospitals that have been locked out of their systems for ransom. And the equivalent of that being done on a personal basis is what everybody is afraid of because once everything is going to be completely online. 
We're still not there yet, but it will be the case, including biometrics, and then it's clear what dangers that might leave us open to. But I know a lot of big companies and a lot of governments are actually looking into how we make sure we're safe from that, the same way that organizations and structures keep us safe in the physical world. So it is something there's a lot of investment in right now. And of course it overlaps with things like cybersecurity, but it's going to go well beyond that.

Johannes Castner:

But in some cases, in most cases I would say, it isn't really purposeful, right? I don't think Facebook was thinking, let's drive teenage girls to suicide with our products, or let's increase racism or something like that. These are things that just happened. Take the increase in racism: it's been documented that every sixth racist on Facebook was brought to that racist position through Facebook. They weren't racists before, but they were made racist in the process of being on Facebook. There's a lot of different research showing all of the problems that happen with Facebook, but it isn't that Facebook was out to do that. They wanted to make money, right? It's this business model of maximizing attention that lets these sorts of things happen inadvertently. It's not that the products are purposefully designed to destroy society; it happens as a kind of "oops" accident.

Christian Lazopoulos:

It does. You're right. But my counterpoint is that it's just more efficient than before; it keeps happening. Going back to the Romantics: a lot of women committed suicide based on the literature they were reading, but with much less of an effect, because only a specific part of society, the part that could actually read, women of a certain class at the time, had access to that kind of literature. Fast forward to the early-to-mid eighties, I believe, and the perfect-body syndrome: showing all these perfect blondes in advertising and so on, a standard that not even five percent of the population could live up to, so that all of a sudden all these teenage girls thought they were the weird ones, the defective ones, because they were bombarded by all these covers and all this content on television. Again, much more mass than the previous example. But now, with the advent of the internet and everyone being connected to everything, these problems, which as you say I don't think are on purpose, just happen because we don't put enough effort into filtering things and thinking about them, because of profit-seeking. However, they're much more prevalent now, because more and more people have access to content like that, faster. Specifically with racism, it's a very interesting point, though, because Facebook and...

Johannes Castner:

But there's also a personalization element to it, right? At first you don't see the racist content; you see something that maybe hints in that direction in a very subtle way, and it gets ramped up, so you get more and more overtly racist content, and it slowly walks you into that position. And I think it happens because of the maximization of attention, but also because of the metric, right? It's interesting that you mention metrics and KPIs, because if you're optimizing the wrong target, sometimes it leads to disasters that are completely outside of your comprehension, that you wouldn't have been able to foresee. So I think maximizing this metric of attention leads to more racism, because racism actually garners attention: if a racist walks into a bar and says something racist, everybody in the bar will suddenly look at that person.
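A toy sketch, in Python, of that wrong-target point: the same pool of candidate items ranked by two different KPIs. The weights, the "sensationalism" attribute, and the wellbeing score are all invented purely for illustration; this is not a model of any real platform's ranking.

import random

random.seed(42)

# Each candidate item has an intrinsic quality and a sensationalism score in [0, 1].
items = [{"quality": random.random(), "sensationalism": random.random()}
         for _ in range(1000)]

def engagement(item):
    # Assumed behavior: sensational content grabs attention, so it contributes
    # more to clicks and dwell time than quality does.
    return 0.4 * item["quality"] + 0.6 * item["sensationalism"]

def wellbeing(item):
    # A different KPI: what (hypothetically) leaves the reader better off.
    return item["quality"] - 0.5 * item["sensationalism"]

top_by_engagement = sorted(items, key=engagement, reverse=True)[:10]
top_by_wellbeing = sorted(items, key=wellbeing, reverse=True)[:10]

def avg(feed, key):
    return sum(item[key] for item in feed) / len(feed)

# Same candidates, different target: the engagement-ranked feed is saturated
# with sensational items, while the wellbeing-ranked feed is not.
print("engagement feed, avg sensationalism:",
      round(avg(top_by_engagement, "sensationalism"), 2))
print("wellbeing feed, avg sensationalism:",
      round(avg(top_by_wellbeing, "sensationalism"), 2))

Nothing in the sketch chooses sensational content on purpose; the skew falls straight out of which number gets maximized, which is the point being made about KPIs.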

Christian Lazopoulos:

Right, it's more sensationalistic, basically. It's like the tabloid rags of old, the ones that were full of scandals and lies. It's the same thing: it's addictive because it's very sensationalistic. The thing is, we are inherently biased, and racism is inherent bias gone off the charts, right? The algorithm, because it wasn't well thought out, because of the wrong KPIs, accentuates that. You're interested in certain things, so it shows you more of them, and little by little it, dare I say the word, brainwashes you, and because of all this content you think that's what the world is, and everybody wants to be part of that world. That's the problem for me. We had this discussion years ago at a LinkedIn event in London, when Facebook was still called Facebook and not Meta, about the attention economy, as we used to call it. That's the main KPI. We said, listen, this is not a good KPI. And to be perfectly fair, it was chosen because it's the most directly linked to profits, right? However, if you chose a KPI of what makes people happier, it might take longer to turn into profits, but it would sustain your profitability, and your audience, a lot longer. And going back to what I was saying, maybe it's not a startup, maybe it's an established organization, I don't know, but hopefully someone can find a way to showcase that this is how you can make money while making people happy at the same time. I don't think anyone would refuse to use that KPI if you showed them the business model behind it. I don't think people are inherently bad, or so greedy that it turns into a lot of badness; it's a matter, again, of the framework. And, behaviorally speaking, of the abuse of behavioral science, right? It goes back to ethics again. If I gave you a magic key, a USB stick, and said, the minute you plug this into your laptop, there's code that will affect all of the major social media platforms and news aggregators and will brainwash everyone, will affect them behaviorally to be better: would you use it? It's still behavioral.

Johannes Castner:

I would really want to know about the KPIs. What does it mean to be better? That's...

Christian Lazopoulos:

Exactly. Well, okay, let's say that we're being benign: it would give you a fifty percent increase in us being better to our fellow human beings, you know? Would you use it? Because it's still going to be behavioral, you know, alteration...

Johannes Castner:

Manipulation, manipulation. Yeah. But you see, manipulation is another one of those terms that is very interesting, right? Because even language itself, "please give me a banana" or something like that, is immediately a type of manipulation. You're asking someone to give you something on the other side of the room, say, and that's a form of manipulating them into giving you what you're asking for. So any kind of language has that manipulative tendency; it's part of what we do, part of communication. What I think is really important is this difference: will the communicative act benefit you? Is it a hostile or a benevolent act of communication? If it's manipulating you to your own benefit, or at least with that intention, then I think it's a different quality of manipulation than doing something covertly simply to get someone to buy more, say, cheeseburgers. I think that's a problem, because cheeseburgers are not good for you, and you shouldn't be nudged to eat more of them. If something manipulates you into eating more of them, I think that's an issue.

Christian Lazopoulos:

Yeah, definitely. I mean, my favorite joke when it came to psychology, back when I was still at uni, was: how does a Greek mom change the light bulb? She just sits in the corner and says, "Oh, don't mind me, I'll stay here in the dark, don't worry about me," and the kids change the light bulb. That's manipulation, right? At the end of the day, when you're a parent, you realize how manipulative you get to be with kids in order to...

Johannes Castner:

Oh, and they learn to be manipulative very quickly too. Yes.

Christian Lazopoulos:

With the parents, you know. And there's this big realization when you become a parent that it's okay to lie a lot of the time, because there is benevolence behind the lie, in most cases, right? But sometimes, especially young parents, they're so tired, so up to their ears, that they'll lie just to make things easier for themselves as well. So it's human nature. It's just about hopefully staying one step ahead of your bad drives and your bad motives, because I think everybody, in ninety percent of cases, knows what's good and bad, knows what's right or wrong. There are very few things that are so ambivalent that...

Johannes Castner:

I don't know. I mean, if you go into the area of sexual behavior among young people, right, you have a large variety of opinions there, going from, let's say, the leaders of Iran, who have a very specific opinion on what qualifies as legitimate sexual behavior, to people in San Francisco, who have a rather different opinion of it. I'm with the people of San Francisco, in the sense that I think we should have complete freedom over our own sexual behavior, the same way I think we should have complete freedom as to what we put into our own bodies. But I do not think we should have... you see, that's my view on this, right? But you can disagree with it.

Christian Lazopoulos:

I mean, I'm with you in that sense, and obviously I don't think like a religious leader of Iran or something like that. But in your heart, for yourself and for how you might do something to someone else, I think in most cases we know whether it's right or wrong. Now, of course, you are the outcome of your surroundings and your context. So it could be that I truly believe that by burning someone because they had sex before marriage, I'm doing the right thing. But I don't think even zealots, at some point, go against better, universal human judgment. Then again, there's no way of really testing that; this is, again, a philosophical discussion.

Johannes Castner:

But when you build AI and release it on a global level, and you were alluding to this earlier, there can be some really contentious issues. For example, your experience of something could be different for people who identify as different genders, or something like that. I can't think of anything in particular right now, but...

Christian Lazopoulos:

In that case, bias in AI has been detected, racial bias for instance, right? Just today I was reading an article about a collective of artists who wanted to showcase that Midjourney is very biased in its output towards people of color, specifically Black people. Listen, at the end of the day, at the moment, any tool, any creation that is not self-aware, going back to the original question of the discussion, is going to be only as good as its creator. And this is something that has been understood forever. Remember I mentioned Talos, the robot of ancient times: it was written back then that Talos was very susceptible to the female nude form, because Hephaestus, the god who made him, was susceptible to it. Talos was his creation, so he had the same biases and the same weaknesses as his maker. So even back then, people realized that we sometimes cannot escape who and what we are because of the people who made us what we are, right? And that goes back to parents, society, culture, all of these things. So AI still operates under the same constraints, biases, et cetera, until the point where it can evolve itself.

Johannes Castner:

Yeah. So I have a thought along those lines, sorry to interrupt you. If we build it together, if we really drive this democratization thing forward, to the extreme in a way, then we can involve the judgment of many different types of people: how they want to be treated by the AI, how they want to interact with it, and how they might want to be affected by it. And of course this is a very individualistic point of view, which is already dicey, because maybe in some places it should be much more about community than about what you as an individual get to define. But being an individualist, I think we should let people really interact with the AI and actually direct its ethical direction, because I think no ethical school of thought is really complete, and people should have a right to demand that their own ethic is applied when they're being affected by the algorithm. I think there's something to that. What are your thoughts on it?

Christian Lazopoulos:

If it can be done with equal access, it would be great. And I may be simplifying rather than making it very practical, but then all the data points would be more democratic, in the sense that everybody influences the outcome. Say there were a huge, brilliant AI that had access to every single one of us and understood us, and could see what makes us tick, what's good for us, what's bad for us. Because a lot of people don't only ask for what's good for them; they can't help but ask for things that are bad for them. There are roughly eight billion of us on the planet at this point, right? If that imagined AI had access to every single one of us somehow and got that input, I think it would be a brilliant thing, and that's where my optimism comes in. I think if all things are equal and you can get as close to objective as possible, I don't believe anyone would choose to be bad, to be perfectly honest. That individuality, that looking out for myself only, that self-preservation that goes bad a lot of the time, gets nullified by the numbers, eight billion, right? And if something can understand what's universally good, I don't think there's an option to then be bad. The only problem with that logic, and I recognize it in myself, is: if such a big entity rises, where will it draw the line of what's good or bad? Because if it draws the line at us, great, I'm very optimistic about our future and our coexistence with it. If it includes other species, animals, the environment, as having the same rights of existence as us, we might be in trouble, because we're not very good towards other species, or ourselves, or the environment. That's what everybody is afraid of, at the end of the day. However, again, I'm very optimistic, because I think that if there is a system that at some point has total access, it will find a way to help us become better. Why not? I don't presume to understand the thinking or the motives of such a future thing; it may be a completely different consciousness to ours, with a different way of thinking. But because it will at some point be created by us and still have a connection to us, I'm optimistic about how it can benefit us in the end, by seeing the bigger picture. Because in the end, the theme of this whole conversation is the bigger picture, really connecting all the dots. So I'm mindfully, carefully optimistic.

Johannes Castner:

What would you like the audience to take away from this conversation? Are there some big points you would like them to hold on to?

Christian Lazopoulos:

Don't be afraid to participate. For the last few years, technology has made life easier and more convenient, and that's great; everybody's talking about user experience and making things frictionless and smooth. And to be fair, with the touch of a button, with a seamless journey, you can enjoy many things that you couldn't even a few years back, right? But don't let that lull you. Don't let that put you in a sort of paternalistic relationship with technology, the mindset of "don't worry, everything will be taken care of by that platform, that brand, that application, that system." Because of everything we've been discussing, you would then not be putting your full influence into what things will become in the future, and instead of controlling the things that happen to you, you'd just be allowing things to happen to you, and you might not like them. So I would say: instead of seeing this just as an easier life, look at it as an easier life in terms of having more time to think and better yourself, rather than just enjoying things and going with the flow. Because historically, whenever societies had some level of time to think and progress, that's when leaps and bounds have happened, and I'm not talking just about political ones: technological, but also political, cultural, et cetera. So don't let technologies just use you, or coddle you. Be more active with what you have, because if life becomes better and easier for you, you have more opportunity to do more with it.

Johannes Castner:

That's great advice, I think. So, Christian, one last thing I want to ask you: how do audience members, and people who want to know more about your work, keep up with you? How do they stay in touch with you and learn about what you do at The Newtons Laboratory, on your own, and in other projects?

Christian Lazopoulos:

Well, I'm quite traditional about it. Definitely thenewtons.gr is where you'll find our output, but if they want a more direct and personal connection, LinkedIn is probably the best way to go about it. That's still where I make most of my interesting new connections, friends, and business partners.

Johannes Castner:

I want to apologize to my audience for not having published this show in recent weeks: I have started a new job at the Kingston Business School, in the Brain's Lab, so I've fallen a bit behind in my editing. I will get some support from my university with the editing, which will put the show back on a weekly schedule in the fall. In the meantime, over the summer, things will be a bit slower. The next time the show is published, hopefully next week, I will be meeting with Naveen Sharma, and we will be speaking about collective intelligence and community intelligence as he understands them, which is in terms of devices rather than human-to-human intelligence.

Naveen Sharma:

So community intelligence is something like this: someone learns something, someone makes some innovation, and then other people, other humans, learn that thing, innovate on it, improve it, and then correct it. That is how you arrive at one of the best solutions for any problem. That is how it works in our society. In the same way, in the technology industry, devices are excellent, software is excellent, they are getting better at artificial intelligence every day. New innovations are happening with the help of AI, new models are there, they are doing good learning, but there is no cooperation, no coordination, not at all. So each and every organization or device, if they want to learn something, it's their own learning. What we are doing in community intelligence, or collective intelligence, is this: if a device is learning something, some AI running on it, the learning is shared with other devices in a mesh, which have their own learning capacity. They can utilize that learning, improve it, use it in their processes, and based on their own learning they can implement new things, and that new learning can come back to the original creator.
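A minimal sketch, in Python, of the kind of shared-learning mesh Naveen describes here, assuming a simple federated-averaging scheme: each device learns on its own data, the learning is shared across the mesh, and the combined result flows back to the original creator. The Device class, the averaging rule, and all numbers are illustrative assumptions, not his actual system.

import numpy as np

rng = np.random.default_rng(0)

class Device:
    """One node in the mesh: it learns locally, then shares what it learned."""
    def __init__(self, n_features):
        self.weights = np.zeros(n_features)  # a simple local linear model

    def learn_locally(self, X, y, lr=0.1, steps=50):
        # Plain gradient descent on squared error, using only this device's data.
        for _ in range(steps):
            grad = X.T @ (X @ self.weights - y) / len(y)
            self.weights -= lr * grad

def share_learning(devices):
    # "The learning is shared with other devices in a mesh": here, every
    # device's weights are averaged, and the result flows back to all of
    # them, including the original creator.
    mesh_average = np.mean([d.weights for d in devices], axis=0)
    for d in devices:
        d.weights = mesh_average.copy()

# Three devices, each seeing a different slice of the same underlying problem.
true_w = np.array([2.0, -1.0])
devices = [Device(2) for _ in range(3)]
for d in devices:
    X = rng.normal(size=(30, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=30)
    d.learn_locally(X, y)

share_learning(devices)
print("shared weights:", devices[0].weights)  # should land close to [2, -1]

No device ever sees another device's raw data in this sketch; only the learned weights travel across the mesh, which is the cooperation-without-centralization idea in the quote.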