Utopias, Dystopias and Today's Technology
The Challenge of Ethical AI: A Virtue Ethics Perspective
February 22, 2023
Johannes Castner

The rapid pace of technological innovation and the slow pace of ethical learning make it challenging to ensure that AI and other technologies align with sound moral values. The powerful tech leaders who design these systems may not fully understand how they will be used globally. Olivia Gambelin, a virtue ethicist, suggests taking a virtue ethics approach so that the moral virtues embedded in AI systems reflect the values of the people who use them. Developing AI inclusively and thoughtfully is crucial to ensuring our technologies align with sound moral principles.

Olivia Gambelin's website: www.oliviagambelin.com
Olivia's TEDx talk: https://youtu.be/H9Esi2kDUsc

Transcript

Johannes Castner:

Hello and welcome to the show. I am Johannes, and I am very happy today to be speaking with Olivia Gambelin, who is the CEO and founder of Ethical Intelligence. The company essentially has two parts: it offers ethics as a service, and Olivia will tell us more about what that is, and it also has an expert network attached to it, which I would probably like to join. I also want to say that I met Olivia through a common friend of ours, Nick Larson, and as we were speaking it became apparent that she should be on the show.

Before we get into the conversation, I want to clarify a few terms that will come up frequently. I will do this via the Stanford Encyclopedia of Philosophy, from which I will read a short paragraph: Virtue ethics is currently one of three major approaches in normative ethics. It may initially be identified as the one that emphasizes the virtues, or moral character, in contrast to the approach that emphasizes duties or rules (deontology) or the one that emphasizes the consequences of actions (consequentialism). Suppose it is obvious that someone in need should be helped. A utilitarian, a type of consequentialist, will point to the fact that the consequences of doing so will maximize well-being; a deontologist to the fact that, in doing so, the agent will be acting in accordance with a moral rule, such as "do unto others as you would be done by"; and a virtue ethicist to the fact that helping the person would be charitable or benevolent. This is not to say that only virtue ethicists attend to virtues, any more than it is to say that only consequentialists attend to consequences or only deontologists to rules. Each of these approaches can make room for virtues, consequences, and rules. We will see this in the course of the conversation, when Olivia Gambelin will often refer to outcomes, which is a consequentialist idea, and that confuses me at some point; you will see that as well. I often use the Stanford Encyclopedia of Philosophy as a quick reference when I am not sure about a term or when I want to understand a particular approach. Of course, that is not to say it is a substitute for the books, for the Western canon, or for any of the canons in philosophy. There is also something in between those two, a quick reference versus reading all the books, and that is to read Bertrand Russell's A History of Western Philosophy. It is a highly recommended book, recommended by Noam Chomsky and many other intellectuals, and it has made its own contribution to philosophy by outlining the philosophy of Bertrand Russell, who contrasts his own views with those of the historical philosophers of the Western canon.

I also want to say that I find a quote on your website quite thought-provoking. It says that good technology does not take advantage of our human nature; it is tech that helps us embrace the nature of being human. So I want to start the show right away by asking you: what do you mean by human nature? This is a hotly debated issue, whether it exists at all. From Noam Chomsky we know that there is a certain limit to the range of possible human experiences in language, and probably also in ethical thinking, and therefore there must be something to it. But what it is, and where it lies, seems to be quite contentious. So please give us a little bit of an introduction to what you mean by human nature, and how it applies in technology.

Olivia Gambelin:

Absolutely. And thank you for having me today, Johannes. It's great to be here. To unpack what I mean by human nature, the best approach is actually to start with artificial intelligence, which sounds like a funny place to start. Artificial intelligence is based on our understanding of what, to me, is a limited scope of human intelligence: how we process analytically. But that is not our only way of processing as humans. We also have emotional inputs and emotional reasoning. We have spatial logic and spatial reasoning. We have different types of intelligence. And one of the types of intelligence we have as people is actually moral, ethical intelligence. This is our ability to sense and feel, well, both emotions, but also to sense morals: the difference between right and wrong, good and bad. Our intelligence in being able to understand this dichotomy that exists in life is essentially our ethical intelligence (not actually to quote my company name; that's a little bit different). But to bring it back to your question, what I mean by human nature is the combination, the group, of all of our different types of intelligence. It incorporates our ability to logically reason, our analytical processing; it incorporates our emotional understanding; it incorporates those existential questions we have as humans of what the heck are we doing here. It covers the whole gamut. We as humans are messy. Yes.

Johannes Castner:

But all of them? So, all of us? You just said, for example, especially what is right or wrong, or what is the good life. Those questions don't have a natural answer to them, do they?

Olivia Gambelin:

No. I mean, yes and no. What is a good life? A good life, and what I would define a good life as, is a life full of purpose and fulfillment. What that means for each individual person is different. That's what I find very beautiful about it: there is a universal truth in the sense that if I find my life fulfilling, then it is a good life for me. But what I find fulfilling in life is going to be very different from what you find fulfilling in life. We're going to have very different good lives, but we do all have the potential to have a good life. So it allows for adaptation and flexibility from person to person. But we are all still in this crazy thing called life together, trying to figure out: what is my purpose, and how do I fulfill it?

Johannes Castner:

So that's fascinating to me, because here we have virtue ethics, which to me means, or strikes me as meaning, that there is something virtuous that we have in common, something beyond the individual. But with your answer to that last question, you now strike me as a pretty radical individualist at the same time. Is that something you can reconcile? The very question of what is virtuous seems to me to be defined in a social way, socially constructed, in other words.

Olivia Gambelin:

There are definitely social impacts, social influence; that's for sure, and not something to be ignored. It depends on where the individual is and what kind of society they are in, and that affects what our individual needs and fulfillment are. So you have that type of feedback coming in. But the interesting thing for me is the virtues across different cultures, across different societies. We do see patterns in what kinds of virtues are praised and what kinds of virtues will leave you feeling good at the end of the day, going back to the emotion in that sense. For example, take the virtue of honesty. There is a right time and a right place to be honest, and there are right degrees of being honest, but generally, across the board, across different societies, you see honesty held up as something we uphold. So there is that.

Johannes Castner:

Well, I want to push back on this a little bit, because I think there are some cultures I have come across that seem to think that if you are honest, you're a sucker. That is another type of ethic, one that says you ought to just get what you can. You could think of it as an ethic, though it seems kind of unethical to me immediately. But there are communities that hold this belief that you should not be a sucker, and that therefore you sometimes have to lie or be dishonest, and that is part of their ethical system in a way. Is that supportable? Is it reasonable enough to be called an ethic, or a virtue?

Olivia Gambelin:

I wouldn't necessarily call that a virtue, but it is an understanding; it is an ethical lens, let's say a framework, that people work through. A lot of times philosophers will put forward the idea of little white lies: where do those lie? If you are more of a deontological thinker, you're going to say, well, that's a lie, done, that's bad, you're being dishonest. In utilitarian terms, okay, depending on how it comes out at the end of the day, you could tell that lie, and so on and so forth.

Johannes Castner:

It's very dangerous. Yeah. Utilitarianism is basically: well, if the lie serves the greater good for the greater number of people, then we're good! I can always say that.

Olivia Gambelin:

Exactly. Yeah. Versus with virtue ethics, which is something I think is really fascinating, it all exists on a scale. So you've got, basically, think of absolute candor, kind of like the Dutch, where they'll just say whatever is on their mind; that's extreme honesty on one side. But then you have the other end, where people just tell lies, they hide the truth, it's shady. Virtue ethics puts everyone on that scale; you're in between these two poles, which are essentially vices if taken to the extreme. What you're trying to do as an individual, with, again, the feedback and influence of the society around you, is figure out where you need to be on that scale. So, what is the right amount of honesty for me in this scenario? It doesn't rule out the possibility that, okay, I could tell a white lie in this moment because of X, Y, and Z influences outside. Not because it justifies the means to the end or the ends to the means or anything in between, but looking at me right now: I can tell a little white lie to my friend, that no, we didn't plan a surprise birthday party for you, when we did plan a surprise birthday party. That's an okay, that's a good white lie in that moment, because that's the right amount of honesty I'm telling in that moment. Oh, and the interesting thing, which I know we could go into too, is how that can snowball.

Johannes Castner:

It's just a slippery slope. Is Santa Claus real? Should you tell your children, and what about those sorts of stories? And even if you know that, for example, some religion is mostly made up, should we actually talk about that? Or is that an infringement on someone's religion, how to say, is it rude? It's a really interesting slippery slope, though, and it seems to me that there is great variation in what the right answer to this question is. And when we build software that is made to scale, the danger is that you are forced, in a way, to treat everyone the same way, which is also questionable, whether you should do that, because there are different demands. So maybe services should be much more tailored to local conditions, which is something a recent guest on the show told me about. So it's really kind of an interesting dilemma, if you will, right? Because when we build software, we endow it with some sort of ethical system that somebody has to decide on. What is the virtue that we want this software to portray, and where does it come from? It can become very paternalistic, I think; that is a danger. What do you think of that?

Olivia Gambelin:

Yeah, there's some fear as well. It's a smaller line of conversation, but there is a line of conversation in the responsible and ethical AI space about the fear that, because Western cultures are so engaged in this topic, and essentially all the frameworks and regulations are coming out of Western cultures, we are embedding Western ethics and Western values over and above those of any other region. And, you know, there is some truth to that. The interesting part is that the scale factor puts us into a whole new philosophical thought experiment, if you want to call it that. We were never necessarily meant to have this level of scale, so we are facing these new questions of: what do we do when we're at this scale? In the past, our level of impact was confined to our society, the people around us. In that case it was fine to have the same value system, because we all had the same value system; we were in the same society.

Johannes Castner:

Well, if we're not segregated, it can become a very local point of contention. If you have a very non-segregated society, where many different cultures come into contact in the same place, like London, for example, then you don't have a homogeneous idea of what is virtuous. It can be radically different just across the street from where I'm sitting.

Olivia Gambelin:

Yeah. Which is a great example; it's like London at scale all of a sudden, and we're trying to design technologies so that they fit all of these different understandings of ethics and society and values. And that does become tricky. This is where I like to fall back on the idea of universal truths and universal values that we are trying to pursue. You see these starting to arise a little bit in regulation, like the universal understanding of the need for transparency in our systems, so that we understand what's happening in them, or our universal need for fair systems. How that looks in each society is different, which is where part of the problem of translation happens, but we do have those universal values that we're all chasing. When it comes to the change in societies, this is a question that still hasn't been solved, but I'm very fascinated by this idea of agency for societies and cultures, where there is an ability to adapt the technology to the actual culture instead of the culture having to adapt to the technology. A great example here is a company called BlaBlaCar. They do ride sharing, kind of like long-distance Uber. I know they're going to hate that I say this; I have a good friend who works at BlaBlaCar and he hates when I describe it that way. But it's app-based, and in the majority of European countries you pay in the app, just like you would for an Uber: you pay BlaBlaCar in the app. Except in Germany. In Germany they had to switch over to a subscription model for access, because people in Germany liked to pay by cash; they were paying for these rides in cash. I think this is a fantastic example, because instead of going in and saying, no, you're going to pay in the app because this is what we've decided and this is how you have to adapt to us, BlaBlaCar went: okay, you're in Germany, we get it, this is how ride sharing works culturally here, so cool, we'll adapt our technology to fit how you already function.

Johannes Castner:

So BlaBlaCar is what it's called. Yeah: BlaBlaCar. It seems like a simple approach; it seems like it shouldn't be that farfetched, but that's often not the case. When you have these apps, they are just sort of one-size-fits-all, and local feedback is discounted to zero as long as the revenue comes in, right? It's often very revenue-based.

Olivia Gambelin:

But the interesting thing there, though, is that that is a missed opportunity if they're not adapting. For example, back to BlaBlaCar: if they hadn't adapted to the German system, their use in Germany would be cut by at least 50%, because people would just say, it's fine, I'll find a different way to do a long-distance car ride and I'll pay in cash, because I prefer that to paying through the app. So it's about adapting to those local ecosystems within reason; as you say, when you're building software you don't listen to all the feature suggestions, otherwise you'd have this monster. But if you adapt within reason to the local culture, even just at the continental or country level, then that actually opens up new potential in terms of revenue. It makes your business, your product, absolutely more attractive, because it's not so forced.

Johannes Castner:

Yeah, absolutely. I think in general, if you're solving people's problems and you are reasonably sure not to hurt them in other ways, you will do better; you will make more profit in the long run. This is what we're seeing with companies that have, I want to say, a lot of hubris in just pursuing their agenda: at some point it stops paying off. So then virtue ethics can be adapted to local conditions, and it's not necessarily a universal virtue. But how far can that go? There are some virtues in the world that I find very problematic, around virginity, for example; you know where that can go. There are these virtues that I find very disagreeable, and why do I find them disagreeable, and why don't they find them disagreeable, and what can we do about these sorts of things, where we have somewhat of a, I don't want to use the term clash of civilizations, because I think that's an overhyped idea, and I don't think we're clashing as much as that idea would have us clashing. But in some areas we do have some real contentions, real differences, in what we would consider virtuous, in what it is to be right. And at some point I would say that a responsible, ethical technology provider will draw a line in the sand and say, okay, your virtues are not working for me. This is a very tricky and very difficult can of worms to open, I realize, but I think we have to open all cans of worms.

Olivia Gambelin:

Yeah, absolutely. Let's look, for example, at women's rights. We've got the protests happening in Iran right now, and based on the regime's, let's say, ethical framework, women do not have the same rights as men. Now, let me introduce something called moral maturity. This is the concept that over time we as humans learn about our values; we learn about what we should value and what our virtues should be. It is a process that takes place over decades and centuries. Essentially it's: I have tried this virtue, or I have tried these actions, I don't like the outcome, therefore I need to change my actions to align better with something that I want or feel. And so I mature in my understanding of my morality, but...

Johannes Castner:

The problem is that some people do seem to like those outcomes, and then they push back, and they want to go back to outcomes that we have generally, thoroughly rejected. You see this in America, and Iran is still a strong example of this: they had a much more market-oriented, much freer society, and then they went back to these other virtues, and I would say a large part of the population in Iran feels strongly behind those other virtues. And I think something similar is actually going on in the United States as well.

Olivia Gambelin:

So we've got maturity in terms of the individual, but we also have maturity in terms of society in general and how we build on it, as in the case of Iran. What we've done internationally, on a global scale, is come to an agreement on human rights. What we call human rights are values that we've agreed, on a global scale, need to be respected. And in the case of Iran, how they're treating women is against the women's human rights, which is the problem there, where we can actually say, and I can say as an ethicist: no, that is not right. It doesn't matter what your ethical lens is; that is not right.

Johannes Castner:

So human rights, to let our listeners in on this, is also a very specific ethic. It's another approach, as opposed to utilitarianism or virtue ethics, and I'm not sure how it fits in. How do human rights fit into virtue ethics as a system?

Olivia Gambelin:

Yeah. Let me explain it this way. If I were working with someone as an ethicist, working with them on their system, and we were talking about human rights, I would use human rights as the baseline: you need to fulfill these human rights, otherwise we cannot enter into any discussion beyond this; we cannot look at any other virtues beyond this. Essentially, what it means, and how we also incorporate regulation when it comes to working in ethics, is that we're establishing baselines: this is the least that you have to do. If you are doing this, you're one step above illegal; great. It doesn't mean that you're actually achieving the ethical standard you want to, but you do need to reach this base level. And human rights are a great base level because they are recognized on a global scale. They have been developed and tested over time: yes, this makes sense, that women and men have the same rights and are equal in society. We've tested that over time; we're still trying to get there, but we've agreed that this is good.

Johannes Castner:

Okay. Then let me ask another related question. It's a bit controversial, because there are a lot of people doing business in Dubai and the United Arab Emirates; I even had a guest on the show recently who works in the United Arab Emirates. Is it even ethical to do business with other groups of people who have a virtue system that is so opposed to the one we have, the one you say is globally agreed to? I don't think that is actually true in the case of the whole Arabian Peninsula: the United Arab Emirates, Qatar, Dubai, Saudi Arabia. They seem to me to have different virtues, and they don't agree to the universal ones we're talking about. So it isn't really quite universal in that sense, to be honest. We have to say that there are some people who don't agree even to human rights, but they are a big force in technology development at the moment. They have a lot of money coming in, traditionally from oil, but they're shifting now; Qatar has a lot of natural gas, for example, and they're doing other things too, a lot of them in the technology space. So how do you feel about that, when we are confronted with them?

Olivia Gambelin:

Yeah, I have to say, it feels similar to the question I get as an ethicist when people ask me, would you ever work with Facebook or Google? Because in responsible tech those are kind of seen as the evil companies, where we say, oh, we don't align with them, we can't work with them. And my answer there is no, it's not wrong to do business, say, with different cultures or companies whose virtues you're not necessarily in agreement with. But you have to know, as an individual going in, where your lines are that you are not going to cross. And that's where it becomes tricky. If you are doing business and something happens where you realize, okay, you've violated this human right, or you've crossed my line, you have to have enough bravery and enough backbone in that moment to say no and push back, which, again, is very difficult. So it's not bad to engage in business; everyone comes at it from a different place. But you are going to need a strong backbone going into that. Otherwise you will compromise your own ethics, your own self, if you do not know where your lines are and you do not have the courage to say no in those instances.

Johannes Castner:

So you're saying that the bottom line, the minimum, should be human rights, and they should be respected by whatever project you're engaged with. You have to be at least conscientious of that as a minimum. And then beyond that, whether you're virtuous or not, if you engage in partnerships with people who have radically different systems of virtue, and also if you're serving customers or clients with radically different systems of virtue, it must become a bit tricky.

Olivia Gambelin:

Yes. Oh yeah, I'm not denying it. It is very, very tricky. That's what I mean by it's not wrong; it's not wrong to engage, but it is going to be very difficult if you actually go in saying, I'm going to stick to my values. So that leaves the question: is the money worth compromising your values? That's really the trade-off you're making.

Johannes Castner:

Absolutely. And then I also want to ask you this, because I feel pretty strongly that when we build AI, for example, or other technologies, one thing we should ask, instead of asking whether it is good, is: how does it shift power in the world? How does it give some people more power, and how does it perhaps even take power away from others? How does it distribute power? What do you think, and how do you think that is related to virtue?

Olivia Gambelin:

Power shifting in terms of virtue? Oof. So, there is a very strong conversation around AI and data itself, and how a lot of power is concentrated in whoever has the data, the insights, the strongest technology. We're seeing tech companies have greater power and influence than governments, which is fascinating. The interesting point in terms of virtues, and let me phrase it this way, is that with whoever is implementing these mass systems that people are following or beholden to, it is the virtue system, the ethical system, of whoever developed the system that is actually being put in place. So in that case the power still rests with whoever has the data, whoever is creating the AI, because they have not only the power of influence, but the power of saying "this is what we will value" on a global scale, not even "this is what I as an individual value."

Johannes Castner:

Yeah. So couldn't you say that that in itself is not virtuous? That it just isn't a virtue that a few individuals, or a few big, how to call them, juggernauts, have all this power, and that it isn't distributed? And that in fact we should, and this is what several of my guests have now proposed, decentralize and engage in a more sharing or collaborative economy, whatever that means. There is a show directly about this particular topic with Gupta. So you could think of this power dynamic, this power distribution, in terms of virtue, as non-virtuous; according to some systems of virtue it must be wrong. But I don't know how that relates to virtue ethics?

Olivia Gambelin:

Yeah. I think now we're actually drifting a little more into political philosophy, but I do agree, in the sense that I am a proponent of the decentralized movement. I do think that having these centralized points of power and control is not healthy. And it's funny, because power is not a virtue; it's not something like, oh, this is something really good that we need to uphold. No, it's something on a different scale; it's not a virtue. But the virtues of the individual in power are what is getting communicated. And there are a lot of questions around whether the values of the people in power right now are actually the values that we want. And, you know, I'm a little bit skeptical that that is true.

Johannes Castner:

But is it only a matter of values? Because to me it's very difficult for Jeff Bezos to imagine, for example, what it is like to grow up with his technology in Kenya, having contact with Amazon in Kenya growing up, something like that, right? It would be impossible for him to understand, even if he were a virtuous person.

Olivia Gambelin:

Yeah.

Johannes Castner:

So it seems it's not just their virtues that are being embodied in this technology, but also their understanding.

Olivia Gambelin:

Yeah, I would actually say that in some of these cases, what is lacking is the virtue of humility, which is a virtue. And it's humility in the sense of recognizing: I do not have that understanding. As Jeff Bezos, I do not have the understanding of a kid growing up in Kenya with access to Amazon. Therefore, I cannot make informed decisions about that person's experience and how I can affect them. It's having the humility to understand that I need to listen first and understand how other people experience both their lives and my technology, and to know that I do not have all of the right answers. I have the right answers for me, my own personal life, and my own understanding.

Johannes Castner:

That seems to me a huge, tall order. They have to listen, but the other objective is that it should be scalable. That seems to be a virtue in San Francisco, where you are.

Olivia Gambelin:

Yes, that's true.

Johannes Castner:

So if you want something to be scalable, that means you want it to serve millions and billions of people and maybe improve their lives. So there is also a utilitarian aspect to this, so to say: can we afford not to do it? Because it might really bring a lot of improvement to people's lives if we take this scalable, big approach and then sort of run over the virtues of others, because we cannot possibly listen to, I don't know, 3 million, 20 million, 40 million people. But we want to serve 40 million people. So how do we bridge this gap?

Olivia Gambelin:

Yeah. Well, the first question I would ask is: is it an improvement according to you, or an improvement according to the group you're trying to serve? That's a huge question. If I, as Olivia, think that I can bring improvement to a young boy living in China, and I can improve his life according to my standards, am I actually improving his life according to him? It's his life. So that's always the question I like to flip on its head: whose improvement are we going by? But no, I don't think this runs counter to scale. What I'm trying to get across is that with that level of scale, our responsibility increases for actually listening, not to 3 million individual people, but to the groups and societies and cultures within those 3 million, to understand how we are actually having an impact. Where you see, okay, this is taking off in a small rural town in China, okay, what is actually the experience of that small town in China as we're taking off? So yes, with greater scale comes greater responsibility, if we're going to play on the Spider-Man quote. When you reach that level of scale, you don't get to come in and say, this is how it's done. You do need to listen to the people you are serving and adjust for how they are being served. I think that's the point again about power and that lack of humility: you don't know what's best for other people; you can help and provide.

Johannes Castner:

At this kind of scale you should really not expect the structure to be very hierarchical; you should actually expect it to be much more distributed, closer to the stakeholders. That is sort of my view on that.

Olivia Gambelin:

Yeah.

Johannes Castner:

But...

Olivia Gambelin:

Which, again, creates better services and products.

Johannes Castner:

But yeah, it requires a lot of thinking, and there are a lot of different angles to go about it that I'm learning about as I go along, so this is very good. But I wanted to ask you one more question here, about measurability and metrics. We live in a world where we expect metrics; we're data scientists, and to some degree we are all data scientists now, and we need to know what we are measuring. So how do we measure virtue? It seems to me that virtue is in conflict with the very idea of measurement.

Olivia Gambelin:

Yeah, it is difficult. You can't measure it; you can't go in saying, I'm the most humble.

Johannes Castner:

Oh my god, yes, that sounds like an oxymoron.

Olivia Gambelin:

Exactly. But there are other ways to measure. There is a great line of thinking that I really enjoy, the triple bottom line, where you look at people, planet, and profit all in the same scores. If people are suffering at the price of profit, or the planet is suffering at the price of people, and so on, then it's not a successful system. Instead, you put those three on the same playing field, and there is virtue sewn into the seams there: we are honoring life, we are honoring the environment, and that is being sewn into how we measure our productivity, how we measure our metrics. So there are ways to embed it into how we currently measure.
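
As an illustrative aside for readers (not part of the conversation): one way to read "putting people, planet, and profit on the same playing field" is a scorecard where no dimension can be traded off against another. The class, field names, thresholds, and scores below are hypothetical, a minimal sketch rather than an established metric.

```python
# Minimal sketch of a triple-bottom-line scorecard where no dimension can be
# traded away. All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class TripleBottomLine:
    people: float   # e.g. normalized well-being / community impact, 0..1
    planet: float   # e.g. normalized environmental score, 0..1
    profit: float   # e.g. normalized financial performance, 0..1

    def overall(self) -> float:
        # Using the minimum means suffering on one axis caps the whole score.
        return min(self.people, self.planet, self.profit)

    def is_successful(self, floor: float = 0.5) -> bool:
        return self.overall() >= floor


report = TripleBottomLine(people=0.8, planet=0.4, profit=0.9)
print(report.overall())        # 0.4: the planet score caps the result
print(report.is_successful())  # False, despite strong profit
```

The design choice here (taking the minimum rather than an average) is one way to encode the idea that profit cannot compensate for harm on the other two axes.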

Johannes Castner:

So, let me ask you this: from what perspective, or from what tradition, do you derive the inspiration for your virtues? Because when you say people, planet, profit, it seems to me a little bit like some of that...

Olivia Gambelin:

Yeah.

Johannes Castner:

...sounds like it could come from, for example, indigenous communities. So where do you derive your virtues from? What are the books, the authors, or the experiences that have influenced you the most in your pursuit of virtue?

Olivia Gambelin:

Yeah. Okay. I really like this question, and I want to stress here that I have influences on my personal understanding of my personal virtues, but when it comes to my work as an ethicist, one of the things I do is detach from my own value system and take a critical look at the individual's value system, the society they're in, and the regulations they're up against. As an ethicist, you actually have to practice detaching from your own ethical framework to be able to understand where the other person is coming from. But as for my own personal ones, I think one of the biggest influences growing up is that I love C.S. Lewis. I discovered C.S. Lewis through the Chronicles of Narnia, which is a great, kind of like the Harry Potter fandom, its own little series. But C.S. Lewis was also a philosopher, and he has some great works: pieces on grief, pieces on human understanding, these fantastic approaches to understanding life and emotions and morals. He's got one book called The Great Divorce, which is essentially about the Catholic church's understanding of purgatory.

Johannes Castner:

Mm-hmm.

Olivia Gambelin:

But I absolutely love this book, because it's about souls that are on their way to the afterlife but stuck in a kind of in-between space. It's the story of one soul going around and talking to the other souls: hey, why are you here, what's going on? And each of the souls he talks to is hung up on something from life. One is a mother who lost a child, and she is so focused on the loss of this life that she wasn't able to continue living her own, and even in the afterlife she is still stuck on this point. In another, he's talking to a soul who was gluttonous, constantly eating, constantly eating, and so focused on that that they weren't able to fully experience life in other ways, because they were always focused on more food, more food. But I absolutely love it, because it looks at what, in life, you are so attached to that you're missing the bigger picture, that you are sacrificing everything in your life to this one thought or this one need or this one aspect; there's no balance to it. So I think not only that book, but a lot of writing by C.S. Lewis.

Johannes Castner:

Is it particularly Christian, then? Or Catholic, you even said?

Olivia Gambelin:

Yeah. So I was born into an Irish-Italian family, so I had no choice but to be Catholic growing up. But yeah, a lot of my early understandings and virtues came out of the Catholic church. I read a lot of Thomas Aquinas and the seven deadly sins; I think it's fascinating. So I know that my personal values are heavily influenced by Christianity, by the Catholic church. I say take everything with a grain of salt, because on the flip side there are all the philosophical understandings from my various degrees: I've studied everything from Nietzsche to Kant to Mill to Hume, all of the different perspectives. And I think what that taught me, between my personal value system growing up and the value systems I've studied, is that there is no one right answer, and that we need to balance, to draw on all of these different aspects, and to understand as well, and it takes a level of self-awareness, where my values are heavily influenced. So that if maybe I'm talking to you, Johannes, and we're disagreeing on something, I can realize, oh, it's because I've got this influence on my value system.

Johannes Castner:

Absolutely.

Olivia Gambelin:

And you've got an opposite one. Doesn't matter.

Johannes Castner:

Well, yeah, but you're building it up in a way, I suppose, right? You have this dialectic between what is coincidental about your experience, what you were exposed to; in a way you could say that your parents being Catholic is somewhat coincidental, or not coincidental, it's cultural, a sort of given. And then there is this conscious part, where you feel drawn to certain ways of thinking. That is how my experience is. And then there are also, potentially, somewhat traumatic experiences that play into this

Olivia Gambelin:

Yeah.

Johannes Castner:

as well. And what moves you and what pushes your buttons, basically, in the space of ethics. That's my experience, in any case.

Olivia Gambelin:

I think a good way of describing it, or how I try to describe it, is: we as humans are still learning about morality. We're still learning about ethics. It's not a hard science yet, because we're still figuring things out around it. The joke in philosophy is that it's called philosophy until there are rules and laws, and then it's a science. So morality is not a science yet, because we're still figuring it out, and it's got this layer of mystery, which I find to be beautiful. And so we as humans are just trying to interpret it, which is why we've got different value systems.

Johannes Castner:

I think there is something fundamentally different from science, though, in the sense that there seem to be some irreconcilable, or irreducible, differences in axioms.

Olivia Gambelin:

Yeah.

Johannes Castner:

They make this thing sort of non-solvable in a unique way, even in the long run. I don't think there is even a conceivable way to bring things of all sorts into alignment, because there will always be some things that don't quite fit well together. Whereas in science you can always optimize a function, and you're optimizing one metric, which is usually something like explanatory power.

Olivia Gambelin:

Yeah.

Johannes Castner:

But here you are trying to optimize things simultaneously that don't really fit. Take, for example, the case of bias. We say we don't want an algorithm to be biased in one way or the other, right? But then we have two ways of being unbiased, and I brought this up in another episode as well. You have the bias in, for example, recidivism prediction, which I think is an interesting case here.

Olivia Gambelin:

Mm-hmm.

Johannes Castner:

So you have two biases, and in addition to that, you have the rule that justice should be blind to a lot of categories.

Olivia Gambelin:

Mm-hmm.

Johannes Castner:

And exactly what that means is very difficult, because there are a lot of correlations, as Shamika Klassen stated on that episode. The problem here is that you have a lot of things that correlate with the protected classes, if you will. So even being blind is not so easy. But then, if you're blind, how can you know that you're not biased? You can't know, right?

Olivia Gambelin:

Yeah, exactly.

Johannes Castner:

So that is

Olivia Gambelin:

which is...

Johannes Castner:

very paradoxical, right? There is this paradoxical nature to ethics, which seems to me irreconcilable, unsolvable in that way, even in the long run, no matter how much time we have.

Olivia Gambelin:

Potentially.

Johannes Castner:

Well, it seems to me like you cannot solve this problem, right? It's pretty clear that you cannot both be blind and unbiased.

Olivia Gambelin:

No.

Johannes Castner:

Or, at least, be blind and be sure you're unbiased. These are not things that work well together.

Olivia Gambelin:

But maybe down the line, in the future, we find that being blind is not actually what we need to be valuing; maybe that itself needs to change. Maybe we actually do need to be aware, so that we provide justice across the board in a way that looks, or appears, as if we are blind to different factors.
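
As an aside for readers (not part of the conversation): the point that a "blind" decision rule can still be biased through proxy features, and that the gap cannot even be audited without recording the protected attribute, can be illustrated with a minimal synthetic sketch. The variable names and numbers below are hypothetical, chosen only for illustration.

```python
# Sketch: "fairness through unawareness" fails when a proxy feature correlates
# with a protected class, and the disparity can only be measured if the
# protected attribute is recorded somewhere for auditing.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute the decision rule never sees.
group = rng.integers(0, 2, size=n)

# A "neutral" feature (say, a neighbourhood-based risk score) that happens to
# correlate with the protected group: a proxy.
zip_risk = rng.normal(loc=group * 0.8, scale=1.0, size=n)

# A "blind" decision rule: uses only the proxy, never the group label.
flagged = zip_risk > 0.5

# The gap is only visible because we kept the group label for auditing.
rate_0 = flagged[group == 0].mean()
rate_1 = flagged[group == 1].mean()
print(f"flag rate, group 0: {rate_0:.2%}")
print(f"flag rate, group 1: {rate_1:.2%}")  # noticeably higher despite "blindness"
```

Running it shows the flag rate for one group is substantially higher even though the rule never sees the group label; deleting the label would not remove the disparity, only the ability to measure it.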

Johannes Castner:

You see, that's the thing: if you're not really blind, right? It's a very tricky problem. It's not something we're going to settle tonight, or today, depending on where you are. But it is just something to think about, and I feel like there is something similar even in science. We now know, thanks to Gödel,

Olivia Gambelin:

mm-hmm.

Johannes Castner:

that it is impossible for us to ever have an answer to every question. A question that has "true" as its answer, for example, cannot necessarily be proven to be true, and this is going to be perpetual. And in ethics, I think you're dealing with an even dicier problem than you are when it comes to scientific questions.

Olivia Gambelin:

Yeah, who knows? We may have a branch of ethics come out that's more scientifically based; who knows, in thousands of years. We're still developing. I find that ethics and morality is this great frontier of thought that we have been exploring for thousands of years, and we are still, slowly but surely, making our way through it. And to me that's something really beautiful: yes, in my lifetime we're not going to have all the answers, but I can work and provide thought and experience to maybe get us a little bit closer.

Johannes Castner:

So how do you feel, then, and this is a good little probe for another question, about the enormous pace at which we are developing technology? It seems to me that there is a mismatch between the way we are developing our thinking about ethics and the way we are developing technologies and just trying things out.

Olivia Gambelin:

Yeah. Well, maybe it's a bit controversial, but I do think that what we're doing right now is trying to apply our current value system, our current understanding of ethics, to our technology. Yes, that is in some cases just flat-out missing from our technology, and that needs to be fixed. But overall, looking at the technology, we have this great opportunity: given the speed and caliber at which our technology is being created, what can our technology actually tell us about us as humans, and, back to the original question, about our human nature?

Johannes Castner:

Hmm.

Olivia Gambelin:

For example, we were saying, oh, we're fine, we're fair as a society, and then we have these systems being built, and we look at them and go: what is all this bias in here? That bias is not in the system; it's in our society. That's not a technical problem, and it's not the system we need to fix; we actually need to fix society. We're not as unbiased, not as fair, as we think we are. So that is the opportunity: sure, let's expand and grow the technology, but what is the technology telling us about ourselves as a society, so that we can grow as a society and, again, feed that back into the technology?

Johannes Castner:

But it does real things; it affects elections, suddenly.

Olivia Gambelin:

Yeah.

Johannes Castner:

Suddenly it starts affecting elections, affecting things in the world. It's radically undoing or redoing our lives even as we are starting to think about what the good life is; we are transforming it at a very fast rate. That could be scary from a particular perspective. If you think about these things together, it can appear a very scary thing.

Olivia Gambelin:

Yeah, I think it can be very scary. But we need to be able, on both a societal and an individual level, to have the backbone to say: hey, we tried that, that did not work, we do not like that outcome, let's change it, let's not keep doing it. I think that's the scary part: we're lacking that backbone right now.

Johannes Castner:

So, outcomes. You sound a little like a utilitarian now, because virtue should...

Olivia Gambelin:

Maybe a little, in that case, but...

Johannes Castner:

Yeah, because virtue should be independent of outcomes, even.

Olivia Gambelin:

Yeah. Well, think of it in this direction: you've got your first layer. I'm making decisions about a system I have never built before; I'm experimenting. Cool, but at least I'm experimenting within my frame of reference, within my virtues; I'm not making any decisions that compromise my baseline. Now I'm looking at the outcomes, and those outcomes, oof, I didn't actually like them. Let me adjust my baseline, and that feeds back in, and we grow from there. The problem is that we're missing the baseline to start with, and then we're not even taking in the feedback from the outcomes. So we're just missing this entire mechanism; it's essentially the mechanism of moral maturity. We're missing that muscle right now in how we're developing technology.

Johannes Castner:

I think we still are missing it, in a big way. So you are trying to fix that, right? With...

Olivia Gambelin:

slowly but surely chipping away at it.

Johannes Castner:

Yeah, so am I. We're both in the same boat, and I think a global discussion is my answer to it, in a way: to start that conversation and talk about how technology serves us in different areas of the world, to bring that closer to home, to bring the world closer to home in terms of how technology works. It's definitely a learning process, all of it must be, and I think that's an ethic in itself, the ethic to learn and to discover. And hopefully we are able to build some guardrails if we are developing very fast.

Olivia Gambelin:

Yeah.

Johannes Castner:

It seems to me there is this difference in speed between what we are developing and our thinking about it. We are always running behind the bad outcomes, it feels like, and it feels like that is

Olivia Gambelin:

mm-hmm.

Johannes Castner:

so maybe even going so fast is not a virtue, I don't know. What do you think about that? Do you think of the speed of innovation in terms of a virtue? Is that something we can choose, or is that something that happens to us?

Olivia Gambelin:

At the moment it feels a bit like it's happening to us. I think a lot of our structure has been built around "move fast and break things"; that became its own virtue. But I think we're at a point in time where we actually need to question that, look at it and go: is that a virtue?

Johannes Castner:

We've broken it.

Olivia Gambelin:

No, it's not; it's causing...

Johannes Castner:

Ouch.

Olivia Gambelin:

Yeah, we broke a lot of things. How about we change this to fixing things? We don't have to move at a snail's pace, but we should at least fix all the things we keep breaking. It's about looking at different values; anything taken to an extreme is always going to be a vice. We've taken speed to an extreme, to the point where it might actually be working against us. It's okay to develop slightly more slowly, if that means we are developing innovatively not just in terms of the technology, but in terms of our ethics, and the values of our technology are developing at the same pace as the technology itself. There's something out of balance here, and I think as a society we're still trying to pinpoint where that imbalance is. But that's kind of what we're struggling with in terms of speed: we've taken it a little too far. How do we bring it back a little, and what needs to be placed in balance with that speed?

Johannes Castner:

I have a feeling it has something to do with the framing. We often talk about technology as if we were in a race with, say, China, for example. So if we're talking about it in those terms, right?

Olivia Gambelin:

Yeah.

Johannes Castner:

If we see it as a race, then naturally, exactly, speed is an enormous virtue, because we're afraid of falling behind.

Olivia Gambelin:

Yeah.

Johannes Castner:

And I guess that is tied into this as well, and I don't know to what degree you see that as a virtue. Do we have to be ahead of China? What does it mean to be ahead of China? And do you think that being ahead of China is an ethic?

Olivia Gambelin:

No, it's definitely not a virtue to be ahead of China; that would be a very interesting virtue. But it is a frame of mind, and I think, with it, the question, as you were just saying, is: what does it mean to be ahead of China? Is it in terms of quantity? Is it in terms of quality? Do we need technology as invasive and pervasive as China's? Is that actually the direction we want, or do we want a different type?

Johannes Castner:

Well, surely the military will play a role in this, because a lot of technology is tied to military innovation, if you will, on how we can bomb each other better. And we have to be sure we do it better than the Chinese, because who knows, maybe they'll attack Taiwan. So do you have any thoughts on that? Is that not tied to the speed of innovation in some weird, maybe perverse way?

Olivia Gambelin:

Yeah. But then the question I would like to pose here is: do we need better weapons, or do we need better security? Can we actually have innovation in our security rather than in our weapons? When we are trying to combat the weapons in China, it's looking at it differently. I think there's a song that goes, I'm bringing a lemon to a knife fight. Do I need to bring a bigger knife, or do I need to bring a lemon, confuse the person, and do something completely different? Innovation does not just mean doing what the other person is doing, better than that other person. It could be doing something completely different, in the opposite direction, but with similar outcomes. Innovation doesn't have to be, I'm gonna build a bigger gun than you. My innovation could be, I'm gonna build better security.

Johannes Castner:

That brings me to another question I've been grappling with. Is it not also true that innovation isn't just a monolith? It seems to me that we are not innovating enough on really important questions, having to do with climate change, with adjustments to working conditions, with really important problems that humans are facing. But we're actually innovating a lot on gimmicky kinds of things, and we're spending a lot of energy on those gimmicky things, which we also call innovation. So it seems to me we are using the same term for radically different things.

Olivia Gambelin:

Yeah, I think a great point here is with ChatGPT. I've been talking with friends about what it actually has in terms of application: is this innovative, or is it just a really cool chatbot? We're using it for media. Is that innovation, or is that just a new type of media? And so these questions of, if all the innovation we have is just going into better advertising and more content creation, is that actually innovation, or are we just doing more of the same thing at a bigger scale? I think we're at a point in time where we need to actually question what our definition of innovation is as a society. Is it better targeted ads, or is it something completely different?

Johannes Castner:

Yeah, this is precisely a question that I want to tackle on this show: what innovation means in different contexts as well. What does it actually mean, what should it mean, what can it mean, what has it meant, and so on. It's not just a matter of how we define it, but what it does for us. The way I think of it, something is innovative if it does a lot of things that are really useful for us. I don't necessarily include bombing other people better in that category, even if that's measured in the GDP. There's this concept of the Solow residual, which is an innovation index in macroeconomics. It measures roughly something like the speed of ideas and how they improve things in a way that contributes to the GDP. But this thing will not really tell us what the innovation was used to do. It will tell us how much money it brought in, in some way, and I guess that tells us something about the economy and how it is doing for people and jobs and so on. But it won't tell us who, at the end of the day, will consume this innovation and what they'll be enabled to do with it. And in a way, we would want some measurement of that, right? A measure of the speed of innovation that expresses human empowerment. And we don't have anything like that. What do you do? What are the metrics that you advise your clients to look at?
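[Editor's note: for reference, the Solow residual mentioned here is usually obtained through growth accounting. A minimal sketch, assuming a standard Cobb-Douglas production function with capital share \alpha (an assumption added purely for illustration, not something discussed in the episode):

\[
Y = A\,K^{\alpha}L^{1-\alpha}
\quad\Longrightarrow\quad
\frac{\Delta A}{A} \approx \frac{\Delta Y}{Y} - \alpha\,\frac{\Delta K}{K} - (1-\alpha)\,\frac{\Delta L}{L}
\]

The residual \Delta A / A is whatever output growth is left over after measured capital and labor are accounted for, which is why it is read as "ideas" or technology, and also why it says nothing about what that output is actually used for.]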

Olivia Gambelin:

It really is client dependent, because what we have to do is take a combination of: here is what's required by law; here are the factors we bring in, usually ESG factors; but then also, let's discuss with you as a company what your goals are and deduce from there what kind of metrics you should be embedding into your culture. It's really client dependent, but we have that baseline of what is legally required, what you are lacking in terms of ESG, and then, as a company, in terms of your vision and the values of the company, what we can deduce from there in terms of metrics that you should be,

Johannes Castner:

So do you agree with this? Because I think that, at the end of the day, ethics is in large part about what metrics you optimize, right? For example, the optimization of attention leads to crazy but foreseeable side effects. You know that if a racist comes into a bar and says something outrageous and racist, we will all pay attention to that person, because we are wired to pay attention to them. So if an algorithm is given the objective to maximize engagement, it will do weird things like recommend racist material. You could say that the ethics are embedded in the metrics that you optimize. Is that not true? And how does that relate to virtue?

Olivia Gambelin:

Yeah. So when you're optimizing for different things, this is where, in terms of virtue ethics, I don't really like the term optimization, because it has this implication, this knock-on effect, of taking something out of balance and taking it to the extreme. Which is where we start to have a problem: if you are optimizing for only one metric, you will have problems. You need a balance of different metrics that you are, quote unquote, optimizing for, or balancing between. And a lot of it is fine-tuning. You need to actually think critically and go, okay, how is this optimization, this focus on this metric, playing out? What kind of outcomes are coming out of it, and do we need to adjust from there? With social media, we were trying for a long time to optimize for attention on the screen. We saw the actual impact, how that increased suicide rates, how that decreased our sense of worth. We should have looked at that and gone, huh, nope, that's out of balance. We need to fix that. That is not the right metric to optimize for. What can we balance it with?
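[Editor's note: a minimal sketch, in Python, of the single-metric versus balanced-metric point made above. All item names, scores, and the harm weight are hypothetical illustrations, not data or methods from any real recommender system.]

# Toy illustration: ranking items by a single engagement metric
# versus balancing engagement against an estimated harm score.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float  # hypothetical predicted clicks/watch time, 0..1
    harm: float        # hypothetical estimated harm/outrage score, 0..1

CATALOG = [
    Item("calm explainer", engagement=0.40, harm=0.05),
    Item("friendly debate", engagement=0.55, harm=0.15),
    Item("outrage bait", engagement=0.90, harm=0.85),
]

def rank_by_engagement(items):
    """Single-metric 'optimization': outrage bait wins because it grabs the most attention."""
    return sorted(items, key=lambda i: i.engagement, reverse=True)

def rank_balanced(items, harm_weight=0.7):
    """Balance engagement against harm; the weight is a value judgment, not a constant of nature."""
    return sorted(items, key=lambda i: i.engagement - harm_weight * i.harm, reverse=True)

if __name__ == "__main__":
    print([i.title for i in rank_by_engagement(CATALOG)])
    # ['outrage bait', 'friendly debate', 'calm explainer']
    print([i.title for i in rank_balanced(CATALOG)])
    # ['friendly debate', 'calm explainer', 'outrage bait']

The specific numbers are beside the point; choosing the second metric and its weight is itself the ethical judgment being discussed.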

Johannes Castner:

But the machine won't allow you to do that once you're at that point. So this is another thing; there is something bigger, something you could call societal, about this. If you are in the shoes of Mark Zuckerberg and you realize that attention maximization causes higher suicide rates and all of these other things, and on the other hand you have these investors who say you have to keep increasing the number of users, you have to keep doing this, and advertisements have to do really well because we want our return on investment, then you're in this very stressful situation. Because if you pull the plug on this optimization

Olivia Gambelin:

Yeah. Oh yeah.

Johannes Castner:

Your entire business model will fall in on itself. So you have to, in fact, change your business model. This is something I've been arguing for: you have to actually design your business model around, you could say, your virtue ethics, your ethical principles, I suppose.

Olivia Gambelin:

Yeah. Honestly, the more work I do with smaller companies and startups as an ethicist, the less I'm actually working on the technology and the more I'm working on the business model. Nine times out of ten, the problem is rooted either in the business model or in the culture of the company, not actually in the technology. Right? Which is, yeah,

Johannes Castner:

In other words, if you have an algorithm that very effectively optimizes a very bad metric, then the more effective the algorithm,

Olivia Gambelin:

Yeah. The algorithm's doing great.

Johannes Castner:

the worse it is in terms of ethics, the better it is at its job. So yeah, you have to actually align your business model with your ethical principles and then derive the metrics from there. And then, when you maximize those metrics, they should lead to good outcomes, I suppose. That's also what I've come to.

Olivia Gambelin:

Yeah. And you gotta leave a little room in there of, hey, okay, maybe that wasn't right, or we need to adjust a little bit from there. You're not going to get it right from the start. That's okay; you just need to have the backbone to say we need to change, and then adjust. So start from,

Johannes Castner:

Well, that also depends on the circumstance. Because if you're doing something medical, let's say some medical AI, yeah,

Olivia Gambelin:

we killed a few people.

Johannes Castner:

Oops, we lost a few.

Olivia Gambelin:

Can't really do that. No, you need to have a good foundation.

Johannes Castner:

I think it changes depending on the magnitude. So I guess I find myself inevitably being a utilitarian in this area, where we have to say that there are trade-offs in every case, and the more sensitive the case, the more is at stake, both in terms of costs and in terms of benefits. So it is a utilitarian calculation.

Olivia Gambelin:

Yeah.

Johannes Castner:

At the end of the day, even though I'm always very much at odds with this approach, sometimes it's inescapable, actually. I mean, it is,

Olivia Gambelin:

yeah.

Johannes Castner:

You can't escape it. You realize at some point that it is about trade-offs, costs and benefits. Do you find that as well? Can virtue ethics be well aligned with utilitarianism, then, or do you see a conflict?

Olivia Gambelin:

No, I don't see a conflict. What I often tell people is that it's best to actually apply a few different ethical frameworks, a few schools of thought, to one scenario, because then it gives you a better-rounded position. I myself subscribe to virtue ethics; I am a huge supporter of virtue ethics, but I still work with people from a utilitarian perspective or a deontological perspective. I don't usually use those jargon words directly, but: from this perspective, this is how this action or this decision will be perceived. Because, again, there are shortcomings in all of these schools of thought. I just so happen to think that virtue ethics is the strongest, but I'm not discrediting utilitarianism or deontology. I think there's a lot to learn from those schools of thought as well, in applying them and the perspectives that,

Johannes Castner:

So what do you find strongest about virtue ethics in particular? I have to follow up on this when you say that, because clearly one would want to know: what do you see as the biggest strength

Olivia Gambelin:

Sure, sure.

Johannes Castner:

of virtue ethics, and what does it bring to the table that other systems don't?

Olivia Gambelin:

What I see as a strength in virtue ethics actually comes into innovation and the opportunity around how we look at our values. Because virtue ethics doesn't focus on, let me avoid doing wrong, I don't want to do bad, let me avoid doing bad. It actually looks at: how do I embody this value? How do I embody this virtue to the utmost? How can I have the right amount of courage in the right situation? It's focused more on this proactive, forward-facing question of what opportunity can be found there. Virtue ethics, to me, embodies both sides of the coin of ethics: let me avoid doing wrong, but also let me do good. Ethics is the pursuit of the good life. How do I have a life of fulfillment and purpose? A life of fulfillment and purpose is not a life where I didn't get in trouble; it's a life where I actually did good, am doing good, and am following my purpose. So for me, that's what virtue ethics has that the other schools of thought don't necessarily have. The other schools of thought are looking at avoiding bad, or following the rules, or maximizing good at a cost, which can also be blown out of proportion and have some very negative knock-on effects. Virtue ethics has more flexibility. It requires that critical thought, that level of self-awareness and understanding, which is, for me personally, why I think it's one of the strongest schools of thought.

Johannes Castner:

Does it have a particular set of pillars, like, you know, principles, if you will, axioms?

Olivia Gambelin:

Well, it depends on what base of virtues you're coming from. I guess if we're talking about Thomas Aquinas here, he's a fun one to base it on. He's got his seven deadly sins and the virtues that come out of that, and you could base it on those as pillars. Or, if we're looking at technology in the modern day, you can base it on the pillars that the High-Level Expert Group from the European Union has laid out for AI. You can supply the different virtues, and implement them, through this school of thought. Which, again,

Johannes Castner:

is agnostic to a particular set of virtues. It's a plug-and-go system, you could say. And in a way it is a little bit like that. So when you have this, what is it called, value sensitive design, is that related to virtue ethics?

Olivia Gambelin:

I believe it actually is primarily based on virtue ethics, if I remember correctly, because it is designing for values. You select your values and you design for them.

Johannes Castner:

So then I know a little bit about it, yeah. I also find that an interesting approach: that you have this base, these processes and systems in place, into which you can plug the values of the people who are affected. I always think that the values of those who are affected should be the ones guiding the development of technology, rather than the values of the people who are building the technology. We should be at the service of those who are affected.

Olivia Gambelin:

Yeah.

Johannes Castner:

But how do you feel about that? And do you have a way to do that, to connect with the people who are affected by the systems you advise clients to build?

Olivia Gambelin:

Yeah. What I advise is: let it be a conversation. It's a collaboration between the people designing the technology and the people who are affected by the technology. It should be a conversation, a collaboration, between the two. You get that through user feedback, through actual understanding of the perspectives of the people in the culture and the experience that you are affecting. It's important to be able to take their considerations into how the technology is being built. But then there are certain restrictions to the technology, where the people developing it need to say, okay, well, that's outside of the scope, or that doesn't quite work there, that's not how we can make it work. It's neither here nor there. It's not that absolutely every single need of the user must be met, nor that it's only up to the developer. It needs to be a conversation; it needs to be a collaboration.

Johannes Castner:

Well, I think that covers pretty much what I had thought we could speak about, so I guess it's a good time to end here. But first I want to give you an opportunity to tell the listeners and viewers whatever you want to tell them, what you want them to take away at the end of the day.

Olivia Gambelin:

Very open-ended, yeah. I would just say that right now it's very tempting to get overwhelmed quickly by ethics, or responsible AI, whatever you want to call it. There's a lot of doom and gloom around it, but I think it's actually a very hopeful and inspiring field. It's a place of curiosity. It's a place for us as humans to go,

Johannes Castner:

mm-hmm.

Olivia Gambelin:

is this what we want? We've reached a point in time, in humanity itself, where we have the opportunity to sit and go: is this actually what we want, or do we want to take control and change it? That requires a whole lot of backbone and a lot of bravery, but we have it; we have the ability to do this. It's up to us now whether we are going to. I believe we are, and I'm one of the fighters trying to help and build in that direction. But yeah, I think this is a great time of opportunity. It's not something to be feared; it's something to run headfirst into.

Johannes Castner:

Fantastic. Well, lastly, I want to ask you: if people want to get in touch with you, stay in touch with you, or find out more about you, where can they do that? I will obviously also have that at the bottom, near the comment section.

Olivia Gambelin:

Yeah, well, I am on LinkedIn and Twitter, under my name, Olivia Gambelin, on both. You can also check out my website, which is just www.oliviagambelin.com. I usually keep that pretty up to date. And, I had to remember, I built that website, there is a contact form there as well if you want to reach out. I do get back to everyone; sometimes it takes me a little while, but feel free to reach out. I always love discussing and chatting with people about this space.

Johannes Castner:

This show is published in video format on YouTube and in audio format on Spotify, Google Podcasts, Amazon Music, and many others, every Wednesday at 5:00 AM Eastern Standard Time, 2:00 AM in Los Angeles, 10:00 AM in London. Please leave your comments in the comment section. If you like something, please let us know, and if you don't like something, please also let us know. Give us a thumbs up for the things you like as well; this helps the algorithm. And please subscribe so that you don't miss a show and you keep in contact with us. Join the conversation. Next week I will be meeting with Yasmin Al-Douri, and we will be conversing about the Web3 movement.

Yasmin Al-Douri:

Web3, and some people even call it Web 3.0. They're sometimes seen as synonyms, and sometimes people say, no, they're not the same. Basically, the goal for both of these ideas is that we have a kind of web that is decentralized from these big powers.