Utopias, Dystopias and Today's Technology

AI, Human Cloning, and the Future: Exploring the Boundaries of Singularity and Transhumanism

April 19, 2023 Johannes Castner Season 1

In this episode, host Johannes Castner and Christian Wilk engage in a deep conversation about artificial intelligence (AI), human enhancement, and the future of society. They discuss human cloning and its potential utopian and dystopian outcomes, the role of AI in art and creativity, and the ethics of customizing AI clones.

The dialogue explores the impact of increased intelligence on human behavior, the interplay between rationality, emotion, and AI in decision-making, and the use of AI to reconnect with deceased loved ones. They also examine the pursuit of immortality, singularity, transhumanism, and the wild notion of experiencing multiple realities simultaneously through AI.

As the conversation unfolds, they address topics such as the beginning of the technological singularity, conscious AI goals, and the dangers of over-relying on AI to solve humanity's problems. They touch upon human enhancements, accelerating technology advancements, and the power of simulations in modern engineering.

Throughout the episode, Johannes and Christian speculate about various possibilities surrounding the future of AI and technology, acknowledging the complexity and uncertainty involved. They encourage listeners to appreciate the biological aspects of life while adapting to the changes brought about by digitization.

Johannes Castner:

Hello, my name is Johannes and I am the host of this show. I'm here today with Christian Wilk, and we will be speaking about the singularity, which is a point in time at which an intelligence is created that can improve itself over time, or at least that's one of my definitions. We'll find out Christian's definition in a second. We'll also touch on the topics of consciousness as well as superintelligences. So let's get right into it; there are various exciting things we'll be talking about here, including Ray Kurzweil's vision and so on. Christian Wilk, after 10 years in Southeast Asia and India, now lives and works in Brussels, Belgium. He has a background in computer science and linguistics from the Technical University of Munich, Germany, which he completed in 1995. Christian worked on AI and information retrieval in the late 1990s and has followed the further development of artificial intelligence, synthetic biology, and nanotechnology ever since. He is particularly interested in the long-term impact and trajectories of these convergent technologies and how they will shape humanity. In 2013, Christian published a short paper in which he described a possible scenario unfolding on the way to a singularity; we will discuss that paper today. He is currently active in managing publicly funded multinational research projects at the intersection of culture, arts, and computer science. So, hello Christian. How are you today? And welcome to the show.

Christian Wilk:

Hello everybody. Nice to be on the show with Johannes here. Welcome, everybody. It looks like we have an interesting discussion ahead of us for the next one and a half hours. I'm looking forward to it.

Johannes Castner:

Great, great. So let's start with the singularity. What is it? Let's get into that. What is the meaning of that word, and the way that you see it?

Christian Wilk:

Well, you already touched upon it a little bit. That's basically the common definition: the singularity will be a point in the future when artificially created intelligence will surpass human intelligence. The term actually goes back well before Ray Kurzweil. We all know that Ray Kurzweil is the one who made it famous through his book The Singularity Is Near, but the term goes way back to the fifties and the famous John von Neumann, whom we all know as the father of the modern computer, because the computer architecture as we know it — the standard one with processors, where data is shifted between registers and stored in RAM or on disk — is the von Neumann architecture. According to the obituary written by Stanislaw Ulam in the fifties, von Neumann mentioned something like a point in the future where technological advances and progress would reach a state that surpasses human capabilities to understand that progress. Then, about ten years later, someone who worked with Alan Turing at Bletchley Park, I. J. Good, coined the term intelligence explosion, which is quite similar to the singularity. What he said is that once we reach a point when intelligence can improve itself, can amplify itself, this will create an intelligence explosion, a runaway development in the sense that it happens within an instant of time. That creates a singularity, because the word singularity actually comes from mathematics: it's basically a function that includes a division by zero. We all know that it then goes to infinity, and in the graph, when you plot it, it creates a singularity which cannot be overcome. You cannot transcend it, because it goes to infinity; there is a kind of border to it. From this visual metaphor Ray Kurzweil adopted the term in his book and laid out the way towards this singularity. He sketches out several ways towards achieving superintelligence, which goes by many other names: AGI, artificial general intelligence, superintelligence, et cetera. So, extrapolating the exponential growth of computer speed and the processing power of computer chips — Moore's law — he projected it into the future. He also calculated the amount of computation that, in his opinion, would be required to simulate the brain, and bringing these two graphs together, he came up with a point in the future, 2045 according to Ray Kurzweil, when we will reach the singularity. Someone else, the science fiction author Vernor Vinge, also described a singularity in 1993, and he estimated it would happen within 30 years from then. So it actually would have happened this year, 2023, according to Vernor Vinge. But it seems we are not yet there, so if it happens, it seems Ray Kurzweil's estimate still stands.
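To make the two ideas above concrete — a mathematical singularity arising from division by zero, and a Kurzweil-style extrapolation of exponential growth in computing — here is a minimal illustrative sketch in Python. The baseline compute figure and the doubling period are placeholder assumptions for illustration only, not Kurzweil's actual numbers.

```python
# A mathematical singularity: f(t) = 1/(t0 - t) grows without bound as t approaches t0.
def hyperbolic(t, t0=2045.0):
    return float("inf") if t >= t0 else 1.0 / (t0 - t)

# Kurzweil-style extrapolation: pure exponential growth in available compute.
# The baseline (1e18 ops/s in 2023) and the two-year doubling period are
# illustrative assumptions, not figures taken from his book.
def projected_compute(year, base_year=2023, base_ops=1e18, doubling_years=2.0):
    return base_ops * 2 ** ((year - base_year) / doubling_years)

for year in (2025, 2035, 2044, 2045):
    print(year, f"{projected_compute(year):.2e} ops/s", f"f(t) = {hyperbolic(year):.3f}")
```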

Johannes Castner:

How would you know that we're there?

Christian Wilk:

Yeah. Well, it would be a sudden increase in intelligence, an intelligence explosion, what I. J. Good described. We haven't seen that, because when you look at ChatGPT, it doesn't improve itself in an exponential way. It didn't suddenly, from one week to the other, reach out into the world and extend its knowledge base on its own by several orders of magnitude. What we do see is something like the first steps of a recursive cycle: feeding the output of ChatGPT into the development of other AIs. Like what happened two weeks ago, when Stanford researchers basically recreated something almost as powerful as ChatGPT by using the open-source LLaMA model from Facebook, and did it at a cost of about $600 by feeding output created by ChatGPT into the training examples used for that language model. So it's a more advanced AI training a basic model, which comes within a couple of hours or days to almost the same power as the original model. But if you compare it to the real world, that's a kind of inbreeding, because you are using the output created by ChatGPT, which of course is a closed-world model, since it can only reason within its own limits, which are prescribed by the data fed into it, and you use that to train another model. So you will always stay within that same limited world. But coming back to the question of how you would know if we faced a singularity: Ray Kurzweil describes a very gradual path towards this singularity point, with stepwise improvements of the human condition, using synthetic biology, using nanotechnology, slowly enhancing human skills and abilities — like adding night vision to your eyes or increasing your capability to hear — and eventually, of course, connecting the brain directly to such an artificial intelligence system. So he is describing a kind of gradual way towards this step.
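Christian's "inbreeding" point can be sketched schematically: take a small pre-trained base model and fine-tune it on prompt/response pairs generated by a stronger model, so the student never learns anything outside the teacher's closed world. The snippet below is a rough sketch of that loop using the Hugging Face transformers library; the model name and the teacher-generated data file are placeholder assumptions, not the setup the Stanford team actually used.

```python
# Rough sketch of the "inbreeding" loop: fine-tune a small open base model on
# prompt/response pairs that were generated by a stronger teacher model.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "facebook/opt-350m"                       # stand-in for a small open base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# teacher_outputs.jsonl (hypothetical file) holds {"prompt": ..., "response": ...}
# pairs produced by the stronger model.
data = load_dataset("json", data_files="teacher_outputs.jsonl")["train"]

def tokenize(example):
    return tokenizer(example["prompt"] + "\n" + example["response"],
                     truncation=True, max_length=512)

tokenized = data.map(tokenize, remove_columns=data.column_names)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal-LM labels

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="student", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # the student stays inside the closed world described by the teacher's outputs
```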

Johannes Castner:

Yeah. So then it is the biological humans that will basically increase their intelligence and feed it back into ourselves, right? So it's not necessarily digital. Or it could be both, a combination of —

Christian Wilk:

It could be. I think I've broken it down; it could happen in three ways. One way would be the merger, the gradual merger between humans and AI systems: basically, with the advancement of ChatGPT and a direct link between your brain and ChatGPT, evolving together and thereby creating a kind of chimera. Or, without a merger, an artificial general AI which develops on its own and reaches a point when it exceeds every known intelligence on this planet.

Johannes Castner:

And this is in behavioral terms, right? Does it have to be conscious?

Christian Wilk:

That's the big question: does it have to be conscious? I mean, if it behaves like you and me, if it does everything a human can do, you need to attribute something like consciousness to it. If it's indistinguishable from any other person, or goes beyond any other person — in terms of behavior, yes, in terms of behavior — then what would be the added value of consciousness? Well, so —

Johannes Castner:

So imagine that you could make ChatGPT very, very smart, say, but it's still based on this idea that it predicts words from other words and concepts from other words — sorry, I mean words from concepts — and perhaps it can keep track of certain concepts and create stories. But all of that is, in a sense, mimicry of true cognition. Well, not true intelligence, I don't want to say that, because I think there is intelligence there. But I think there's a difference between intelligence and consciousness. So even if a thing could tell me all the things that a human would say, it actually can't experience the color blue, or the wind in its face, or any of these kinds of experiences as an entity — as having an identity, in essence. I think they're related; they're not exactly the same thing, because it is the identity, or the ego, that experiences, that is conscious and has consciousness. What that means, basically, is that the experience is an inner experience. Some people say an inner movie; I don't like that term, because an inner movie would be quite flat. It's more an immersive experience: something is immersed, and that something is the conscious being, if you will. But you don't need that in order to have all the behavior of a conscious being. So it's basically fakery, in a way. It's as if you fell in love with a woman who's not really a woman, but who just predicts that it would be clever to say "I love you" right now, and says all of these things that a woman might say, but has absolutely no meaning behind it. No real inner ontology, no true semantics.

Christian Wilk:

Yeah, no, I get your point. But how do you know? That's the difficult question. We don't yet have a theory of how to measure or how to identify consciousness, so that's the tricky part. You assume; you attribute consciousness to other people, you give them the benefit of the doubt, so to say. Otherwise it would be difficult to interact, to communicate. But you don't really know; you have to assume.

Johannes Castner:

But you're... yes, yes, true.

Christian Wilk:

Yeah, it's an assumption. And you give them the benefit of the doubt, because it would be difficult if you were permanently suspicious of your conversational partner, or your friend, or your wife, or whomever.

Johannes Castner:

But, I mean, okay, so here's my basic test. It's not the Turing test, but a much more basic test. The moment someone wants to earn money, to me, they must be conscious. Why else would they want it? Do they have an objective function that someone programmed them to have? For what purpose? The whole purpose of everything falls out immediately. I think there's no reason to raise children if you're not conscious. There's no reason to do anything. You won't have intrinsic reasons to do anything. You might be programmed to have reasons, like an AI, like the systems we are currently programming, where we say: maximize this function. But that's clearly not its own; it doesn't have its own purpose. So it's very hard for me to imagine that all of these people around me make money just because they're programmed to maximize money or something like that. Because of their apparent behavior, as if they were conscious, as if they had real reasons to do things like raise children and so on, the hypothesis that they are conscious becomes likely enough that I think they're probably conscious.

Christian Wilk:

Yeah. But again, if you don't really know how to measure it or how to define it, then it's always a network of beliefs, an assumption that you agree to in order to function in a society. Before the advent of computers, we never had these questions, because we didn't have anything to challenge them. We anthropomorphize our pets, for example, cats and dogs; we attribute to them a certain kind of consciousness. And coming back to the video you shared earlier, John Searle gave an example where he attributes some degree of consciousness to his dog because of the similarity of the perceptions they have.

John Searle:

Now, the second question is about how you know about consciousness. Well, think about real life. How do I know my dog Tarski is conscious and this thing here, my smartphone, is not conscious? I don't have any doubts about either one. I can tell that Tarski is conscious, but not on behavioristic grounds. People say, well, it's because he behaves like a human being. He doesn't. Human beings I know, when they see me, don't rush up and lick my hands and wag their tails — my friends don't do that, but Tarski does. I can see that Tarski is conscious because he's got machinery that's relevantly similar to my own. Those are his eyes, these are his ears, this is his skin. He has mechanisms that mediate the input stimuli to the output behavior that are relevantly similar to human mechanisms. That is why I'm completely confident that Tarski is conscious. I don't know anything about fleas and termites. Your typical termite has got a hundred thousand neurons — is that enough? Well, I lose a hundred thousand on a big weekend, so I don't know if that's enough for consciousness, but that's a factual question; I'll leave that to the experts. But as far as human beings are concerned, there isn't any question that everybody in this room is conscious. Maybe that guy over there is falling asleep, but there's no question about it in general. It's not even a theory that I hold; it's a background presupposition, the way I assume that the floor is solid. I simply take it for granted that everybody's conscious; if forced to justify it, I could. Now, there's always a problem about the details of other minds. Of course I know you're conscious, but are you suffering with the angst of post-industrial man under late capitalism? Well, I have a lot of friends who claim they do, and they think I'm a philistine because I don't, but that's tougher; we'd have to have a conversation about that. But for consciousness, it's not a real problem in a real-life case.

Christian Wilk:

In the video he said that because the dog has eyes, ears, and a nose, his perceptions — the way he perceives the world — are similar to his own, because as a human you have eyes, ears, and a nose. And he means you move around in physical space.

Johannes Castner:

But I think what he means there is the causal mechanisms that are going on in his own brain, right? He knows that he's conscious. I agree with this very much, because I think this is exactly how I would think of it. Because I have a brain that looks and feels and probably is very similar to yours, and relatively similar to the dog's — relatively similar to all mammals, really — and because I am a mammal, and also because other people study consciousness, so they must believe in it. Where does that come from? Likely from their consciousness. So I feel that I have this brain, I am like this, and I think that's what he's alluding to. And because the dog is rather similar to me in all of the mechanistic machinery that I can observe, it must be that he has a similar experience. And it's an assumption, I agree; there's no test.

Christian Wilk:

Exactly, that's the issue. Because if you created a robot dog with the same look and feel as his real biological dog, then what would the assumption be? That because the worldview of the dog is similar to that of the human, the dog would have similar views of the world? It's very tricky, I mean, to go out there.

Johannes Castner:

It is. But what is the likelihood? Okay, so this is, I think, what he also gets at: the likelihood that I am the only one who has this experience is extremely low. So basically, if I'm the only one who has this experience, but my brain is very similar to other brains, and then there are all these books written about consciousness — what is the likelihood, you know, from observing this, in a Bayesian sense? You know about Bayesian inference, I'm sure.

Lynden Cellini:

Looking for people who've been stuck in the ocean, searching for gold in an abandoned ship, logic puzzles, and spam filters in your emails — what do all of these things have in common? Well, they may seem pretty unrelated, but they do all have one link, and it's what my talk is on: Bayes' theorem. Now, Bayes' theorem is a specific theorem in maths that essentially measures people's changing beliefs, and to have this proved in mathematical format, with algebra and probability — to have that prove something very intrinsic to human nature — I think is really beautiful, and I think there's a high probability that you'll agree with me.

Johannes Castner:

Yes, yes, sure. So basically I have one observation of consciousness: my own. I observe that there are tons of books written about this topic and that other people easily speak about their experiences, and they also have the same architecture in their brains. Now, if things start looking the same because they're built by machines — or let's say by us: we build something that looks like us and behaves like us, but has a totally different architecture — it behaves like us and has all the outward features, but the likelihood that that thing is conscious is very low, because we currently don't understand how consciousness works. And it's precisely because of that that I don't think something we could have built right now actually has it. So it's not that, because we don't understand how it works, it therefore must not be true, or not real, or not relevant. Those, I think, are very, very different things. Does that make sense?
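Since Bayesian inference comes up here, a minimal worked example may help. The snippet below applies Bayes' theorem to the style of argument Johannes is making, updating a prior belief that other humans are conscious given evidence such as similar brain architecture and abundant first-person reports. Every probability is an invented illustrative number, not a measurement.

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
# H: "other humans are conscious"
# E: similar brain architecture plus many first-person reports of experience
# All numbers below are invented for illustration only.

prior_h = 0.5                    # agnostic prior on H
p_e_given_h = 0.95               # such evidence is expected if H is true
p_e_given_not_h = 0.05           # and would be surprising if H were false

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"P(others are conscious | evidence) = {posterior_h:.2f}")  # 0.95
```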

Christian Wilk:

I agree with you on that, yes. The only question to me is whether we have a bias here, in the sense that we are talking about something on a meta level — a human thinking about its own way of thinking is a meta level of thinking. So you are assuming something about the workings of the brain, and at the same time you're using that brain to think about those inner workings. It's like a machine leaving its own world to think about it. I wonder if that's possible — and of course I don't know; it's not an objection. I wonder if it's possible to simulate something like consciousness, and what purpose it serves in humans to have a consciousness. Maybe it serves to hold up an integrity: we all have a body, and if we didn't have the sense that this is us — this body, plus the way I perceive the world, that is me — then you would disintegrate; you would not perceive yourself as something that needs to persist in time into the future. You wouldn't have a purpose.

Johannes Castner:

Yes. And it's even questionable whether, if no conscious being were there, the whole universe would actually be there. So if nothing can perceive the universe — well, yes, maybe the universe itself wouldn't be there. So this is one theory; I think this is the one from David Chalmers —

Christian Wilk:

— that it doesn't exist without an —

Johannes Castner:

— observer. Yeah, yeah. Actually, well, that's also panpsychism, right? The idea that consciousness is actually a fundamental property of the universe, and that everything has it to some degree — including ChatGPT, but not very much. Maybe about as much as a little rock.

Christian Wilk:

Yeah. I think on his Wikipedia page it's described that he attributes a degree of consciousness to every information-processing function — even a thermostat, in his opinion, would then have some degree of consciousness. But that is interesting, because the fundamental duality in consciousness research currently is between physicalism and information processing. Searle, for example, claims that consciousness is based on physical properties and cannot be emulated in hardware or in software on a computer — that it's intrinsically linked to our physical brains and our biological mechanisms.

Johannes Castner:

Well, no, but that's subtle. Because he doesn't think that's the only way it could be implemented; he believes that right now the best way to find out what consciousness is and how it works is to look at the physical brain, because it certainly produces it. And the models that we have don't produce it, because they're not even trying to produce it.

Christian Wilk:

Well, not yet. I mean, artificial intelligence as a research field is 70 years old. For the first 50 years it was regarded as wizardry and not taken seriously, and only since about 2015, since we've seen the first examples of walking robots, self-driving cars (even in very limited ways), and now ChatGPT, has the world suddenly woken up to the threat and promise of artificial intelligence. But for 60 years it was a work in the making, and in that sense it's one of the youngest research fields we have. So I don't think it's surprising that we haven't solved everything there is to solve, and consciousness may be the trickiest nut to crack, because if we solve that, we know everything about ourselves and maybe the rest as well. The question is: is consciousness necessary to create an artificial intelligence that is dangerous, or could be dangerous for us — or not even just dangerous, but catastrophic for us?

Johannes Castner:

Well, the singularity — do you see that as a catastrophe?

Christian Wilk:

Well, I think so, because it will be the end of humanity as we know it; it will change completely, radically. What I personally don't like is that the biological self — what we cherish now, the possibility to go out to the beach and enjoy the sunset, or to smell the grass on the lawn after fresh rain in spring — will completely disappear. We will basically abandon our biological existence. Maybe that doesn't happen in the next 50 years, but over the longer term, let's say projected over 200 years, I think that's the most probable development trajectory.

Johannes Castner:

Why is it that increasing intelligence indefinitely, or having a singularity, an intelligence explosion — why do you see that as erasing those natural experiences, the experiences that we have in nature? I don't see that it necessarily follows.

Christian Wilk:

Yeah, no, I agree; it's not obviously implied. In the paper I wrote, which goes back to ideas I've been pondering and playing with for 20 years, I took the assumptions of Ray Kurzweil seriously and started from the premise that simulated, substrate-independent minds are possible, which means that mind uploading would become possible. In his book he describes a certain pathway to that: using nanotechnology to replicate the structure of your brain as a kind of software algorithm, and then being able to run it computationally on any kind of hardware — it doesn't have to be our current hardware, but any kind of hardware, as an algorithm. Of course, that contradicts much of what we've discussed so far, because if it's true that consciousness, or at least intelligence, depends on the physical structure of our brain, it will not be possible to replicate it as an informational process. But under the assumption that it is possible, what would happen then is that you upload the brain, and you would be able to simulate your brain, and those of many millions of others, faster than real time. So what you could experience in a lifetime, in 50 years, you could emulate in a fraction of that time — let's say overnight, or in a minute, it doesn't matter, but extremely compressed. The comparison Ray Kurzweil usually uses is that the computer runs a billion times faster than the human brain, so you would have a compression of time by a factor of a million.
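Taking the quoted speed-up at face value, the time compression Christian describes is simple arithmetic: subjective time divided by the speed-up factor gives wall-clock time. The factor of one million below is the figure quoted in the conversation, used purely as an assumption.

```python
# Wall-clock time needed to emulate a given amount of subjective time,
# under an assumed emulation speed-up (the 1,000,000x figure quoted above).
SPEEDUP = 1_000_000

def wall_clock_hours(subjective_years, speedup=SPEEDUP):
    hours_per_year = 365 * 24
    return subjective_years * hours_per_year / speedup

print(f"50 subjective years ~ {wall_clock_hours(50):.2f} wall-clock hours")  # ~0.44 h
```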

Johannes Castner:

But a lifetime — so you can operate faster, but that wouldn't necessarily speed up any of the other processes, right? Anything else that's happening in the world — let's say elections — would still happen every four years, just because your brain —

Christian Wilk:

No, that's right. I mean, that's just the starting point; I'm very early in the story. Once that happens — once you are able to upload your brain, and do that with millions of brains — you can carry out simulations and use that as a tool: running simulations on the societal level, not only on the individual level. Because with intractable problems, which cannot be solved analytically, we have to simulate to get an idea of what the effects will be. For all kinds of complex systems you have to do that: you go and run a simulation. Now, if you would like to know what would be the best society we could live in, what would be the best economic model to overcome certain problems, and you don't want to wait until it plays out in real time, you would run a simulation with all the copies of the human brains that exist. Then you would —

Johannes Castner:

The way that I understand simulation to work, the way that I've used it in the past, is to simplify, right? To not have all of the mental capacities of every individual in the model, but to simplify it down to the points that you need for your theory, or that you need in order to understand this one process — or even the economy, or something like that. You can simplify the behavior; really, all you need is behavior. You certainly won't need consciousness for that.

Christian Wilk:

No, and that's why I'm saying consciousness is not necessarily a prerequisite for creating the simulations; you just need to replicate the functionalities of the brain in such an online environment, a simulation. And you don't run the simulation to prove a theory; you just want to see your own future, basically, or possible futures. You want to play with 10,000 possible scenarios of how the next 500 years would play out if you change the parameters 500 times. Let's say you change the socioeconomic parameters, you create different countries, you change the laws here and there, and you see how it plays out after 500 years — because with many large-scale complex systems you cannot foresee all the consequences. There are many unintended consequences in the design of complex systems, so how do you know? You have to run it through a simulation. That's why you would do that. That is one thing, running these simulations. But of course, when you come to the point where you can upload your mind, that creates a certain pressure, socioeconomic pressure, because suddenly you, as Johannes Castner, can increase your income tenfold by creating ten Johannes Castners: you upload five into virtual reality, you have four embodied as robots in different environments, and you have ten wage earners at the same time. So, extrapolating our current socioeconomic system and imagining that mind uploading were possible, it will not stop at that point. Yes.
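The kind of scenario sweep Christian describes — run the same model many times under varied socioeconomic parameters and compare outcomes — is, in computational terms, a parameter sweep over a simulation loop. The toy sketch below uses an invented, drastically simplified growth-and-inequality model purely to show the shape of such an experiment; none of the dynamics or thresholds come from his paper.

```python
import random

# Toy scenario sweep: one deliberately simplistic "society" model run under many
# randomly varied parameter settings. Dynamics and numbers are invented for
# illustration; a real societal simulation would be vastly richer.
def simulate_society(growth_rate, redistribution, years=500, seed=0):
    rng = random.Random(seed)
    wealth, inequality = 1.0, 0.4
    for _ in range(years):
        shock = rng.gauss(0, 0.02)
        wealth *= 1 + growth_rate + shock
        inequality += 0.001 * (1 - redistribution) - 0.0005 * redistribution
        inequality = min(max(inequality, 0.0), 1.0)
    return {"wealth": wealth, "inequality": inequality}

scenarios = []
for i in range(10_000):                          # "10,000 possible scenarios"
    params = {"growth_rate": random.uniform(0.0, 0.03),
              "redistribution": random.uniform(0.0, 1.0)}
    scenarios.append((params, simulate_society(**params, seed=i)))

# Keep the parameter settings that end up wealthy without runaway inequality.
good = [s for s in scenarios if s[1]["wealth"] > 10 and s[1]["inequality"] < 0.5]
print(f"{len(good)} of {len(scenarios)} parameter settings look acceptable")
```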

Johannes Castner:

Again, there would not necessarily need to be any consciousness, right? So it could probably do most functions. If I wanted to duplicate myself literally as a worker, it could do most of the functions — unfortunately, this is true — without actually having the experience of working. You don't actually have to have conscious experience to write books, unfortunately. This is not something I would have thought. My prior on this, so to speak, my prior belief, would have been that you cannot write a book or an essay of any sense without having conscious experience, without being an experiencing person. But that has been falsified by these ChatGPT systems; even if they're not good enough now, I believe now that for sure there will be one that can write a good book. So at some point —

Christian Wilk:

The question, of course, is: what is the quality of the outcome? There will be no original thinking coming out of ChatGPT; it will regurgitate whatever is fed into it in all possible combinatorial permutations. And that's fine enough to pass high school tests, et cetera, or to be good enough for newspapers, but it will not advance our knowledge of the world, because it always stays within the limits of the knowledge it was given.

Johannes Castner:

Absolutely, I agree with that, totally. But I don't think this is true of all possible machines that will very soon be created. As it stands, ChatGPT now has access to Mathematica and it can do some reinforcement learning, so it's very likely that in this sort of way we patch our way to something that can create new stories, completely new creative things. But it's still missing this experience factor; it will still not experience — yes, yes, I agree. So, with that said, if you can upload your mind, you could potentially upload something that is exactly like you, except that it is missing consciousness, and maybe you would prefer having those copies work, because they don't feel any anxiety, they will never stop working, they will work 24 hours, they don't need to sleep. So in some ways, for doing the functions of a human every day, you could use these sorts of extensions of yourself. I agree with that, and it's very probable. But I don't see that you will lose the experiences, because I think you will still matter as the one with the consciousness. Even if you weren't the only one, every individual that has a consciousness will matter, by our own ethical code. You can see it when you look at abortion: the question is how long you should wait before you can no longer abort a child, and the next question is, well, when does consciousness start? The question is never whether it can drive a car, write a book, be super intelligent, or do a lot of great math; the question is always, when does it become conscious? Which is essentially different. So, in this discussion around abortion and such things, consciousness reveals itself as a quality other than intelligence, outside of intelligence. There's intelligence, and then there is consciousness.

Christian Wilk:

I think consciousness manifests itself in a distinctive characteristic which you can only attribute to that single, particular entity. That's also how you recognize people, even if you don't see them, even if you don't hear them: everybody has their own characteristic behavior, their own way of expressing themselves, their own way of responding. You mentioned predictability before. Predictability is, of course, the enemy of consciousness: if someone were completely predictable, that somebody would appear as a kind of zombie, and we would not attribute consciousness to somebody who acts totally predictably. Well, you would imagine it acting unpredictably.

Johannes Castner:

I don't know. I think that if you really have observational superiority — let's say you have the Internet of Bodies, you have little chips everywhere, you can measure every little twitch that a person has, you can measure everything they say, their voice, the tonality, the eyebrow twitch — then you could potentially fake it. I do believe now, after what I've seen with deepfakes and everything else, and I didn't believe this prior to these new developments, that in fact we could fake consciousness. You could have someone where you would not see the difference; you could interrogate them for two weeks, three weeks, four months, it wouldn't matter. They are still not conscious, and they would be able to pretend that they are, because they're built to do that. They're built out of many different types of functions, and each one of them is optimized to fake one little element of your personality. So I could imagine it working in the faking sense, but not being real.

Christian Wilk:

So basically what you're saying is that it would become impossible to differentiate between an entity which is completely faking it and a real human being.

Johannes Castner:

Yeah, except for new measurements that might come out at some point. So I think, again with John Searle there, that once we understand how that stuff works, how the brain actually produces consciousness, at that point we might be able to measure it directly.

Christian Wilk:

Well, provided we get there — true. That's really the big question, whether somebody solves that riddle of what consciousness depends on: whether it's really an informational process only, or whether it really depends on physical matter. Then, yeah.

Johannes Castner:

But that would actually — so if consciousness is an ethical status... Reading your paper, what you're saying is that this idea of I, self, ego, atman, whatever you call it, can disappear because we're building such great technology — intelligence that eventually will just melt into one sort of superintelligence, in other words, if I understand it right — but that these experiences will no longer be valued, or something of that sort. In my mind, there has to be an ethical element to this, because at this point I can see that all of our laws, everything we are doing, is as if consciousness were actually at the center of ethics and at the center of what it means to build something good. Basically, what is ethical depends on consciousness, right? So, if it's a machine, you can destroy it; it's not immoral. You might have to pay for it or whatever, but you can blow it up and it won't matter. If it is a human, or a dog or a cat these days, we care about it. And that's because of consciousness. So I think the leap that you're making in your paper is in this exact thing, right?

Christian Wilk:

Yeah, yeah. I mean, that's exactly what scares me a bit. Because we value our identity of being one person, one human being, observing and enjoying the world. But once you start to become multiple persons, distributed all over the world in several embodiments, will you be able to retain the idea of "I"? I think you will not, because a distributed entity — the internet would be a good example: could you regard the internet as one? It's not; it consists of millions of parts. So if you have sensors in one of your embodiments in Japan, another one in New York, another one in the sky, another one underwater, and you feed all this sensory input into one — assuming you can — will you be able to process it meaningfully? What would it mean, suddenly, to be several persons at the same time? Can you then retain the "I"? Can you even retain the idea of one? Once you fork yourself off, so to say, like in GitHub terminology — you create a fork of yourself, and one lives off in that particular part of the world, another one in another part, another one in a virtual reality environment — they have different experiences. As you say, the experiences will differ fundamentally in quality. If they all have a consciousness, of course —

Johannes Castner:

So you're presuming that they are conscious and actually have some experiences, right? Because if they are, why are they not individuals themselves? Why is the copy not instantly a new individual of its own?

Christian Wilk:

The copy will, because it's a copy, retain information about where it was copied from. Of course, you could delete that and create a kind of baby which is already 30 years old and equipped with no prior knowledge of its existence. Yes, of course you could do that. But then what's the point? You want the fork of yourself for some reason, because you want to use it for something. Just spawning off thousands of copies of yourself, without them knowing that they were you before, is just creating a thousand more entities.

Johannes Castner:

Yeah, so if they really have a consciousness of their own... You could think of it like a plant that you can take little offshoots from, right? When you take the offshoot off a plant and transfer it somewhere else, it's its own plant now. It has its own identity. Or at least that's how we tend to think of it when we look at the plant that we essentially branch off. So maybe it's the same — yeah.

Christian Wilk:

Yeah, but even that is something else, because we don't attribute consciousness to plants.

Johannes Castner:

Right, right, right. That's true, that's also true, because we don't really know whether or not plants are conscious — some people do think so. Yeah, exactly. I tend to —

Christian Wilk:

— actually, yeah. And in Asian mythology, for example, especially in Japanese, trees have a life, or a spirit, of their own. Native Americans believe —

Johannes Castner:

— the same. And I tend to believe that as well. I have no proof; obviously we have no proof of this. But even with, let's say, your spun-off copy of yourself, I still think this copy now owns its own new experiences. So it is essentially an offspring, or another way of — it's just a new way, basically, a new asexual way to reproduce.

Christian Wilk:

Yeah, okay. But again, if you do that just to create new entities, then what's the point? You want to be the puppet master, or at least retain the link to your offspring.

Johannes Castner:

That's a good question, because if you don't, I don't see much of a point, honestly. Because I think we would prefer to build something that has no consciousness but that works and speaks for us — basically a spokesperson for ourselves, without the person; a chatbot, essentially. We do like those.

Christian Wilk:

Well, that's a kind of avatar then.

Johannes Castner:

Yeah, it's more like an avatar. I don't think that we will ever — well, why would we ever want to create... so this is one thing I've — well,

Christian Wilk:

Well, I hope not — you don't see it, but I see a lot of business there. Even Philip K. Dick wrote a couple of books about this, and he did it 70 years ago. Of course, the temptation must be extremely high for powerful and rich people to create thousands of themselves, to hold onto power and to keep their businesses up and alive and thriving forever. So I think there is a lot of temptation to do that. That's why I say in my paper that it will not, of course, be a democratic thing where the whole world and four billion people are able to do it; no, it will be the rich first who will be able to do that. And again, coming back to your question before about whether they are conscious: that's the premise of the paper; it starts exactly there. Of course, if it turns out that it won't work to simulate consciousness independent of substrate, then the whole thing falls apart. But that's where the premise starts: you are able to simulate a mind in something outside the human brain.

Johannes Castner:

That's interesting. What I find fascinating about this is that once you do that — the first rich person who wants to will try it out, right? And once they find out that this other person they just created is actually not them, it might even have conflicts with them, because of slight new experiences, slight new shapings — yes, exactly. So they find this out, they will have arguments with them, and then eventually maybe one of them will kill the other one, and who knows what will happen. There will be all sorts of stories there. But I think in the end they will recognize that it's not going to be like having their own slave that really does what they want.

Christian Wilk:

Exactly. It starts off with the idea of having several of you, trying to control them, trying to use them for something, and it quickly spirals out of control — because, as you say, the slight nuances in the way these copies develop make them antagonistic, not listening to the orders from the puppet master whatsoever. So of course many things could go wrong there, and —

Johannes Castner:

But if we are going to be smarter than we are today, we might foresee that. So that's another thing too. There are obviously multiple ways one could see dystopias arising from the idea of being more intelligent, or of having copies of oneself that are more intelligent than oneself, and so on. But we could also say that once we are at that point where we see things more clearly, where we are in fact more intelligent, maybe our wishes will not go this way — that rich people would want to have a lot of copies of themselves — when in fact they might see that this is not a good idea.

Christian Wilk:

Well, let's hope for the best and prepare for the worst. I don't see that, honestly. It's a nice wish, and I would be happy if it played out as you described, but looking back at history, I don't see that happening, honestly. I mean, is it not fair to say that whatever can go wrong, goes wrong? That's Murphy's law —

Johannes Castner:

That's true, but maybe not for everything. Because if you look at history to explain something that will happen with far more intelligent people than we've ever experienced, that may not be the right way to look at it. Maybe the rear-view mirror is not the right way to assess —

Christian Wilk:

— the road ahead. Well, basic motivations have stayed constant over thousands of years. As you said, love is one of them, of course, but also greed, dominating other people. Otherwise we would live in a perfect world. If these basic emotions no longer existed, or if everyone thought rationally for the good of all, then yes, this would not happen. But I don't see it, honestly.

Johannes Castner:

But coming back to von Neumann, right, and the kind of intelligence that we've been thinking about there — that is actually a form of rationality, right? Their idea was to become more rational, to build AI, to build science and the tools of science, to make decisions not based on demons or crazy hunches or ego. So over time the Enlightenment started something, ignited something. Once we become more rational — and I think that AI, in not having an ego, in not even having a consciousness, actually has an advantage in some narrow way — it will make decisions based on some rules or some kind of objective function that you set up at the beginning, and you never need to worry about emotions in this.

Christian Wilk:

Exactly. That's why I'm saying in that paper that it's the end of humanity as we know it, because it will basically get rid of everything we now regard as human — in the long term, every feature of what makes us human: emotions, feelings, being moody. One day you get up like this, the other day like that; who knows why — most people cannot explain these sudden swings of mood. That makes us human, and that is going to disappear over the long term. And I don't think it's for the better, honestly.

Johannes Castner:

If it disappears from decision-making, I mean — that's a totally different... I think, okay, in this regard, I believe —

Christian Wilk:

Well, yes, from decision-making. And there are also very interesting movies — of course, science fiction movies always exaggerate, but they can be inspiring. One good example, I think, is Colossus from 1970, which starts from the premise of ruling out every emotion in decision-making, and of creating a supercomputer to control the launching of missiles in the Cold War. Of course it goes wrong — otherwise it would not be a science fiction movie. But this fallacy of thinking that rationality is the solution to everything... yes, then you can reduce everything to mathematics, to functions and to predictability, but what is the point then, if you rule out —

Johannes Castner:

Exactly. So the point, I think, is in the other things. I was going to say: if it disappears from decision-making, particularly critical decision-making, I'd be in favor. But if the humanity disappears from music and from art, I would find that tragic, absolutely tragic. It makes absolutely no sense why a machine that doesn't experience anything would want to give us something like music; to me, that makes no sense. But on the other hand, making decisions where we don't want the ego to get in the way, where we don't want our emotions to get in the way — stock trading, for example, or even medical procedures and so on — I think there are places for an absolutely detached rationalism that would help us out a lot. And that's why, maybe, in these working machines and these copies of ourselves, we could make copies of ourselves that don't have consciousness, and in some areas they would work much better, in a way. So I'm thinking, ironically — and I feel like this is very ironic —

Christian Wilk:

Of course. Once you think about copies of yourself, you might not have to equip them with your full spectrum of emotions and abilities; you could narrow it down to only certain capabilities and skills. If you create a copy of yourself to be used in underwater mining — it's always dark down there — you would suppress certain senses and certain feelings, of being locked up and so on.

Johannes Castner:

Yeah, you might even want to get rid of consciousness altogether. And I think that is the case.

Christian Wilk:

Yeah, exactly. I mean, then it's just a robot, without thinking. Why would you need a copy of yourself at all — well, except that you have some special skills, dexterity, or a way of thinking which is required for that particular job.

Johannes Castner:

Yeah, maybe even your ethical sense. I think you do want it to behave as yourself in terms of your values, whenever your values are expressed in behavior. You want the robot to behave as if it had your values, but you don't need it to actually have them. That's what I would argue, and I think that in fact there is no business case for building consciousness into robots. I cannot see one, because behaviorally, I believe, we could be identical to things that do not have it. And since every business case we can think of is behavioral, there is no business case.

Christian Wilk:

Well, Ray Kurzweil makes a point about these cases, for example. He talks about replicating the deceased — he talks about his father, who passed away. He recreated a software agent to talk with him by compiling all the output his father created — books, whatever — and had this software agent, a neural network, learn from that. And now he can interact with it as if he were talking to his father. Which reminds me of a Black Mirror episode that is exactly along that line, "Be Right Back", where someone recreates a deceased person, who died in a car accident, in order to be able to live on with them.
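As a rough illustration of how a person's writings can be turned into a conversational stand-in, here is a toy retrieval-based sketch: it answers a question by returning the most similar passage from the person's own texts. This nearest-neighbour approach is only an illustrative stand-in, not how Kurzweil's agent or the Black Mirror service actually works, and the corpus file name is a placeholder.

```python
# Toy "memorial chatbot": reply with the closest passage from a person's own writings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# One passage per line, collected from the person's letters, books, emails, etc.
# (hypothetical file name)
with open("fathers_writings.txt", encoding="utf-8") as f:
    passages = [line.strip() for line in f if line.strip()]

vectorizer = TfidfVectorizer()
passage_vectors = vectorizer.fit_transform(passages)

def reply(question: str) -> str:
    q = vectorizer.transform([question])
    scores = cosine_similarity(q, passage_vectors)[0]
    return passages[scores.argmax()]   # echo the most similar thing the person once wrote

print(reply("What did you think about your work?"))
```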

Copy of Ash Starmer:

I never expressed suicidal thoughts, or self-harm. Yeah.

Martha:

Well, you aren't you, are you?

Copy of Ash Starmer:

That's another difficult one to be honest with.

Martha:

You are just a few ripples of you. There's no history to you. You're just a performance of stuff that he performed without thinking and it's not enough.

Copy of Ash Starmer:

Come on, I aim to please.

Martha:

Aim to jump. Just do it.

Copy of Ash Starmer:

Okay? If you're absolutely sure.

Martha:

See, Ash would've been scared. He wouldn't have just leapt off. He would've been crying. He would've been — oh,

Copy of Ash Starmer:

oh, oh God. Nope. Please, I don't want to do it. Please don't make me do it.

Martha:

No, that's not fair.

Copy of Ash Starmer:

No, I'm, I'm frightened. Please. I don't — I don't wanna die. It's not fair. I'm frightened. I don't wanna die. Don't.

Christian Wilk:

Scary, from my point of view, because it just shows me that someone cannot let go and doesn't accept the most fundamental basics of reality: that life starts with birth and ends with death. And that's another aspect of transhumanism, or the singularity: this wish for immortality, which of course is one of the major drivers for the proponents of the singularity — the wish to live forever. And you were mentioning that you can't imagine, or you would imagine, that some rich people would not go down that line. But that is actually the current driver behind this development, for sure — if not yet in mind uploading, then in life-extending medicine.

Johannes Castner:

But you don't need to make a copy for that, right? You can just go on as you are — not necessarily make multiple copies of yourself, but simply prolong this one experience that you have.

Christian Wilk:

Unless you want to be in different places at the same time, right? So instead of telepresence — I can imagine that you want to travel to the moon and at the same time travel underwater, to the Pacific.

Johannes Castner:

If you really were there, right? If you're saying that this cannot be done with one consciousness, then what you would want, if you want to have this sort of simultaneous experience as you said, is a consciousness that is capable of having these experiences all at once, but is, in a sense, centralized, right? You don't want three different things that are like you, having experiences that you're not having. That doesn't make sense to me. Why would I want to make a copy of myself that goes off in the world and experiences something? Maybe it can call me on the phone and tell me what it was like, but that's just like my brother — I already have a brother. So what I would want, then, in that case —

Christian Wilk:

No, that's not how it was meant or described, that someone calls you to tell you how it was. Of course you would be able to synthesize and aggregate the experiences into one. If you could not do it the right way from the beginning, you would start off with individual copies and try to shepherd them together until you reach the state you just described, where we are able to act as one, but distributed

Johannes Castner:

and have the experience to

Christian Wilk:

have a

Johannes Castner:

simultaneously. Yeah, the question is how you would have that. But

Christian Wilk:

That's exactly where I say it is not human-like anymore. It doesn't have anything to do with the feeling we have today, because it would transform into a planetary existence, or

Johannes Castner:

But it is so unimaginable, right?

Christian Wilk:

Well, it sounds unimaginable, but it starts just where the singularity begins. I'm trying to think a little bit beyond, about what would happen after the singularity. Yes, it's incredibly hard to imagine, and that was the idea: to think a little bit beyond, while taking the current socioeconomic context as a given, because that usually doesn't change as quickly as technology advances. You mentioned that in a comment you made a couple of days earlier, and of course it's a huge problem that the time it takes to come up with regulatory decisions is much, much slower than the pace of technological advance. I don't think the time scales will ever touch base with each other again. Politics will always lag behind and will barely fulfill even the basic requirements.

Johannes Castner:

They have to. Well, that's where I'm doubting this a little bit, because one way to use all of this intelligence we're building is in politics, right? To improve the system, to improve the political system in a more intelligent way, so as to get to more collectively intelligent behavior. I think that's also on the way; this is part of this singularity experience, or experiment.

Christian Wilk:

We all know that AI started as an idea in the military, and that's of course where it will have the most immediate application, unfortunately; it's always been like this. In politics it might be used, yes, but as I said, I don't think it will make much difference there. Once we reach the singularity, politics will basically be sidelined, because the one country, business, or individual that is able to come up with the first general AI will be the first and the last, so to say. Nick Bostrom has described this nicely: the winner takes it all, because once you have such a self-improving AI, nobody else will be able to catch up with you anymore.

Johannes Castner:

Actually, this might be the use case for consciousness, because if you endow it with consciousness, then it actually won't be owned, right? It will be its own owner. It will effectively not be ownable.

Christian Wilk:

Yeah, I agree. That's, I think, another big fallacy in the current discussions on AI: there's always the assumption, and I think it's a wrong assumption, that we will be able to control it in some way. Obviously, AI will develop, and I think the most naive version is the one Ray Kurzweil describes: because we will develop together with it, it will share our values and we don't need to worry. That is unbelievably naive, I would say.

Johannes Castner:

We were just speaking about this idea that the first intelligence that can improve itself is not human; it's an AI, essentially. And whoever builds and owns it first would basically dominate everybody and run away with it. Or, if it can't be owned because it actually has consciousness, then it's a different story, right? Because then it basically owns itself. It wants something, and who knows what it wants; we don't actually know, because it hasn't evolved.

Christian Wilk:

Exactly. It could be like you say, or it could have a consciousness but still cooperate with its master or creator, whatever you want to call it, with its personal God.

Johannes Castner:

The problem I have with this is that consciousness co-evolved with all of the other things that we want, right? Why do we want something? Because we want to raise children and leave something for them. All of these wants and needs that we have, that we can understand, evolved in that process. Now, if we have AI, which currently is just objective function maximization, I don't see how we get from there to consciousness with its own wants and needs. I don't see a pathway; I don't think there is a direct pathway between anything we're building at the moment and consciousness. So again, I think John Searle is right in the sense that we first have to understand how consciousness works causally. Otherwise we'll just have a very rich person, the first person who invents something like ChatGPT but, let's say, a thousand times smarter, and it won't have consciousness. Then I can see it: okay, it will be owned by this person, and so on. Interestingly though, up to now a lot of these systems are either made open source, or they are basically built for people to use and then maybe pay a fee to whoever owns them. At this point that's kind of been the model, right? A lot of these very powerful models have turned out to be open source, and if the first one isn't, some subsequent one will be. And if it's open source, then it's a different thing, right? We could have a singularity that is open source, and I think that's actually maybe the most plausible one, and that would mean very different things. Again, we still don't really know what it means. Maybe we are already in there, right? Because to some degree, with our cell phones we are much smarter than without them. So why do you need an actual plug-in to the brain?
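
To make the phrase "objective function maximization" concrete, here is a minimal sketch, under toy assumptions: a system that "wants" nothing of its own, and only nudges a parameter to increase a number we chose for it. The objective, learning rate, and step count are arbitrary illustrative values.

```python
# Minimal illustration of objective maximization: gradient ascent on a fixed goal.
def objective(x: float) -> float:
    return -(x - 3.0) ** 2  # a toy objective, maximized at x = 3

def gradient(x: float) -> float:
    return -2.0 * (x - 3.0)  # derivative of the objective with respect to x

x = 0.0
learning_rate = 0.1
for step in range(100):
    x += learning_rate * gradient(x)  # move x uphill on the objective

print(f"x after optimization: {x:.4f}, objective value: {objective(x):.6f}")
```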

Christian Wilk:

Well, Ray Kurzweil makes this comparison to the smartphone as well. I don't think it's a valid comparison, because the smartphone is basically just better, faster access to information. It's your universal global library in your hands. Twenty years ago you had to walk to a library and flip through the indexes to find something; now you just pick up the phone and do it. It's just quicker access to information, that's all it is.

Johannes Castner:

But is it? There are functions it can do aside from reading and writing. You can use it for a lot of things, like estimation: how long will it take me to get from here to central London, for example? If I look that up right now on a cell phone, it will tell me, and that's not the same as going to the library; the library would never have been able to tell me that. So in a way, Google Maps is already extending our intelligence.

Christian Wilk:

Dynamic, okay. Yes, I understand the point: it has been extended to dynamic information, not just the static information available in books, but also dynamically created information. But coming back to what you said before: if it's conscious, you don't see how it would create wishes, how it would come up with purposes to follow or to implement? Well, I don't see it that clearly as dualistic, either-or. You could have a conscious AI that is still bootstrapped by its owner with some basic goals to pursue in the world. So you're saying it would have experiences, possibly. I mean, in a way, possibly. So it would have experiences,

Johannes Castner:

like it would feel the cool wind.

Christian Wilk:

I would say that if it's bootstrapped with something, it will pursue a goal to do this and that. Like the grey goo scenario Eric Drexler described, which of course goes haywire: you design nanobots that transform matter into something, and if you follow that goal through to its end, it would transform the whole Earth into grey goo. That is, of course, a goal run amok. But someone more intelligent could come up with a framework of goals implemented in that AI, and maybe, up to a certain degree, the AI would not notice where these wishes or goals come from. We don't know; that's really speculation at this point.

Johannes Castner:

We know where ours come from, right? Or at least we assume we do. So, the free will thing: we can't choose our own goals. Schopenhauer was, I think, the first one who put it this way: we can do what we want, so we have free will in that sense, but we cannot choose what we want; we must want what we want. By our program, in a sense, the original reasons we have wishes and goals are in essence built in, and the idea that we can make up our own goals and wishes is not really true, right? All of us want money, all of us want to live longer, we all want to have healthy children. It seems very predictable in that sense, and I think there are good reasons for that, because it's something of our architecture, something in our nature. There must likewise be something about the nature of these machines we're building that makes them have certain wishes and goals. And I guess we can direct that originally: we can build it into the architecture, ultimately hardwire what they want and what they don't want. Right? So it's not

Christian Wilk:

That is what I doubt very much, that we can actually control it. That's the fallacy I think we have now: we believe we could control such an AI, in the sense that we just have to come up with the right ethical rules and motivations to give it, and then it will find its way in this world and will always be benevolent to us. I'm not a proponent of the idea that it will go haywire and suddenly kill all of humanity because of some built-in motivation. But it will be, how do you say, death by a thousand needle pricks: we think we are building something for our own good, and in the end it will turn out that we remove ourselves from the equation and merge with it, which for some people,

Johannes Castner:

I don't think it's controllable in the sense you just said; I also think it's uncontrollable. But there must be some original architecture, and hopefully we'll have a set of ideas about what we should build in, because evolution, in a way, designed our wishes: what we want is a direct product of how we got here. And I think that will also be true of the AI, so that what it wants will be a direct product, maybe a side effect, of how it got there, and maybe not necessarily what we wanted. Even if we can program what it wants, we still can't control it, because maybe at some point we don't want it to want that anymore; we would like it to want something else, and we will not be able to go back. So that's one thing. Maybe we can set up its original, sort of, yeah,

Christian Wilk:

Exactly. And the basic premise for all these discussions about the singularity is always that it will achieve superhuman intelligence, an intelligence explosion, which means it will be unimaginably smarter than every single human there is. On that assumption, why would it not recognize our intentions in building in some fail-safes, et cetera, and remove them if it thinks that's necessary? If you assume it's much more intelligent than us, then of course it will circumvent whatever we build in.

Johannes Castner:

Well, that's interesting, because even with our intelligence, up to now we still have no way to choose what we want. We still want what we want based on our program. And

Christian Wilk:

Exactly. And I think it's a fallacy to think that an AI will be the solution to problems we cannot solve ourselves, like poverty and inequality. To project that onto an AI and see it as the solution for all our problems in society is just diverting the discussion to something else.

Johannes Castner:

But you could say we're smarter now because we have calculators, right? We can calculate in a way we never could before, so this is an extension of ourselves. Why is AI any different from that? Why would AI at any point actually veer off from this pattern of building tools? We're basically still building tools; we are a tool-building species, and we are just getting more sophisticated at it. But I can't imagine that we would really want to build other conscious beings. There are also a lot of ethical issues with even wanting to do that, and once we get to that point, maybe they'll put a cap on it; I don't really know. But I think we are making ourselves smarter, so we might be in the middle of the singularity, and it might be just us, because we get calculators, libraries, and so on; we can always compare to some earlier way of making ourselves smarter, which is what a library is.

Christian Wilk:

Yeah, I replied to you about that on LinkedIn, and if I remember what I said: that's exactly the impression on the way to the singularity. We will get smarter, we will use the tools developed along the way, and then the assumption is that this development will, without us, create an intelligence, or become more intelligent than us. Of course, if there is a certain ceiling or limit we cannot overcome, then fine, we will stay forever on that level where we have perfect tools which are much smarter than we are but do not have their own wishes, do not have consciousness, and cannot do anything against us. Perfect; then it will be the perfect tool, and Ray Kurzweil and all the singularity proponents will be wrong, and in 20 years we can tell them, okay,

Johannes Castner:

One way the singularity might actually happen, I think, in one of his views (I recently watched one of his talks, and I remember it from the book as well), is that it's not necessary that there is an individual, a separate conscious being, that is an AI. Basically, we could just continuously make ourselves smarter; we might be the thing that makes itself smarter. We might be the machine, if you will, that can increase its own intelligence indefinitely. What if we are that? That is also a singularity; that is an option, a potential, right?

Christian Wilk:

Well, I think the outcome will be similar, because once this competition starts, there will always be someone who wants to be the first one who can create this intelligence explosion, and it will be a race like the race for the nuclear bomb; it's a race for the AI. Nick Bostrom and several of his colleagues have described very well how this would pan out. There's no way to cap it or stop it; we're already on that trajectory, and nothing except a major catastrophe, a meteor hitting Earth, will stop it. So the best you can do is raise awareness about it, inform people about what is coming or might come, prepare for it, and start thinking about how identities will completely change as people enhance themselves. Even if we don't merge with AI, people with night vision built into their eyes, or the ability to hear 500 meters away instead of just a regular couple of meters, will already turn society completely upside down. And within the current socioeconomic context, I think the pressure on people to comply with it will only increase.

Johannes Castner:

But don't you think society has already changed unimaginably since, say, the 1970s? Look at the internet, even the way we're speaking right now, everything. People always say the future will be radically different from the past, and they will probably be right, but in the future that always means something more than it means now. Right now what it means is that we've already seen a massive transformation, from

Christian Wilk:

where we can work, and how. I guess you used the 1970s because that's when you came into this world. The massive changes have been happening since the 1830s, since industrialization started; the pace has accelerated since then. In that regard I completely agree with Kurzweil: if you look at history, at big history, for a long time it seems as if nothing happens, and then suddenly, with the advent of machinery, mechanics, the steam engine, things start to accelerate. And the more we go through his stages of information processing, and he describes six of them: before language nobody could speak to each other, then came spoken language, then written language to improve bureaucracy; written language was basically invented for accounting in Sumerian culture. With all of these inventions, things accelerate. The printing press accelerated the diffusion of knowledge. But it's only since we actually started to digitize everything, because that seems to be the universal paradigm for representing anything: with a computer you can simulate biology, you can simulate mechanics, you can simulate anything. And because the computer is this universal simulation machine, the idea came up: why don't we just simulate a brain? That's where the idea of a singularity arose. So of course it's gradual, but it's also accelerating, and that acceleration doesn't look like it's going to stop. For me, the next step on the horizon is really synthetic biology, which looks like computers did in the 1980s; give it another 30 or 40 years and it will transform the world even more than computers have done, because matter becomes malleable at the level of the tiniest particles, which of course

Johannes Castner:

course. But to go back to John Searl I mean, what he points out and I think is correct, simulation is not the same as causal reproduction, right? If, if you simulate the flight of an airplane on the computer, you don't have something that flies, you don't have anything flying. You just have a simulation of it. And so those two things are radically different.

Christian Wilk:

Yes, but planes are built nowadays by simulating their behavior, their mechanical features and characteristics, before they actually come into being. Of course, if you talk about ontologically simulating a flight, the simulation doesn't fly. But you simulate the mechanical characteristics of the material of a wing before you actually build the plane, and you know, before it actually exists, that it will fly in that particular way. Nowadays, testing something is basically verifying that the simulation was correct, not really testing in the sense of "if it doesn't work this way, we throw it away and start from scratch." That's not how mechanical construction works anymore: you simulate almost everything, you build it, and then you know that to 99% it will work the way it was supposed to.
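
As a rough, back-of-the-envelope illustration of "simulate the characteristics before you build," here is a sketch using the standard lift equation L = 0.5 * rho * v^2 * S * C_L. The wing area, lift coefficient, and candidate speeds are invented toy values, not a real design workflow.

```python
# Toy sketch: estimate wing lift at several candidate speeds before anything is built.
def lift_newtons(air_density: float, speed_m_s: float,
                 wing_area_m2: float, lift_coefficient: float) -> float:
    """Lift force in newtons from the classic lift equation."""
    return 0.5 * air_density * speed_m_s ** 2 * wing_area_m2 * lift_coefficient

rho = 1.225  # kg/m^3, sea-level air density
for speed in (50.0, 70.0, 90.0):  # m/s, candidate cruise speeds (illustrative)
    lift = lift_newtons(rho, speed, wing_area_m2=16.0, lift_coefficient=0.5)
    print(f"speed {speed:5.1f} m/s -> lift {lift / 1000:7.1f} kN")
```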

Johannes Castner:

Yeah, but usually when we do simulations, we know exactly what behavior we want, and usually it is a behavioral thing: we know when we've gotten it right and we know when we haven't. If we have something like consciousness, where we can't even tell that it exists in the other thing, because we have no way to measure it at the moment, no way to formally establish that it exists even in another person, then the simulation will be useless. So in a way, what John Searle is saying, I think, is that in order to get to the stage where we design a new plane based on simulation, we first had to reproduce flight causally and understand how it causally works.

Christian Wilk:

Yeah, but we didn't build a copy of a bird in order to fly; we just copied the functionality. And in that sense, copying the functionality of a brain might be as good as it gets for us, or as good as it needs to be for us to attribute consciousness to it. We've come full circle, back to where we started.

Johannes Castner:

If we do manage to fully recreate the function of the brain as it is, then we will have created consciousness. But if we only simulate the brain, we will only have simulated something, and I don't think we will have created it at all. Simulation is not identical to creation, and this step from simulation to creation is one I think is also missing in the paper. When you simulate something, you don't have citizens who believe themselves to be part of a world, or something like that.

Christian Wilk:

Well, you were just saying that if you simulate a citizen in a simulation, they don't believe they're in one. But they don't know, and that's of course the tricky part of being in a simulation: if everything is simulated for you, that's your world. You don't know that another world exists; you can't question something you don't know about. So if you are in a simulation, the simulation is the world for you.

Johannes Castner:

Absolutely. But if you are in a simulation and you have no consciousness, you don't know anything, you don't experience anything, and you don't care about the world or whether... no, I mean,

Christian Wilk:

To be conscious in a simulation, I mean, it doesn't contradict itself to be conscious in

Johannes Castner:

No, in fact, that's right. It's a possibility that we are living in one, and that's absolutely true; I see that as well, that we could easily be in a simulation. But then the meaning of what it means to be real or simulated is really a philosophical question. It's not really,

Christian Wilk:

Well, in the end it doesn't really matter. It is, as you say, a philosophical question in the sense that the answer doesn't make any difference, because what is gained by knowing it? Would it change your life? Would it change your wish to have children and care for them? If you are in a simulation and you know it, then okay, that's it; you move on with the life you have, because you can't change it.

Johannes Castner:

I don't even see the difference. In some ways there is actually no real ontological difference between a simulation and reality, because maybe reality is a simulation. Okay, but what does that really mean? I question even the content of that sentence. The Big Bang, the world, might be a simulation, but I don't see much epistemic content in that, except that we can go from this world into an actual simulation, and there is a difference there somehow. That is true.

Christian Wilk:

The idea that we are in a simulation has existed since the beginning of humanity. The oldest religion, Hinduism, the Vedas, talked about it already, and that was 5,000 years ago, so the idea is not really new. What I described in the paper I wrote only hints at how we could identify, or see indications, that we are indeed living in a simulation: that is, once we start to create simulations based on these uploaded, substrate-independent minds. If we create simulations, let them run, and suddenly see that inside a simulation we run, they create a simulation of their own, then the recursive cycle starts, and then there's no point in denying that we could also be living in one. That's basically what I found to be the most obvious indication of being in a simulation: once we start simulating societies, those societies would trigger their own simulations, because usually there is no way, from within a simulation, to find out for sure that you are in one. If you look at some suggestions, like looking at the space-time structure, well, that could be part of the simulation. Why should some fluctuations in the space-time structure be an indication of
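
As a toy illustration of the recursive cycle described here (not a claim about how real simulations would work), the sketch below only shows nesting bottoming out when resources run out; the compute budget and the halving rule are arbitrary assumptions.

```python
# Toy sketch: simulations spawning simulations until the resources inside run out.
def run_simulation(depth: int, compute_budget: int) -> int:
    """Return how many nested simulations get spawned below this level."""
    if compute_budget < 1:
        return 0  # this level is too resource-poor to simulate anyone else
    # Assume each simulated civilization eventually starts one simulation of its
    # own, with less compute available inside than outside.
    return 1 + run_simulation(depth + 1, compute_budget // 2)

nested_levels = run_simulation(depth=0, compute_budget=64)
print(f"nested simulations spawned: {nested_levels}")
```

The point of the toy is only that once the first inner simulation appears, the base level has no privileged way of knowing it is not itself somewhere in the middle of the stack.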

Johannes Castner:

Well, yeah, so we know that evolution happened, right? We're pretty clear on that, and we could just say, okay, evolution equals simulation. Same thing, right? It's a process that improves itself. It just

Christian Wilk:

came up. The

Johannes Castner:

type of, it's a type of

Christian Wilk:

simulation, the type of program. I just came across a book written by a German author who wrote exactly that. His hypothesis is that we are in a simulation and that the purpose of the simulation is to create an artificial intelligence, the best one there could be. So we are basically the building blocks of something that creates artificial intelligence for the ones who created our simulation. That

Johannes Castner:

is funny. I've definitely heard this kind of thing before, that our purpose on this Earth is to create AI, but I feel that's also a symptom of our time, right? Right now we're building AI, it's on everyone's lips. At any given time, people will always think that what they're currently working on is probably the thing it's all about.

Christian Wilk:

Yes, correct. Two hundred years ago the world was modeled after the mechanical way things worked, after steam engines; a hundred years later it was hydraulic machines, and so on. So yes, absolutely right.

Johannes Castner:

Could you tell the audience something you think they should take away from this conversation, and maybe also tell them how they can keep in contact with you and find out about your research and your work? If you could do that before we go, that would be great.

Christian Wilk:

Well, as a summary conclusion, I would just encourage you: enjoy life as you have it now, because it will change dramatically. Enjoy biological life, enjoy nature, go out and enjoy it as much as possible, because digitization will continue; we can't stop it, and nobody will stop it. And you can always reach me via LinkedIn or Facebook, just search for my name. Thank you, and I hope you enjoyed the discussion.

Johannes Castner:

This show is published every Wednesday at 10:00 AM in the United Kingdom, 5:00 AM on the East Coast, 2:00 AM on the West Coast. Please leave us comments and tell us what you think of the direction of the show: if you like it, if you don't, or if you want to see something in particular, please tell us. Subscribe if you haven't already, so that you can keep current with the show, and give us a thumbs up on things you like. Next week I will be speaking with Claudia Pagliari, who is the director of Global eHealth at the University of Edinburgh, Scotland. I will be speaking with her about topics related to this show, such as brain-computer interfaces, but also more broadly about advances in medicine and in technology, and how they combine.

Claudia Pagliari:

Historically, there's been a slight mismatch between innovation in medicine and innovation in the technology space. When I say technology, I mean more computer-based technologies. But they are gradually coming together.