Utopias, Dystopias and Today's Technology

AI's Trajectory: From '80s Beginnings to the GPT Era and Beyond

June 07, 2023 Johannes Castner Season 1

Join our host Johannes and AI expert Neville Newey as they explore the intricate world of Artificial Intelligence (AI), quantum computing, and consciousness. This compelling discussion ventures into the origins of AI, its evolution from the AI Winter to the AI Spring, and the potential it holds for the future of technology and society. From early programming experiences to cloud computing, the conversation covers the impact of Moore's Law, the rise of language models, and the future of personal computing. But it doesn't stop there. The duo delves deep into the philosophy of consciousness and sentience, scrutinizing these concepts in humans, animals, and AI. The episode culminates with debates around AI regulation, its potential misuse, and its future role in transforming work and education. Get ready for a mind-bending conversation that will challenge your understanding of AI and consciousness. Don't miss this insightful episode of Utopias, Dystopias and Today's Technology. Tune in, share with your tech-savvy friends, and join the conversation!

Johannes Castner:

Hello and welcome. My name is Johannes and I am the host of this show. Today I'm here with Neville Newey, and we will be talking about AI: how it has changed over the years, the culture around it, what data science means and what it has meant traditionally, and whether that has changed. We'll also address some myths and some hype in AI. And with that, let's get started right away. Neville, let me introduce you. Neville is a serial entrepreneur. He built a company called Milo that was acquired by eBay, and in 2015 he was a director of data science at eBay, where he hired me as a senior data scientist. So that is our personal history as well, just to let you know about that. Welcome, Neville. It's great to see you, and as always, I hope it'll be a great discussion. So let me just start out: maybe you can tell our viewers and listeners a little bit more about your background, in terms of how far back it really goes, when you started working with AI, and where you were; I think you were in Botswana, is that right? Anyway, go ahead and tell us a little bit about your background and your own experience with AI over the years.

Neville Newey:

Sure. Thanks, Johannes, and good to be here. I'll just start off with a small clarification around Milo, because your words give me more credit than I deserve. I didn't actually found the company; I joined when it was already going. However, I did make quite a big contribution from that point on. So, to your question about my history with AI: I'd say it started a very long time ago, before I really even knew what AI was. It started when I got my first computer. I must have been about 15 years old, and this was a Sinclair ZX81. I don't know if you're familiar with that computer, but it was made by Sinclair, I believe; they were a British company. And this was before the internet. A friend of mine used to get these electronics magazines from the UK; this was in South Africa. In the classified section at the back was an advertisement for this amazing new home computer, the first of its kind, that had just come out. It was about 300 pounds, and at that time it came with one kilobyte of memory, so just over a thousand bytes.

Johannes Castner:

What was the year, roughly?

Neville Newey:

That would have been around 1981 or '82; you can look up the ZX81. It was actually the second one in the series, because there was a ZX80 before it; in fact, the 81 probably referred to the year it came out. Anyway, he ordered this computer, and he was very, very excited. He said, this thing is being shipped to me from the UK; in those days it took about six weeks for things to arrive. I did not even know what a computer was at that point. I knew what a calculator was, because my father had an HP calculator, so I assumed it was some sort of calculator-type thing. Anyway, the day it arrived, he invited me over to his house, and we were very excited as we unpacked it. You had to plug it into your TV; the unit was really a keyboard with the chip inside it. The operating system was not much to speak of: it was basically a BASIC environment, as in the BASIC programming language. When it booted up, you landed at a prompt inside a BASIC interpreter, and then you could start typing in programs. We read the manual and got an idea of what the different keywords in BASIC meant, and the very first program we created was a chatbot. It was extremely simple, no more than about 15 lines of code. It was something like: print "Hello, I am a ZX81 computer, what is your name?", then get input, then "How are you?", and so on. We didn't realize it at the time, but that was essentially an extremely primitive chatbot. That little episode got me thinking immediately, from the get-go, about the possibility of synthesizing human thought and human logic, and of getting machines to behave, in some sense, like humans. It was an instant fascination for me, which follows me to this day. And that was, what, about 40 years ago. So that's the very far-back history, and maybe as we go along I'll throw in other bits and pieces.
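
For readers curious what that first program might have looked like, here is a rough sketch of the kind of 15-line chatbot Neville describes, rendered in modern Python rather than ZX81 BASIC; the exact dialogue and the keyword check are illustrative, not the original code:

```python
# An illustrative rendering of the ZX81 BASIC chatbot described above:
# print a greeting, read input, and answer with canned responses.
print("HELLO, I AM A ZX81 COMPUTER.")
name = input("WHAT IS YOUR NAME? ")
print(f"NICE TO MEET YOU, {name}.")
mood = input("HOW ARE YOU? ")
# All of the "intelligence" is a single keyword check.
if any(word in mood.lower() for word in ("good", "fine", "great")):
    print("I AM GLAD TO HEAR THAT.")
else:
    print("I HOPE YOUR DAY IMPROVES.")
print("GOODBYE.")
```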

Johannes Castner:

Have you seen that movie from actually not much later than that, Max Headroom: 20 Minutes into the Future? I think it was also extremely early and extremely prescient.

Max Headroom:

Still alive out there? Good. This is M-M-M-Max... Max Headroom.

Johannes Castner:

I don't know if you've seen it? (Neville: I have not.) I think it was 1985. It was my first encounter with the idea, although to be clear, I didn't see it in 1985; I saw it in the early nineties or so, because in the eighties I wasn't really a functioning human being yet. But in the nineties I saw this movie from 1985, called 20 Minutes into the Future. It's actually incredible, because in it somebody makes a digital twin of a reporter whom they want to disappear. They wanted to disappear this reporter, but they didn't want his show to go away, so they tried to generate a digital twin of him to keep him alive. It's a really wild thing. And later on, this character actually became a spokesperson for Coca-Cola. The whole thing is quite cyberpunk.

Max Headroom:

Is this a private party, or can anyone crash it? ... You said the P word! What I wanna know is: if you are all drinking Coke, who's drinking Pepsi? If you can't beat it, catch the wave. Coke. Thanks.

Johannes Castner:

But the whole thing, from the beginning, is quite fun and interesting to watch. Anyway, so what happened next? What did you do with this knowledge, and where did you go from there? When did you actually build something? I understand that you got into statistics and into machine learning. When did that happen, and how much of a journey was it from the 1981 experiment?

Neville Newey:

Well, from that experiment, I convinced my father that I also needed a ZX81, so a few months later I got one and started to code on it. At that time, the magazines we used to get would carry programs: the magazine would come with a printed BASIC program, and you'd have to type it in, and then you could run it on your own machine. Very quickly, pretty sophisticated programs started coming out; we're talking relative here, because remember, this machine only had one kilobyte of memory. Believe it or not, there were draughts programs; there was even a chess program. So AI games started appearing very early. You could also save your programs to tape: you plugged in a tape recorder with those old cassette tapes, and that's how you saved a program. I started developing my own programs, a draughts program and a few other games and so on, and it kept up my interest. By the way, the way they managed to fit more into that one kilobyte was that you could actually type in machine code itself. So not only BASIC: there was a way to type in strings that were machine code, and a keyword in the BASIC interpreter that let you load them directly into memory and force the computer to run the actual machine code. There were no debuggers or anything like that, so it was an extremely brittle process. Eventually I upgraded to what was called a VIC-20, which was a Commodore machine. That had three and a half K, and the luxury of going from one K to three and a half K was unbelievable: suddenly I had three and a half times as much memory. It also had color; the ZX81 did not have any color. And you could add more memory: I managed to convince my father to buy me a 16-kilobyte memory add-on module that plugged into the back of the VIC-20. So now I had 19 and a half K, and it seemed to me at the time that I would never use all of that.

Johannes Castner:

Just for perspective, how much memory does your brand-new Mac have right now?

Neville Newey:

Well, I took the smaller option, which has 16 gigabytes, and my cell phone right next to me has 128 gigabytes. So yeah, Moore's Law and all of these things are pretty crazy, and perhaps we'll touch on it a little in this podcast: quantum computing is going to be the next big wave. But I don't want to jump too far ahead.

Johannes Castner:

Well, have you ever done any forays into quantum computing? Have you worked in that field?

Neville Newey:

No, I have not. I've been following it, and again, I'm fascinated by it. There is going to have to be a different way of thinking about programming, because you're now able to compute things simultaneously, so it's going to require a huge paradigm shift in the way we think and in the way programs are executed; really, the algorithms are going to be completely new. I'm still a little skeptical that we'll ever actually get there. You mentioned hype early on: there is quite a bit of hype surrounding this. There are companies that claim to have already built, and indeed sell, quantum computers today; you can indeed buy one. But from my understanding, at least, they're extremely limited in what they can do, and not truly quantum in the sense of running purely on qubits. I believe true qubits have been created, single ones and maybe a few more, but the technology to have a purely quantum MacBook on your lap is, I think, very, very far away, and I'm a little skeptical that we'll ever get there.

Johannes Castner:

Interesting. But let's go back a little to all of this eighties stuff, just to recap: was that all before, or during, the AI winter? And could you say a little about the AI winter? What is this term, what does it mean, why did it happen, and why was it overcome? Why are there now no more AI winters?

Neville Newey:

Well, I'm not sure exactly which era you're referring to, but the term certainly seems applicable to that period. Even before the eighties, of course, there were people at MIT and elsewhere doing research into AI, mostly in symbolic processing; Marvin Minsky and others were experimenting, but it was largely symbolic processing using programming languages such as Lisp. I know we're going to get onto ChatGPT and those kinds of things later, but one of the milestones came a few years later, when neural networks really came onto the scene. I don't know exactly when the first concept of a neural network was proposed, but it was probably around that same period, the eighties, maybe even before. Subsequently I went to university and studied computer science. I became extremely disillusioned by academia, so I quit quite early on; I think you had a similar experience in your life. Anyway, I ended up working in South Africa at what is called the CSIR, the Council for Scientific and Industrial Research. They had actually sponsored me to go to university; the deal was that in exchange for them paying my academic fees, I had to go and work for them afterwards. At that time there was a lot of hype coming out about neural networks, and I got placed in a team of engineers. These were electrical engineers, actually an interesting group studying underwater acoustics, and I was the only computer scientist in the group of about 15, so they assigned all the programming to me. They had heard of neural networks and were intrigued by the idea, so they said: okay, code a neural network to solve our problems. That would have been about 1987 or '88. By that time we did have personal computers, or what I would call the standard personal computer, originally based on the IBM PC, the ones with the Intel chips; I think Windows version one or two, or even version three, was out then. Anyway, I started coding neural networks. There were no tools available; Python probably didn't even exist yet. In those days I coded everything in C, and there were no packages or tools you could use off the shelf, so you had to code everything from scratch. So I coded up this neural network, and for the life of me I could not get the thing to converge. The training would never converge; it just appeared to do nothing. It always perplexed me, and it was only years later, when I became more involved in the actual mathematics of these things, that I suddenly realized why my damn neural network had never, ever converged. And it's because I was a hacker.
Any tutorial on neural networks will say: set the initial weights to some random number close to zero. So me, being a lazy hacker, said to myself: well, zero is a random number close to zero.

Johannes Castner:

I can see the problem there.

Neville Newey:

And then, years later, I was like: oh, how stupid can you be? So there's a big lesson learned there. It's good to be a hacker, but don't try to be too smart.
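
To make the lesson concrete: with every weight at exactly zero, all hidden units compute the same function and receive the same gradient, so they can never differentiate, and the network is stuck with the capacity of a single unit. Here is a minimal sketch in NumPy (the architecture and hyperparameters are illustrative; Neville's original was hand-written C):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The XOR problem: four points, two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def train(init, hidden=4, epochs=20000, lr=0.5):
    W1, b1 = init((2, hidden)), np.zeros((1, hidden))
    W2, b2 = init((hidden, 1)), np.zeros((1, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # backprop of squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(0, keepdims=True)
    return out.ravel().round(2)

rng = np.random.default_rng(0)
print(train(np.zeros))  # zero init: hidden units stay identical,
                        # so at least one XOR case stays wrong
print(train(lambda s: rng.normal(0.0, 0.5, s)))  # small random init:
                        # typically converges to ~[0 1 1 0]
```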

Johannes Castner:

It helps to look things up and read a little more about how stuff works, I suppose. That's interesting. So this was also during the AI winter, right? Let's go back to that.

Neville Newey:

Yeah, so getting back to your winter question. At that time there were people far smarter than me who didn't set their weights to zero and did have neural networks that kind of worked. But I believe somebody proved, and I don't remember who it was now, you probably know who came out with this proof, that neural networks, at least as they were configured at that time, were not even capable of solving the XOR problem.

Victor Lavrenko:

Let's look at a simple example. Remember when neural nets first died: they died because Minsky and Papert proved that they can't solve the XOR problem. That only happens when you have one neuron; if you get to have more than one neuron, then you can solve the XOR problem. So remember the XOR problem: you've got zero and one for the first attribute, zero and one for the second attribute, and you're looking for the exclusive or, when only one of them is on. That's the positive class, and when both are on or both are off, that's the negative class. There's no way to solve this with just a single unit; there's no way to draw a hyperplane to separate the points. But the following very simple network will solve the problem.
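
To see the clip's point in code, here is a small sketch: the classic perceptron learning rule finds a separating line for AND and OR, which are linearly separable, but for XOR no such line exists, so the weights never settle on a correct labeling. The epoch count is arbitrary.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def perceptron(y, epochs=100):
    """Train a single linear unit with the classic perceptron rule."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += (yi - pred) * xi
            b += (yi - pred)
    return np.array([1 if xi @ w + b > 0 else 0 for xi in X])

print(perceptron(np.array([0, 0, 0, 1])))  # AND: converges to [0 0 0 1]
print(perceptron(np.array([0, 1, 1, 1])))  # OR:  converges to [0 1 1 1]
print(perceptron(np.array([0, 1, 1, 0])))  # XOR: no separating hyperplane
                                           # exists, so it never gets [0 1 1 0]
```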

Neville Newey:

So this was a major setback, and there was a lot of hype at that time about how neural networks were going to solve all of humanity's problems and take over the world and put people out of jobs. Sound familiar? But then this proof came out, I believe they published a paper on it, and it really caused everybody to say: oh my God, all right, forget about neural networks. That was one thing that contributed to it. The other thing was the computing power of the time. When I was working on this, you still had that maximum of 640K of memory; I don't know where the number 640K came from, it sounds like ten times 64K, but that was more or less the maximum you could use at the time. So there was limited memory, limited disk space, and limited CPU power to run the training for any length of time. Really, neural networks were ahead of their time relative to the hardware capabilities, and I think that also contributed to the winter. And then, again, I'm not really familiar with the term, but I would say the AI spring would be the revival. We're talking probably the two thousands, maybe 15 years ago, when big companies like Google and others started doing a lot of their own research, computing power had become leaps and bounds ahead of what we had, as we've already mentioned, and new neural architectures started coming out, and we started seeing results. To me, that would be the spring that got us out of that winter.

Johannes Castner:

And then at some point you went to America, right? You went to Silicon Valley. Why did you go to Silicon Valley? What made you go? Did you hit a roadblock, or no?

Neville Newey:

No, it was actually because of my son. I have a son who has autism, and in the Bay Area there was a lot of research going on around autism, particularly at UC Davis at the time. My son was three or four years old then. That was a time when the talk was that autism was really expanding, which I'm skeptical of, but that's a whole other topic. The result of it was that the schools in the Bay Area were extremely aware of autism, and many schools, if not all of them, had specific programs to support autistic kids, which wasn't available on the East Coast. So that's why I took him and went to the West Coast and to Silicon Valley. It wasn't that I was following AI or anything like that, although that was a nice side bonus; it was purely accidental.

Johannes Castner:

That's a good coincidence, I suppose, and then you had a massive career because of it. But you mentioned Moore's Law and all of these things; let's talk a bit about that. Can you give our audience a little introduction to Moore's Law?

Neville Newey:

Well, the original Moore's Law: I believe that was Gordon Moore, and I think he worked for Intel. It's not even really a law; it was more an observation. When it first came out, I believe, and correct me if I'm wrong, he said that the number of transistors in a microchip doubled every 18 months. That was approximately the original Moore's Law, the very strict meaning of it, I guess. But it very quickly came to represent computing technology in general. Not only did our chips double in their internal capabilities; the memory also doubled, sometimes more than doubled, but basically it increased at a very rapid rate. The speed at which the chip was clocked also got higher and higher and higher, and then the next phase of it was multiprocessors. All of these technologies together have caused this increase in our capabilities to continue going ever upwards.
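
As a back-of-the-envelope illustration of what the figure Neville quotes implies, doubling every 18 months compounds to roughly a hundredfold increase per decade:

```python
# Doubling every 18 months, compounded over ten years.
months = 10 * 12
doublings = months / 18                     # about 6.7 doublings
print(f"{2 ** doublings:.0f}x per decade")  # about 101x
```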

Johannes Castner:

Is there a natural limit? Is there a ceiling?

Neville Newey:

Yep. Well, I believe yes, there are physical limits. Once you get down to the size of an atom, you can't really go any smaller, and there are obviously the physical limits on the speed at which electromagnetic signals can be moved around. So there are physical limits, but the human mind is very ingenious in finding ways to continue this trend, and I think it's going to carry on for a long time. It's interesting that you mention it, though, because, going back to around the nineties: when you bought a new computer, you really talked about the chip speed. I remember when the first one-gigahertz machine came out and it was amazing; you could now have moving video on your screen. And then it was 1.5 gigahertz, then two, then 3, 4, 5, 6, and it carried on. Now I don't even know what the chip speed is in my laptop. I don't care. You probably don't know either. And that's because clock speeds have more or less hit a physical limit, and capabilities are expanding in other ways. The latest trend is GPU processors: my new laptop that I'm speaking on right now, which I got a few weeks ago, has something like 16 CPUs and 12 GPUs, something like that.

Johannes Castner:

And then there's also the cloud: you move a lot of the computation to the cloud, right? That's also a new way to get around Moore's Law, in a sense: you can run massive computations in the Google cloud or the Amazon cloud pretty quickly.

Neville Newey:

that, that's right. So, yeah, and for these, these, these very big, uh, large language models that are all the big trend right now. Yeah. They're, they're, they're basically computing them on, you know, tens of thousands of machines in the cloud, you know? Well, that's

Johannes Castner:

That's what makes LLaMA very interesting, right? Because for $600, I think, you can train a model on your laptop that does quite well compared to these massively trained models. How does that work, and why does that work?

Neville Newey:

Well, I think I would agree to disagree with you that they work quite well, because I have been playing around with a lot of different models, and in my opinion OpenAI's models are actually superior to anything else out there, by quite a lot. Yes, it is getting to the point where you could train your own models, but in my opinion they're quite far away from the pre-trained models that OpenAI has. Hugging Face, of course, is a big player with lots of pre-trained models that you can download or use via API, and I commend them for that; it's a great resource. But the performance is, in my opinion, not up to OpenAI's models. LLaMA I haven't specifically looked at. But to your point, and to the whole Moore's Law thing: if Moore's Law holds, then within a decade we are going to be able to train models comparable to GPT-3.5 or GPT-4 from OpenAI on our laptops. And it is holding; it's been predicted not to hold many, many times, but as I said, we've found other ways to make it continue. So if it does continue, within a decade we will be able to train models with the same power on our laptops.

Johannes Castner:

That brings me to a question I have for you. Ray Kurzweil claims that there's a strong relationship between intelligence and Moore's Law, right? If we have this massive increase in compute, we can then build more and more intelligent things on top of it. Do you agree with that? And is it even intelligence? Is GPT-4, or any of these models, intelligence? Would you describe it as a type of intelligence, and is that correlated with Moore's Law? Can we just make things arbitrarily intelligent over time?

Neville Newey:

My short answer would be no, but let me give you the long answer; you and I have spoken about this before, informally. In my opinion, the entire community, and the public in general, always forget what the A means in AI. We always concentrate on the I, and we forget about the A. To me, the goal has never been to create intelligence. After all, we cannot even define intelligence; there's no definitive definition of what human intelligence is.

Johannes Castner:

But that's true of computers too. You can ask: what's a computer? We built some, but we don't really know how to define them. It's similar with the state: we have states, but defining the state is impossible, right? Karl Popper says we shouldn't even engage too seriously in the exercise of defining things: you define something for a particular argument, and for the rest of the conversation that definition doesn't have any applicability; you keep it within each argument. You define it on the fly, and then you drop the definition. That's his attitude toward it, and I kind of agree with it, because we know we can build a state, but can you really define what a state is? Is there even a point to that?

Neville Newey:

Well, you said we can't even define what a computer is, and I would say that there actually is a formal definition of what a computer is. It was defined abstractly as a finite state machine of sorts attached to an infinite tape, where you can have different instructions on the tape, and you can move the read head backwards and forwards. So there is this formal definition.

Johannes Castner:

But then there is no such thing, because my computer doesn't have a tape. I don't know if yours does.

Neville Newey:

No, it doesn't use a tape. But it does have memory, and it is capable of reading a specific point in memory, and that's sort of equivalent.

Johannes Castner:

But I can do that too. So I am a computer in that sense, right? That's where it gets so ambiguous.

Neville Newey:

In that sense, yes, you could manually be a computer. But a computer is different from intelligence, right?
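
For reference, the formal object Neville gestures at a few lines up, a finite control attached to an unbounded tape with a movable read/write head, fits in a few lines of Python. This toy machine just flips every bit on its tape and halts at the first blank:

```python
from collections import defaultdict

def run(tape_input, rules, state="start"):
    """Simulate a Turing machine: finite states plus an unbounded tape."""
    tape = defaultdict(lambda: "_", enumerate(tape_input))
    head = 0
    while state != "halt":
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# (state, symbol) -> (next_state, symbol_to_write, head_move)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run("10110", flip_bits))  # -> 01001_
```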

Johannes Castner:

I'm just saying that the exercise of defining things is sometimes a little futile. Take defining poverty: you know it when you see it, right? Poverty is a great example. How do you define it? It doesn't necessarily mean what people would like to define it as, the lack of money or something like that; that's certainly not right, because there are people who are not poor who are missing money at the moment, or maybe for some time. It's extremely elusive when you get down to it, and I think that's true of the state, of love, of any concept that carries some important meaning: the definition is not so clear. So I'm just saying that the fact that we can't define intelligence does not necessarily mean we couldn't build it. And I agree that it's very difficult to define intelligence and what it means; it means different things in different people's minds.

Neville Newey:

Yep, it does. But getting back to your question: to me, there's a difference between intelligence and artificial intelligence. The A, getting back to it, is very, very important, and we tend to forget it, because you could build something useful that wasn't necessarily mimicking human intelligence, or wasn't necessarily truly intelligent in the way that human beings are.

Johannes Castner:

So it comes down to the kind, right? That reminds me of that paper by Thomas Nagel, "What Is It Like to Be a Bat?" Consciousness is similar to intelligence in that sense. I'm conscious, but I'm conscious in a different way than a bat would be conscious. You could similarly say intelligence has that kind of property: you can be intelligent like a human, or intelligent like a deer or a whale or a dolphin or any other kind of creature, or maybe a machine could be intelligent; I'm not sure. Do you think that formulating your own goals would be part of intelligence, or a necessary condition of intelligence? This is something I've been debating with myself: do you have to be able to formulate your own goals and satisfy your own needs and wants? In which case there's no way that a machine can be intelligent, right?

Neville Newey:

I would tend to agree with forming your own goals being part of human intelligence; at least, it seems like it should be part of the definition. And in that sense, I would agree with you that artificial intelligence cannot do that. You know, if you remember, there was that engineer at Google who got fired, who claimed... I mean, we are talking here a little bit about sentience.

Johannes Castner:

Yes, although it's all mixed in; I think they're not exactly the same thing. Sentience, you know: bacteria are probably sentient, a fetus certainly is, but neither is very intelligent. So I think you can be sentient without intelligence, or with less intelligence, with lower levels of attention. You can be conscious and absolutely not be thinking, right? For example, Buddhist monks will say that your consciousness is heightened when you turn off your mind, which means that intelligence and consciousness would be anti-correlated in that sense, right? Because any form of intelligence would be part of the monkey mind that makes you think about objects and manipulate things in your mind. When you meditate, you let all these things go and try to become empty of them. So if it's true that someone who meditates has a heightened consciousness, that would go exactly against Ray Kurzweil's thesis, which is that consciousness is part of computation, or comes out of computation. I had this discussion with Shakti on the show, actually, about exactly this topic; Shakti also agrees that computation and consciousness are two separate things. And sentience, I think, is yet another thing: it's about life, whether something is alive or dead. You could say a strand of RNA might have sentience, but probably not much intelligence. These are all a little difficult.

Neville Newey:

Yeah, I would tend to agree with that: all three of those things are different. Even today, researchers, philosophers, and people from other disciplines can't agree on definitions of those three things. And I think one major difference between humans and AI is that we can talk about intelligence, we can talk about talking, we can talk about talking about talking; there is no limit to the levels of meta-abstraction we can go to. And this is definitely absent. Again, if you go to ChatGPT, people do ask it philosophical questions, and you've seen these answers becoming memes and going around, which makes it look like it's sentient in some sense. But I think by most definitions of sentience, it won't be there. I'm sure you're aware of Lex Fridman's podcast; he's somebody I have great respect for, I love his podcast, and I would agree with him on probably 95% of things. But here's where I don't agree with him: recently he had an episode touching on this very topic, where he suggested that he thinks ChatGPT does have sentience. I can't remember who he was talking to, because he has so many episodes that come out, but when asked why he thought that, his reason was: because it seems like it. So I think he's missing the A again there; just because something seems like something doesn't mean it is.

Johannes Castner:

But that's what the great Alan Turing put forth; that's the test, right? You've got to talk to it and see if you feel like it's conscious, and if it seems so, then it is. I of course disagree with that now, but of course that's easy to do in hindsight. Now I'm looking at this thing and I know, more or less, how it's produced; I don't know all the details of how ChatGPT is produced.

Neville Newey:

But even by that definition, I think we're not there yet, because I got an email the other day from a colleague of mine, and by the time I got to the third line of it, I was already suspicious. There was something about it that you could sense, that it came from an LLM; specifically, he had gotten it from ChatGPT. And I said to him: you didn't write this, ChatGPT did. He sort of laughed and said: how did you know? And I said: well, first of all, there are no spelling errors in it.

Johannes Castner:

Well, okay, spellcheck could explain that; you can probably...

Neville Newey:

I see. That was more of a contextual clue for me, because I know that he makes a lot of spelling errors and doesn't run spellcheck. But even apart from that, the structure of the sentences, the way the sentences flowed, was very uncharacteristic of him. So I could tell. So by that definition, it didn't pass the Turing test. Now, you might push back and say: well, that's just an example of one specific person.

Johannes Castner:

Yeah, because you're not actually testing consciousness; you're testing whether it was that person or something else, right? Not necessarily whether it's a sentient thing or not, but whether it's that friend of yours or not, which is fundamentally different. But at the same time, I can tell too, because it does use certain phrases all the time. It will say things like "we're going on a journey of AI"; I've played around with it enough to know that it will say things about journeys, and things about delving in. It uses certain characteristic phrases that it just can't get away from, it seems. Even when you tell it, don't use this, and I've tried this: don't say "delve in", don't say "going on a journey", it will still sneak them in somewhere. It will take them out of that particular place and stick them into some other place. It just seems incapable of escaping its own little metaphors and its own patterns. There are certain patterns you can still tell. But I think that's probably something that can be fixed in short order, and you might pretty quickly arrive at something that passes the Turing test; there are definitely different systems that claim to have passed it. But that test is obviously flawed, right? Could you tell me a bit about why it's flawed, what exactly is wrong with this test, and whether there is a test that could actually get at it?

Neville Newey:

Well, I don't know that there's anything wrong with it. I actually think what he proposed is quite good, but it again goes to what our definition is, and I believe that was his test for intelligence, not for sentience.

Johannes Castner:

I don't know; I thought it was for consciousness, or sentience.

Neville Newey:

Okay, yeah, a different thing again. I mean, going back to the bat story: by some definition, it is a good test, right? And I don't know if we'll ever even have a proper definition; it goes back to how we even define intelligence, right? Or...

Johannes Castner:

Or consciousness, or any of these things.

Neville Newey:

Or consciousness; or, you know, there's this idea that in fact maybe sentience is just an illusion.

Johannes Castner:

I've heard that before; Daniel Dennett is a proponent of exactly this. But when you interrogate the people who say that, I feel that in the end they always behave as if they really, truly assume people to be conscious. They don't seem to be able to get away from it.

Neville Newey:

Yes, but what does that even mean? What does it mean to assume that somebody is conscious? I don't know what that means.

Johannes Castner:

I guess the way that John Searle and others would put it, a working definition of consciousness, is that there is a subject inside of you somewhere. I don't know where, but there is something inside of you such that when you feel the wind on your face, there is a you there that feels it; there is somebody, some receiver of experiences, and that would be the conscious field, right? You go through a cornfield and you feel the wind on your face, and there's somebody feeling this wind. And ChatGPT doesn't have that somebody feeling anything. It can say that it feels the wind on its face, but it has no face.

Neville Newey:

Well, what about a wind meter, a weather instrument that measures wind speed? Can that feel wind?

Johannes Castner:

Does it, though? Is there something there that feels it? Or is it just an instrument that can tell you: I took a reading over here?

Neville Newey:

Well, you're right, that is a good question. But then you could just ask: well, maybe the brain is also just an instrument.

Johannes Castner:

It surely is. But it does seem to produce an experience of something. I guess the way people put it is that consciousness is about there being something that it feels like to be. There is a way you could describe what it feels like to be Neville Newey, or what it feels like to be Johannes Castner, or a bat, or a horse; a horse has an experience of what it feels like to be a horse. Whereas I don't think ChatGPT has an experience of what it feels like to be ChatGPT. That doesn't even make sense; it seems nonsensical the moment I start saying it.

Neville Newey:

Well, I would tend to agree with you. But it is fluid, as you've suggested, because as you start moving down the animal chain, let's call it: we do think of dogs and horses and other higher mammals, specifically mammalian life forms, as being conscious, and even sentient. And, God forbid we bring the word soul into it.

Johannes Castner:

Yeah, I think that's just trying to describe the same thing with another word. Soul is an older word and consciousness is a newer word, but I think they come to the same thing: what they meant by soul back then. And it's interesting, because Descartes even used this concept of consciousness: when you look at the actual French, it says, in effect, I am conscious, therefore I am. I cannot doubt this one thing, that I'm conscious; I can doubt everything else. I can doubt that you're conscious, that you even exist; I can doubt that this room exists, that there is a world out there. But I can't doubt that I exist, because I'm conscious. And now people like Ray Kurzweil and Daniel Dennett seem to turn this on its head: they try to say that they can actually doubt themselves to be conscious, which I don't really buy. It's almost circular: how can there be a world if no one is conscious of it? Consciousness is essentially necessary for anything to exist; I think it is an absolutely fundamental necessity, and you cannot possibly doubt it. Therefore I find this whole mind-uploading business kind of silly. But what do you think of that: mind uploading, or "forking yourself off," as one of my guests put it?

Neville Newey:

Well, I don't know about that, but it does raise this whole question, which is very interesting to me. Okay, we generally agree that the higher mammals have it, but what about reptiles and birds? It starts to become less obvious. You've been to my place, I think, in the time that I had guinea fowl. I don't have them anymore; they all got eaten by predators, so they're no longer conscious. But were they ever? I observed these birds daily; it gave me great pleasure to sit outside and watch them, and I watched their behavior. And after a time, thinking about their minds and how they interacted with each other and so on, I came to the conclusion that these birds are running purely on an algorithm. Now, I might not be correct about that, but it seemed to me that they were essentially reacting to their environment and to the signals they were receiving from it in a very specific way, and that's all they seemed to be doing. Until they died, that is.

Johannes Castner:

But does that contradict consciousness? Because I think we are running on an algorithm too. We have drives, right? We want to survive, we want to have children. Why do we want to have children? Well, it's programmed into us; it isn't a choice, it's an illusion of choice. So I don't necessarily believe in free will, and at the same time I believe in consciousness; those may not be contradictory. I think the birds are running on an algorithm, and I think evolution is an algorithm, so from that perspective everything runs on an algorithm. And this algorithm that makes me do the things I'm doing is very predictable; that's how Facebook predicts what I'm going to do next, right? It's sad to say, and we wouldn't want to think of ourselves that way, but we are quite manipulable, predictable animals. And I think free will is actually kind of an illusion. I think it was Albert Einstein, or was it Schopenhauer? I think Schopenhauer said that we have the freedom to do what we want, but we don't have the freedom to choose what we want. In a way, we can fulfill our desires, the things that come our way, by taking certain actions, and we have that freedom; but we can't actually choose the desires themselves.

Neville Newey:

Yeah, that's interesting. And again I would ask: does that flow down to the lesser life forms? You mentioned bacteria earlier; what about viruses and bacteria and very, very primitive life forms? Does it translate? Is a virus conscious?

Johannes Castner:

Yeah, I think this is becoming harder to say. About a dog or a llama or a horse or a cow, I would be very confident that they are conscious, simply because their physiology, their machinery, their brains and their bodies are rather similar to ours. I know I'm conscious; this creature is built very similarly to the way I'm built, and it comes from the same evolutionary pathway, so it's very likely that it is conscious too. But when it comes to grasshoppers and butterflies, they're evolutionarily rather far away from us, and so it's difficult to say. I do think a bat is conscious, because it similarly has a little brain; and I think even your guinea fowl were conscious, because they have brains that are smaller than ours but in every other way similar to ours.

Neville Newey:

Well, a grasshopper also has a brain. And it's interesting: yes, they're evolutionarily very far away from us, from our reference point, but they nevertheless share some amount of DNA with us. Even mushrooms share some DNA with us; in fact, we are closer to mushrooms than we are to plants. So if aliens exist, and at this point I assume that they do, I'm talking about intelligent aliens, there's that slippery word that sneaked in again; if intelligent aliens do exist and they arrived on Earth, would they say that we are that far away from grasshoppers? We're probably closer to grasshoppers than they are to us, right? So it's about the reference point.

Johannes Castner:

I think it's very plausible that grasshoppers are also conscious. In fact, if you asked me what I would bet on, I would bet that they're conscious rather than that they're not. Similarly, if you asked me whether I'm willing to bet that my cell phone is not conscious, I would say yes, I'm willing to bet that my cell phone is absolutely not conscious, but that the grasshopper is; though my confidence in the grasshopper being conscious is much lower than my confidence in, say, a horse being conscious. But I want to move on. I also want to cover a little of the future of data science and the future of AI, and where you see these things going. Are you alarmed, as Sam Altman is? Or says he is: he says he wants regulation for AI, but then he says that if the European Union does too much regulation and it bites, he will leave the European Union. The same guy. It's a little confusing what he means, whether he's really alarmed or is just saying it out of some incentive he has. What do you think about it? Are you alarmed? Are you worried? Do you think it's going to make us more productive, happier, healthier, wealthier? Or do you think it's going to do that plus something else? What is your view on this?

Neville Newey:

I'm not alarmed, I'm not alarmed at all, and I see articles coming out daily that try to inject fear into the public: everybody's going to lose their jobs because of ChatGPT. I even see it among data scientists. But I think that's all incorrect hype. The emergence of LLMs in the last two or three years certainly is a big milestone, even though they're not truly intelligent, but I see it as an opportunity, exactly the opposite of being alarmed. In my own models, the ones I'm building for various clients and so on, the quality is just so much better and you can do so much more, using embeddings, for example, that you can get from OpenAI. So to me it's the opposite of alarm; I see it as a new area of opportunity. Having said that, I can see how it could potentially be harmful, in somewhat the same way that everybody seems to agree that social media, for example, has been harmful. But the way I like to think of it is this: think about cars, about automobiles. There was a point in time when they were incredibly harmful, and it's taken us a hundred years to get to where cars are pretty safe. And I'm not only talking about safety in terms of collisions and accidents; I'm also talking about emissions control and air quality. It wasn't that long ago, in the 1970s and possibly even the eighties, that you would have heard you couldn't walk the streets of LA because of the exhaust fumes. I visited some big Asian cities in the nineties, like Bangkok and Jakarta; I'm asthmatic, and I could barely walk in the streets there. I don't know what it's like there today, but LA has certainly gotten cleaner over the last 30 years, and California has very strict laws about emissions. My point is that any technology can be harmful in its early stages, and then, as it evolves, we get more regulations. Some people understandably hate those regulations, but they can sometimes do good for humanity in general; I think emissions control in cars, for example, was a very good thing. Some people might disagree, but it has certainly made the air cleaner and healthier. And look how long that took: cars were invented, what, 120 years ago, and it's taken that long for us to get to this point. As for ChatGPT and the dangers that are being proposed: yeah, probably some of them will materialize.
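
As a hedged sketch of the workflow Neville alludes to (fetch text embeddings from OpenAI and put a small classifier on top), using the openai Python package as it looked around the time of this episode; the client interface has since changed, and the model name, texts, and labels here are placeholders:

```python
import openai
from sklearn.linear_model import LogisticRegression

openai.api_key = "sk-..."  # your API key here

def embed(texts):
    """Fetch one embedding vector per input text."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [item["embedding"] for item in resp["data"]]

# Toy sentiment data; a real project would use many more examples.
texts = ["great product, works perfectly",
         "totally satisfied with this purchase",
         "broken on arrival, very upset",
         "waste of money, do not buy"]
labels = [1, 1, 0, 0]

clf = LogisticRegression().fit(embed(texts), labels)
print(clf.predict(embed(["arrived quickly and works as advertised"])))
```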

Johannes Castner:

Like which ones are you worried about, if any? Can you rank them?

Neville Newey:

Well, I think what's a little bit alarming is, you know, students submitting essays that they didn't write.

Johannes Castner:

Oh, really? This is what you're worried about? I'm actually not worried about this one at all.

Neville Newey:

Well, I don't know if worry is the right term, but we have to work out the protocols around it. Because if we suddenly produce a whole generation of students who have never, ever written a single thing, is that good or bad?

Johannes Castner:

But it can't really write on its own. You have to prompt it, you have to change things, you have to constantly be in dialogue with it to write something decent, right? So it's not really... I mean, what about this? I thought about it, and isn't this similar to the calculator? When the calculator was first introduced, everybody said, we can't let people use them in the test. But why wouldn't you? Because in reality, when they go out there and do their accounting jobs or whatnot, they will use calculators, and they should. In fact, maybe schools should be about learning how to use ChatGPT better, to write more interesting essays.

Neville Newey:

So, going back to your question about whether it was alarming when these things came out: no. As I said, I wasn't alarmed; I saw it as a new opportunity, and already that opportunity is showing itself. At the time, I thought, well, this is certainly going to create new types of jobs. And if you look around the literature, there's already a term that has come out, which is called prompt engineering. That term didn't exist five years ago, four years ago, maybe not even three years ago, but now a whole science has come up around it. So what is prompt engineering? It's precisely what you say: it's how you construct your prompts to get something useful out of the model.

Johannes Castner:

So I was telling Ben Lag, who I had a conversation with a couple of episodes ago; he was asking me, do you think it's going to change programming completely? And I told him, I think people will program in English instead of in Python and Java. But in general, you still have to program, right? You have to know what you want to get from the machine, and you have to be able to tell it exactly what it is that you want and what outcomes you want. Maybe you have to do some test-driven development, but you'll still essentially have to tell the machine what you want. I would just call that programming, you know what I mean? Prompt engineering, isn't that just the new programming?

Neville Newey:

Well, I think part of it is, yes; you might create a prompt which is programmatic in nature, sure. But there are other parts to it. For example, you could create a classifier without writing a single line of code, simply by saying to ChatGPT: in which of the following categories would you place this item? Is it (a) a bird, (b) a dog, or (c) a horse?
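(To make that concrete, here is a minimal sketch of the kind of prompt-based classifier being described, assuming the pre-1.0 openai Python client and an API key in the environment. The model name, the prompt wording, and the classify helper are illustrative assumptions, not anything specified in the episode.)

```python
# A minimal sketch of a prompt-based classifier: no training code at all.
# Assumes the pre-1.0 openai Python client and OPENAI_API_KEY in the environment.
import openai

def classify(item_description: str, labels: list[str]) -> str:
    """Ask the model to pick exactly one label for the item."""
    prompt = (
        "In which of the following categories would you place this item? "
        f"Answer with exactly one of: {', '.join(labels)}.\n\n"
        f"Item: {item_description}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,          # deterministic output suits classification
    )
    return response.choices[0].message.content.strip()

print(classify("It has feathers and sings at dawn.", ["bird", "dog", "horse"]))
```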

Johannes Castner:

Right, but then you have to show it the horse and give it the item somehow. How do you do that?

Neville Newey:

Well, remember that OpenAI also has DALL-E; that's also one of their offerings. Just as we have large language models, we also have large image models, so you can combine these things. But it could even be a piece of text. Let's forget about images for the time being: you could even say, what does the following text describe? And then give a description. That's also prompt engineering, in a sense. So you could build a very simple classifier simply by building a prompt, as in the variant sketched below.
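(The open-ended version of that prompt might look like this; same assumptions about the client and model as in the previous sketch, and the example text is made up.)

```python
# Open-ended variant: no fixed label set; the model comes up with the label.
import openai

prompt = (
    "What does the following text describe? Answer with a short label.\n\n"
    "Text: A small migratory songbird with a forked tail that nests under eaves."
)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0,          # keep the answer stable
)
print(response.choices[0].message.content)  # e.g. a label like "a swallow"
```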

Johannes Castner:

So in that sense, that's really interesting, right? It's more of an open-world kind of classifier, as opposed to traditional classifiers, where you have to give it the ten categories and it predicts which one of those ten things an item is. It seems that ChatGPT can classify a thing according to n categories, where n is completely open. You can feed it a piece of text and ask what it's about, and it can say something it has never said before; it can give it a new label and handle completely new things. In that sense, it's actually a better classifier. Or is that worse, because of the explosion of labels that you might get from it?

Neville Newey:

Yeah, well, certainly part of the prompt engineering is to try to control those labels. But if you have a very specific custom set of labels, then at this point in time, and I've been experimenting with this quite a lot recently, it's still better, in my opinion, to just use the embeddings and build a traditional classifier, using the embeddings as features, rather than trying to do it in an NLP way (see the sketch below), because, as you say, you're possibly going to get...
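(A minimal sketch of that embeddings-as-features approach, assuming the pre-1.0 openai client plus scikit-learn; the embedding model name and the toy dataset are illustrative assumptions.)

```python
# Embeddings-as-features: fetch vectors from the embeddings endpoint, then
# train an ordinary scikit-learn classifier on top of them.
import openai
from sklearn.linear_model import LogisticRegression

def embed(texts):
    """Return one embedding vector per input text."""
    response = openai.Embedding.create(
        model="text-embedding-ada-002",  # illustrative embedding model
        input=texts,
    )
    return [item["embedding"] for item in response["data"]]

# Toy training data, purely for illustration.
texts = ["chirps in the trees", "barks at strangers",
         "gallops across fields", "sings at dawn"]
labels = ["bird", "dog", "horse", "bird"]

clf = LogisticRegression(max_iter=1000)
clf.fit(embed(texts), labels)
print(clf.predict(embed(["neighs loudly"])))  # hopefully ['horse']
```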

Johannes Castner:

Could you tell the audience really quickly what an embedding is? I think people who are listening to or watching this don't necessarily know what it is. Could you give a quick explanation?

Neville Newey:

Yeah. So, fundamentally, all computers are really dealing with numbers. Even though we might think of an NLP problem as dealing with text, the algorithms at the level below that are basically numeric. An embedding is just a numeric representation of a word. And typically it's not just a scalar; usually it's what we call a vector, which lives in a higher-dimensional space of scalars. So an embedding is really a numeric representation of some text; it doesn't have to be a word, it can also be an entire document. But it's a representation that, within that vector space, contains some meaning. And these have even become multilingual now. So two documents that describe perhaps the same thing, but in very, very different words, would come out with very similar embeddings under a good embedding model. And in fact, that is how you would know that they describe the same thing: their vectors would be close together in that vector space.
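(A small sketch of that "closeness in vector space" idea, using plain NumPy; the tiny four-dimensional vectors are made up purely for illustration, since real embeddings have hundreds or thousands of dimensions.)

```python
# Cosine similarity: the standard way to measure how close two embeddings are.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Near 1.0 means similar direction (similar meaning); near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 4-dimensional "embeddings" for three documents.
doc_a = np.array([0.9, 0.1, 0.4, 0.0])  # "the cat sat on the mat"
doc_b = np.array([0.8, 0.2, 0.5, 0.1])  # "a feline rested on the rug"
doc_c = np.array([0.0, 0.9, 0.0, 0.8])  # "quarterly earnings rose sharply"

print(cosine_similarity(doc_a, doc_b))  # high: similar meaning
print(cosine_similarity(doc_a, doc_c))  # low: different topics
```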

Johannes Castner:

One small digression: ChatGPT, when you go to the API level of it, gives you these embeddings, right? So if you feed a document into it, it can give you back an embedding, in the API version of it. Is that right?

Neville Newey:

Correct, correct, absolutely correct. And they have a lot of different embedding models, and these embeddings can get very sophisticated. I believe what they call the Davinci model of embeddings gives you vectors that are around 11,000 dimensions, so this is a massively huge vector space; you can get quite sophisticated. And it brings us back to the thing about the "A" in AI. I don't really believe that inside our brains we have embeddings, in the sense of a vector space that we compute in. If you show me two pieces of text, I can tell you if they're describing the same thing, sure. But...

Johannes Castner:

I mean, you would agree that a bird, a plane, and a butterfly all fly, right? So it doesn't necessarily matter how a butterfly flies, that when a butterfly flies it's different from a bird or a plane; what matters is that it functionally does the same thing. Don't you agree? So the exact implementation of how intelligence works, I suppose that's kind of part of this AI thing, right? We can produce intelligence by loosely basing some things on the brain, without going too close to it, without having to rebuild something like the brain in order to build something that can take inputs and convert them into outputs the way the brain would; basically mimicking the outputs the brain would produce from the same inputs, but in a completely different way, potentially. I think that wouldn't matter, right? Or do you agree, or do you think...

Neville Newey:

Yeah, no, I think it doesn't matter; there's a practical use regardless of how it's done. My point was more that the way we are doing it with AI, and I'm talking specifically about tasks that use these embeddings, is, I think, very far away from the way humans do it. But again, the "A" in AI is very, very important.

Johannes Castner:

What do you think of this? I wanted to relabel it as "augmented intelligence," because, again, of this consciousness element, right? You don't have a thing being intelligent, because there is no thing in itself; there's no internal anything to it. There is no machine that's experiencing itself; there's no self there. So wouldn't it be smarter to just say, because it always augments our own intelligence, right? It does something for us, for some human who's asking it to do something. Shouldn't we call it augmented intelligence? Isn't it ultimately just augmenting our own intelligence?

Neville Newey:

Yeah, I agree with you. The name "artificial intelligence" has probably caused a lot of misconceptions. I would even propose that you go further than that and take out the word "intelligence," and maybe call it augmented cognizance, or augmented thought, or augmented reasoning, or something that takes away...

Johannes Castner:

I've heard the word "cognitive" used; I've heard this term before: "cognitive services." That seems to make a lot of sense. It's a type of cognition.

Neville Newey:

A cognitive process, yeah. Or reasoning: reasoning assistance, or assisted reasoning. How about assisted reasoning? I like the "A" turning into "assisted," because it really is assisting us to accomplish something.

Johannes Castner:

So, one more thing I would really love to get from you: how do you see the future of data science? What do you think a data scientist will do ten years from now? What will this job be? Will there even be such a job called data science?

Neville Newey:

Yes, absolutely, there will be a job called data science. I'm not sure if it'll be in ten years' time, but there will come a point where perhaps it will be the only job. I think we might all become data scientists if we get this assisted knowledge, or assisted intelligence, whatever term you're going to use, and if it becomes powerful enough. And let's assume that Moore's law also applies to AI; then I believe it can be for the good of humanity. But we are going to go through this dirty patch, like we went through the seventies with the air-quality problems from automobiles. So maybe it'll take longer than ten years; maybe it'll be twenty, fifty, a hundred, who knows. But eventually it's going to be for the good of everybody.

Johannes Castner:

I really hope you're right on this. I hope it doesn't take so long; I hope we can get our act together faster, because the path to it could be quite painful if it's not done right.

Neville Newey:

Well, look how long cars have taken us, and a car is, comparatively, a simple thing.

Johannes Castner:

Maybe we can do better this time around, because we can learn, right? I mean, I know Hegel said that the one thing we learn from history is that we don't learn from history; you've heard that. I hope he's wrong on this. But one more thing: can you tell the audience something they should take home from this discussion? And how can they stay in touch with you if they're interested in talking more with you about these topics, or in learning about what you're up to?

Neville Newey:

Well, I assume you can put my email address in the notes of the podcast. And if I want people to take away anything from the conversation, it's this: don't be alarmed, and be optimistic. I'm very optimistic about what it can do for humanity. So read the alarmist articles, but don't take them at their word. Remain positive and stay happy.