Utopias, Dystopias and Today's Technology

ChatGPT and Beyond: Unpacking the Potential of Large Language Models

March 30, 2023 Season 1
In this episode of Utopias, Dystopias and Today's Technology, host Johannes talks to data scientist Alan Tominey about ChatGPT, GPT-3, GPT-4, Meta's Galactica and other large language models. They discuss Noam Chomsky's critique of these models and whether they can generate content grounded in lived experience, but also explore the potential business opportunities and use cases for such models. Alan shares how he uses ChatGPT to generate potential beer recipes and name lists, among other things. Whether you're interested in AI and language models, or curious about how they can be used in practice, this conversation is a must-listen.



Johannes Castner:

Hello and welcome. My name is Johannes and I'm the host of the show. Today I am here with Alan Tominey, an accomplished data scientist and development engineer at Petroleum Experts. He is an experienced and creative petroleum engineer, software development specialist, and data scientist with a demonstrated history of delivering value to the oil and energy industry: a broad range of experience from machine learning and AI to reservoir engineering, field management and optimization, process and flow assurance, moving up the value chain to developing and deploying the digital oil field, plus a rigorous academic background and over a decade of mentoring and training other scientists and engineers. Hello, Alan. How are you today?

Alan Tominey:

Hello. I'm fine, thank you.

Johannes Castner:

Great, great. So let's get into this really interesting topic that we have for today, which is large language models, of which the very famous ChatGPT that everyone is chatting about is a part. So let's get into that, and let's start, I guess, with an overview of what large language models are to begin with. Then we can differentiate a little what ChatGPT is, because it turns out it's not quite only that, so we'll get into that. So yeah, if you could give us a little introduction to the topic of large language models.

Alan Tominey:

So, yeah, basically a large language model is effectively where AI is at the moment when it comes to interacting with the real world. There's a great schematic diagram you'll see where data science is a big blob, inside of data science you'll see machine learning, and inside of that you'll have AI. Large language models are kind of another dimension on that, because effectively they sit inside the AI bubble, but they are, largely speaking, software. The reason I say software is because they differentiate from simply being a quite dense neural network that can work out some text or differentiate between some categories or some images. They have logic in the background that takes the output of these AI models, or deep learning models, and uses it to reconstruct something on the other end and spit it out. So it's a software-developed AI system. And there have been a number of them for a number of years. ChatGPT is obviously the big one that everyone's really excited about and talking about, and it's been creating column inches for months now. But for some time a lot of companies like Meta and Google have been building them, and there are open-source ones too; GooseAI have been doing stuff like that for a while. They're really at the forefront of what people are beginning to see and interact with when it comes to AI, and people are (a) getting really excited about using it and (b) a little bit scared about what it means for us all going forward. I think we'll probably end up talking a little bit about those topics.

Johannes Castner:

So, let me just ask, can we go one step deeper and look into the workings of that? How does such a model work?

Alan Tominey:

So the basics of the model are effectively that you have an API interface that will interpret your text, your input. Usually that will be through some sort of NLP, which is natural language processing, so there'll be a text entry point. That then moves into an algorithmic background where effectively you're going to have some methodology for searching and understanding what's going on, in a database kind of format, but you'll also have another part that is generative. The generative part can be quite complex in some cases: you can have a generative algorithm that will basically create text based on what you input. So there'll be a natural language processing part, the sort of UI part of it, where it interprets what you want, and that then categorizes what it thinks you want. Say I say, I want you to write me a story about kittens. Basically it's going to say, okay, story, kittens. It knows what a story is, so it's going to say, right, that's going to be a bunch of text, and the subject for the generative part is kittens. The keyword identifiers in there will spin up and say, right, this person wants a large amount of text about kittens and doesn't want a picture of a kitten made out of text. That's where the AI part comes in, to try and understand what it is the person wants. Then the generative parts can be quite complex: it can begin to look for and understand what there is to do with story structure, whether you want characters and all this kind of stuff. A large amount of that is based on how it's been generated, trained and created, and what it's been allowed access to during its development phase. Has it been allowed access to a database of books by a bunch of gothic writers, or a bunch of kids' authors and kids' stories? It'll basically create a story based on what you wanted and spit it out the other end. So it has to come back around and print you out a story based on what you put into it. There are lots and lots of layers and levels in these models that basically create this sort of categorical sense, and then it spits out the numbers at the bottom of it, basically.
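
For anyone who wants to poke at this prompt-in, generated-text-out loop directly, here is a minimal sketch using the open-source Hugging Face transformers library and the small public gpt2 checkpoint. This is only an illustration of the idea being described; it is not the model or the stack behind ChatGPT.

    # A minimal sketch of prompt-driven text generation (assumes the
    # Hugging Face `transformers` package and the public gpt2 model,
    # not the actual ChatGPT stack).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Write me a story about kittens. Once upon a time,"
    result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)

    # The model continues the prompt one token at a time and returns the text.
    print(result[0]["generated_text"])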

Johannes Castner:

It's a type of transformer, right? If I'm correct about it.

Alan Tominey:

Yeah.

Johannes Castner:

Or at the heart of it. Part of it,

Alan Tominey:

Yeah, yeah. At the bottom end of it, it is a large, massively parallelized process of number crunching. All the pieces of text will have a numerical format, and effectively there'll be a large bank of GPUs somewhere processing a bunch of this data and turning it into what you see when it comes out on your screen.

Johannes Castner:

So is it true to say that, I suppose, it's sort of like one of those models that predicts one word at a time? Or does it predict phrases at a time? It is a form of predictive model, right, if I understand it correctly, so that it predicts what comes next. Is that right? Is that how you would characterize it?

Alan Tominey:

Yeah, so a lot of these models are kind of recursive. Once a pattern begins, it's almost like committing itself to a path: once a pattern begins there will be an end to that, and effectively there might be some degree of randomness added to it, to give it a certain amount of realism. One of the things in machine learning, certainly when we build a lot of dense models, is that you sometimes drop some parts of your model just to see if you can whittle away and trim away some of the things that you might not want to keep in your finished product, and that's a semi-random process. Obviously you want to start from a known point, but then you generate a kind of random process. So there can be a certain amount of randomness in these models, but very often what happens is, because it's recursive, there'll be a starting point, some sort of seed, some grain of what the user has imparted into the model, and it will then begin to spin. At that point it starts to look back and say, right, well, I was talking about something for the past couple of sentences or phrases or words, I'm going to keep going in that direction. So it starts to feed its own output back into the generation of the model. And that can lead to some funny stuff. A lot of the articles that you'll see about these models, where they talk about how they've generated just some nonsense, come from processes like that, I believe. That's my understanding: it starts to say something that is complete gibberish, and you think to yourself, how did you get to 200 words of that? It's just because it kept going down its little path and thought, right, I'm finished. Here you go, here's my answer.

Johannes Castner:

Yes, yes. But there's something else. Okay, before we get into it: ChatGPT is a bit different, right? Out of all of them it seems to be the most coherent one, or the one that makes the most coherent sense when it chats away. So that's very interesting. And it seems like they've been working on this for quite some time now; I've been reading about these various approaches. Obviously none of us can train such a model, right? We're both data scientists, and neither of us could possibly have trained it. Why is that? Because it takes too much compute power, which we don't have. So there is that. But you can read about it; it's very clear how it's done, they do disclose quite a lot about it. For one thing, ChatGPT is based on GPT-3, right? That's the large language model, or at least GPT-3 is a part of it, and then there are other aspects as well, such as Codex, which is a very interesting product, also produced by OpenAI, that basically helps you code. That is also behind a tool that has already been rolled out on GitHub, if I'm right. There seems to be a partnership with Microsoft involved there, because Microsoft owns GitHub, if I'm correct. So they're using this thing, OpenAI Codex; GitHub Copilot is based on Codex as well. But Codex is also underneath ChatGPT, right? So it's very interesting how they fit these together. It seems as though there's a modular approach at work here, where several modules have been developed in isolation and then combined, and that combination gives this chat such power. Is that the right way to read it?

Alan Tominey:

Yeah, I think that's kind of it. But there must be something else, and this is the thing often with software: it's the bit that they don't tell you about that's the revolutionary part. They're being very open about GPT-3 being part of it, and there are hundreds of applications using GPT-3 under the hood; OpenAI has been distributing it quite happily for a long time. And like you say, it's been used for things like GitHub and stuff like that, to help people with coding, and that's quite useful. There's obviously some very well-built code model there, and it's actually interesting that it's the GitHub people that have done it, because if anyone's got a huge amount of code in their database to train something on, it's going to be GitHub. It's a bit like when you do the TensorFlow certification, or some of these Google courses: one of the models they'll walk you through as part of the course is the one where they took that huge picture repository and trained image recognition on it. Inception.
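
As a concrete illustration of how applications sit "on top of" GPT-3, here is a minimal sketch of a call to the OpenAI completions API from Python. The model name, prompt and parameter values are illustrative placeholders; this is not a description of how ChatGPT or Copilot are wired internally.

    # A minimal sketch of using a GPT-3-family model through the OpenAI API
    # (assumes the `openai` Python package and an API key; the model name
    # and settings are illustrative placeholders).
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family completion model
        prompt="Write a short product description for a reusable water bottle.",
        max_tokens=100,
        temperature=0.7,
    )

    print(response["choices"][0]["text"])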

Johannes Castner:

Yes, yes. Inception. It's available; I think a trained version is available on TensorFlow Hub.
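
For reference, a minimal sketch of pulling a pretrained Inception image classifier from TensorFlow Hub. The module URL and version shown are an assumption and may differ from what is currently published.

    # A minimal sketch of loading a pretrained Inception classifier from
    # TensorFlow Hub (the module URL/version is an assumption).
    import tensorflow as tf
    import tensorflow_hub as hub

    model = tf.keras.Sequential([
        hub.KerasLayer(
            "https://tfhub.dev/google/imagenet/inception_v3/classification/5"
        )
    ])

    # Inception v3 expects 299x299 RGB images scaled to [0, 1].
    images = tf.random.uniform((1, 299, 299, 3))
    logits = model(images)
    print(logits.shape)  # (1, 1001) ImageNet class scores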

Alan Tominey:

Massive, yes. Yeah, exactly. So you can take that. But that was trained using some massive online repository of images, to categorize images, which is perfect, because like you said, it's not something that we could do. I mean, I've basically got pictures of dogs, cats, and my kids running around my garden, so unless those are the categories I want, I'm not going to be able to build a model that can do that. So these people have obviously done something with the code side of things, and that's going to be a huge asset. But then what they've done for ChatGPT is put this together in a really clever way, and that's the bit that interests me. Sometimes when you're building physics-based models, you need to go back to really simple concepts to try and get these things to work. You can go for minimum action, or least action, in physics-based models. Take a rope that's dangling between two poles: you can calculate or simulate that by looking for least action in the little segments of the rope, and it's a really simple physics-based concept. Or you can try to really model the tensile strength and all that kind of stuff in the rope, but that's going to be difficult. So I really hope that the ChatGPT guys have figured out something really simple about putting these things together, and that's where the stability comes from, because the simplest concepts usually lead to the most stable output, you know?

Johannes Castner:

Yeah, absolutely. Okay, that's great. So we have to admit that we don't know everything about it, right? ChatGPT is a partial mystery to most of us, I suppose, and partially we know this; but we do know how large language models work, generally speaking. And then we also know that there is another element to ChatGPT, which seems to be this human supervision, or I guess they call it machine teaching. Like you said, there's a story about the cat, and now we're going to tell you whether that story was good or bad: humans will sit there and say, okay, well, that story's not as good as the other one you wrote, and so on, and then it gets feedback. In a similar way to how we do things: we read our writing to our friends and they say, well, that one is good, or this one's not so good, you have to add something. So there are elements of that that are kind of similar to the way humans learn to write better stories. The human in the loop is certainly there, and they even write about it. They're also writing about the customizability of it, which they're working on; that's really interesting as well. We briefly talked about this in a previous episode with Dorothea Baur, about what one might call the democratization of these kinds of tools, and we raised some issues there, even with the meaning of the word. But the point is that it's very interesting that you can then use your own values; you can, in a way, teach the AI, the system, your values, and then it will represent them and write not only with your style. The style, I suppose, was the first improvement they did, right? You can now write in the style of, I don't know, Alan Tominey; I could borrow your style of writing. But the next step is the values: the ethical values, the ideas of how you interpret the world and what you find good and bad in it. It can actually take those on, or that will be the next step; they're not quite there yet, but they're writing about how they're planning on doing this. And then, interestingly, you will not be able to blame it on the AI anymore. You'll say, well, the AI helped me write it, but it's me; these are my values, and it's just representing me, essentially. And I guess that's a good thing, more or less. I think it's an interesting new direction, and a good thing, because the responsibility goes to the writer. You can say things that are racist, but it'll be you saying it; it won't be the AI, it will actually be you. The AI is just helping you to do it faster and to do more of it.

Alan Tominey:

Yeah, well, that's the thing. It basically ups the entropy in the room. You can create lots of content, and that was one of the main early applications for these things: content creation, just keep making content. But in the GPT-3 API you have these things called prompts. You give it your prompts and you say, okay, this is what I look like and how I sound and this is what I write like, and then you give it your input and it should come out with something that would sound and look like you. But that's how you've done it. And I would hope that that's the future for something like ChatGPT. As a cool little website on the internet where people can go and type little things in and get stuff out, that's cute, but the application of this has to be some manner of API that can be called by other things. That has to be the evolution of it. And then, like you say, the impetus is on people to adapt to it and to use it responsibly, obviously. But it's not the AI's fault when someone uses it as a bad actor, because effectively that person, or company, or corporation, was going to try and do that anyway; it's been on their mind, they've just found a tool. So it's a difficult one. And from an academic point of view, when people are using it to summarize scientific papers to help them write their reports, that's fine. But I think it's maybe on the lecturers and the tutors to find ways of using that, and of creating a course that can't be manipulated by that kind of action. People need to adapt to these things, bring the fact that this exists into our way of working and our way of living, and just keep moving, you know? That's kind of how it works.
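
To make the "prompts that carry your style" idea concrete, here is a minimal sketch using the OpenAI chat API with a system message describing the desired voice. The model name, persona text and parameters are illustrative assumptions, not a description of how OpenAI actually implements personalization.

    # A minimal sketch of steering output style via a system prompt
    # (assumes the `openai` Python package; the model name and persona
    # text are illustrative placeholders).
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You write in a plain, direct style with short "
                        "sentences, no hype, and you always flag uncertainty."},
            {"role": "user",
             "content": "Draft a two-sentence bio for a data scientist."},
        ],
        temperature=0.7,
    )

    print(response["choices"][0]["message"]["content"])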

Johannes Castner:

Yeah, I agree. Well, we have to teach humans how to use it well, right? That's one thing, and it's also a moving target; it will keep changing as we go along. So you can't just make a university writing course where you include ChatGPT as part of the experience of writing now; you have to actually update it all the time, and you have to make people aware that they have to update their knowledge of it, because it's continuously moving, and it's becoming a bit of a dialogue. We can tell it to improve itself and it will do that, under certain circumstances. Okay, that's great. The other question is: who's going to read all of this text? Now every one of us is able to write mountains of books, but is that meant for human consumption? Why should we read all of that AI-generated text? Is it really valuable? That's the next question, I think. Why is that valuable? Why is it not more valuable for it to come from human experience? It's not embodied. Can it really teach us something new? Can it be creative in a way that will benefit us when we read it? What do you think about these things?

Alan Tominey:

Well, yeah, that's one of the dangers, and honestly I think economics is probably going to push back hardest against that, because effectively you could in theory have one of these large language models just churn out novels. You could say, write me a billion books on any subject you want, and it could generate terabytes and terabytes of data; you could have it spitting out the Library of Congress's worth of books every couple of days. But no one's going to pay for that compute power. First of all, someone at some point is going to go, where is this bill coming from? And then you've also got to pay for the storage: that has to get stored somewhere, and it's going to be on Amazon or Azure or Google Cloud somewhere, so someone's going to be paying for that. I think the economics are going to push back against that a little bit, but it is a danger.

Johannes Castner:

But storage seems to be quite cheap these days. Churning it out, though: compute power, I think, is more expensive. But then, does it cost a lot of compute power to do? Well, I guess that would be called inference, right? Traditionally, when we use a trained model we call it inference, versus training. We should make that differentiation, at least.

Alan Tominey:

Yeah, definitely. Yes,

Johannes Castner:

Yes, because the training, we know the training is monstrous when it comes to energy usage. That's the common critique, and you're an energy expert, so you might have something unique to say about this. But isn't it true that that's actually not that big of a deal, because we only train it once, in a way? Or maybe not once, but relatively infrequently?

Alan Tominey:

Well, but that's the thing: do we? Because when you read between the lines in some of the ChatGPT stuff, like you say, there's an intervention, there's continual intervention being done in there, and it looks like there's a maintenance aspect to it, so they must be doing something to keep it up to date. And I was taking it to the extreme: I'm thinking, if someone were to leave it on and it just started generating data, you come back in a hundred years' time and you've got the sum total of human knowledge on a bunch of hard drives, and this much of it was created by actual humans, and this much of it was just generated by some browser tab that someone left open on ChatGPT. That's an extreme example, but I think it's possible, because like you say, who's going to consume this content? If it got to the point where you have lots of these bots or large language models sitting around the internet, and they can be communicated with by other things, and they start talking to each other, you get this feedback loop of content generation that could very quickly spin out of control. You start getting into this situation where, you know, people moan about cryptocurrencies taking a huge amount of energy, and some of them do take a lot of energy, but nothing compared to, say, farming, or the energy industry. But if these things start generating content for things that consume content, which then generate more content, that becomes a bit of an issue.

Johannes Castner:

But the content is also, I mean, it's got to be repetitive, right? Because it's not learning anything new. Or it might be, but compared to what it can put out, the new things that it learns, if they're human-generated, humans can't keep up with it. They can't teach it new things at the same pace as it can keep parroting, essentially, old knowledge. Because what it really is, is a type of parrot, right? If you ask it to tell you a story about, say, a lovely day in London, it'll say the sky was blue and so on. But it doesn't have any experience of a blue sky; it won't even know why "the sky is blue" is a phrase that humans like. It knows that humans like to say that when they talk about a nice day, because that's frequently what they talk about. But that's really all it does: it's based on these frequencies, probably.
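
As a toy illustration of the "frequencies" point, here is a minimal sketch of a bigram model that predicts each next word purely from how often word pairs occurred in its training text. Real LLMs are vastly more sophisticated transformers, but the basic "continue with what usually comes next" flavor is similar. The tiny corpus here is made up for the example.

    # A toy bigram "parrot": it continues text using only the frequencies
    # of word pairs seen in a made-up training corpus.
    import random
    from collections import Counter, defaultdict

    corpus = "the sky was blue and the day was lovely and the sky was clear".split()

    # Count which word tends to follow which.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def generate(seed, length=8):
        words = [seed]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            # Sample the next word in proportion to how often it followed.
            choices, counts = zip(*options.items())
            words.append(random.choices(choices, weights=counts)[0])
        return " ".join(words)

    print(generate("the"))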

Alan Tominey:

Although with London, you're more likely to get: someone uploads a bunch of Yelp reviews and it's like, oh, miserable. Yeah, exactly. One of the things I keep trying is to keep an eye out for a big scientific discovery, like the James Webb telescope seeing something, or some new image or new finding. I'm going to keep hitting ChatGPT and asking it the question about it, to see when it gets updated, when it's able to go and get that information and bring it in. So if we find a new planet or something like that, I'll ask, right, where's the latest planet? And then just see when it updates. You kind of have to keep testing it, you know.

Johannes Castner:

Yeah, that's a good idea. In general, do you want to tell us a bit about what you found in your playing around with it? We had an offline conversation earlier where you showed me a few things; maybe it would be great to let the viewers in on this.

Alan Tominey:

Yeah, absolutely. Let me get it up, let me share. So I think this is the thing that I find it quite useful for, and it's probably where it starts: it finds it really easy to generate code. So you can say, you know, write a method to slice...

Johannes Castner:

Write a method to slice... oh, this is for pandas DataFrame work. Okay, great. Yeah, this is a good example. I see. So, for the listeners, you just typed in: write a method to slice a pandas DataFrame by named column, in Python. Okay, great. And it spits out code, and very nice, I like the formatting of it. It has a black background, so it looks exactly as it would in various, how do you call them, IDEs, I suppose.
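
For readers following along in audio only, a function of roughly the kind ChatGPT returns for that prompt looks like the sketch below. This is a reconstruction for illustration, not the exact output generated during the recording.

    # A reconstruction (not the exact ChatGPT output) of the kind of helper
    # it produces for "slice a pandas DataFrame by named column".
    import pandas as pd

    def slice_by_column(df: pd.DataFrame, column_name: str) -> pd.Series:
        """Return the named column from a DataFrame.

        Args:
            df: The DataFrame to slice.
            column_name: The name of the column to return.

        Returns:
            The requested column as a pandas Series.

        Raises:
            KeyError: If the column does not exist in the DataFrame.
        """
        if column_name not in df.columns:
            raise KeyError(f"Column '{column_name}' not found in DataFrame")
        return df[column_name]

    # Example usage:
    df = pd.DataFrame({"name": ["Ada", "Grace"], "score": [95, 98]})
    print(slice_by_column(df, "score"))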

Alan Tominey:

Yeah, and the beauty of this as well is that it actually follows kind of Pythonesque protocol. It uses df for the DataFrame and pd for pandas and all the things that you're working with, it uses underscores for your variable names, and stuff like that. So it works really nicely, and it gives you examples, which is really cool. I did another one earlier, the one we were talking about, where it created a function to do that and basically spat out the function for me. Now, this is really useful because, as a software developer, and someone who manages software developers, I like the fact that it's commented, it's got the arguments, it's got the return values, and it follows a methodology, which is awesome. If everyone wrote like this, life would be so much easier when fixing bugs. So that's one of the things that's really useful. The other one that I did as an example: I asked it a question. If it takes an orchestra with 20 people in it three minutes to play a concerto, how long will it take an orchestra of 30 people to play the same tune? And it did a very programmatic thing: it went away, it saw numbers, and it obviously went and did a sum with those numbers. It prefaced its answer by saying, assuming all the other factors are the same, the tempo and all this kind of stuff. Then it says, right, well, it takes 20 people three minutes, so how long will it take 30 people? It goes away and does the maths, correctly based on what it assumed, obviously, and it says it takes four and a half minutes for 30 people to play that same tune.

Johannes Castner:

Oh, that's weird. Okay, that's a really weird answer, because it could have also made it shorter, right? If we were to ask, how long will it take 20 bricklayers to build a wall, and then how long will it take 30 bricklayers to build the same wall, it should be less time, not more. But the orchestra, that's a really funny example. It didn't understand.
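
For the record, the arithmetic behind that odd answer appears to be simple proportional scaling of the player count, which is the wrong model for a piece of music with a fixed duration. A quick sketch of both readings (the 4.5-minute figure is the one reported in the conversation; the "sensible" line is our own reasoning):

    # The naive proportional scaling that seems to produce the 4.5-minute
    # answer, versus the common-sense reading that a piece of music has a
    # fixed duration regardless of orchestra size.
    minutes_20_players = 3
    naive_answer = minutes_20_players * 30 / 20   # 4.5 minutes
    sensible_answer = minutes_20_players          # still 3 minutes

    print(naive_answer, sensible_answer)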

Alan Tominey:

The thing is, I actually thought I was being quite generous, because I said it's an orchestra playing a concerto, a tune. I was hoping, deep in my soul, that it would get it right: that it would recognize that orchestras play songs, and songs have fixed lengths. When I saw it spitting out "assuming all the other factors remain constant, like the tempo and the skill level of the musicians," I was hoping it was about to tell me it would be the same amount of time, but no.

Johannes Castner:

Yes, of course, of course. But that proves the point well. This demonstrates, in a visceral way, a point that Noam Chomsky is making, which is that we will not learn much about how our brain operates on language, or how language works in the human context, from such machines, because these machines don't really understand the meaning. They understand semantics in some ways, and it's interesting: semantics, I guess, is the study of meaning in language, and it understands it in the way you would if all that language were just a sequence of words. Words can refer to each other, so it understands this reference; it understands that blue is an adjective for the sky. But it doesn't understand the meaning of it, because it's obviously never been walking around outside and it never saw the sky. It doesn't have any real reference to it; it only has it insofar as language does. So it lives entirely in a world of language, and that's not what humans do, and that's not the point of language. The point of language is to signify something.

Alan Tominey:

So what I did after that, after I'd had a bit of a failure in that sense, was to look at the scientific-paper aspect of it, because I'd seen a lot of chat about the science thing, and it comes back to this language question, understanding language. The real thing with language is the nuance, how someone says things, inflection and all this kind of stuff. And I was trying to think, okay, how can I look at how well it interprets text? So I put in the abstract of a paper I wrote years and years ago, when I was doing my postdoc, and I asked it to summarize the text for me. And it actually did pretty well: it rearranged my abstract into something usable. I could use it to explain the paper to someone who was interested in knowing what I wrote. But it missed the nuance of everything; it kind of missed the point of the paper. Very often when you write an abstract, you can't say what you want to say, because you don't necessarily have rock-solid proof for it. What I was doing in that particular paper was saying, look, we made a crystal that showed a certain structure exists, and that could be the reason why these drugs don't really work that well in this circumstance. That's what we were trying to infer from the paper. But obviously ChatGPT has no idea that that is what I was trying to say, because it doesn't know which journal I published it in, why we did it, what the articles around it were, or what the context was. So it summarized elegantly what was written there, but it missed all the nuance. And that's one of the tricks and one of the things that we find as well: scientific papers can be quite misleading in some cases, certainly in the energy industry. I find that in industries where there's a lot of money involved, there's an impetus to patent things and keep things a little bit on the down-low, so people can't really reproduce what you're doing. In the data science world you don't see it as much, because there's an open-source nature to it, but certainly in some of the higher-revenue industries you see, I'm not going to say dishonesty, but a certain amount of keeping things out of some of the papers that get published, because people want the credit. There's a grad student saying, I want my name on this paper or I don't get my PhD, or my postdoc funding is pulled if I don't publish, but they don't want to let it all out there, because that's what they're going to try and maybe spin off into a little company of their own. So you see things like formulas, or hidden steps in algorithms, that are not written about in a paper. If you give one of these papers to ChatGPT, it will just rote-learn what's in that paper and spit it out to you, perfectly correctly, but it probably won't understand that there are bits missing. You won't find out there are bits missing until you actually try to implement what you see in that paper.

So it kind of comes down to the actor, the person who's putting this stuff into the algorithm or the model, and also what they're feeding it. That prompt, for me, was my abstract. If I feed it another paper, it's going to give me a scientific-paper summary, but it might not necessarily be what I need to know from that paper. That's interesting to me as well.

Johannes Castner:

Yeah, yeah, that's very interesting. When... yeah, go on.

Alan Tominey:

Sorry, I was just going to say: when you come back to Noam Chomsky's thing about language and how you learn it, it's kind of the proof part. Effectively, when we take what we know about language and implement it in one of these models, that kind of proves that we were right, but it didn't help us work out how to get there. And it's like, if we try to get it to make a jazz song: a jazz musician will go on about how they're missing different notes and making up different notes as they go along, maybe missing different beats, and you hear that thing where you have to listen to what they're not playing as well as what they are playing. I wonder how long it's going to take us to get to that level with these models, you know?

Johannes Castner:

Yeah. I mean, is it even possible, with a large language model of any sort, to get to that point? That's also questionable in some ways. And that all leads to the question: we can't learn how language works from them. That's Noam Chomsky's main point, I think. I think his other point, which maybe isn't quite as good, is that it's basically just high-tech plagiarism at this point, which is partially correct. When you ask ChatGPT to write something, it will contain ideas that are taken from somewhere, from some of the texts it's trained on, but it will not attribute them to those sources. So that's another issue, one that potentially could be fixed; they're talking about fixing that part. So I think that's...

Alan Tominey:

...possible to fix, yeah. I mean, they'd basically just be referencing what it used to give you that answer. I wonder if it already does, because I know there's actually quite a lot more output than what it shows you. If you access it programmatically, it gives you a lot more dense information than just its answer; it gives you machine-understandable information. So maybe it already does give you referencing and just isn't showing it.
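
For what it's worth, the completions endpoint does return more than the bare text: metadata such as token usage, the finish reason and, on request, per-token log probabilities, though, as far as we know, not citations of source documents. A rough sketch of the shape of a response (field names as documented for the completions API at the time; the values here are placeholders):

    # Roughly the shape of a completions-API response object (field names
    # from the public API docs of the time; values are placeholders).
    # Note it carries metadata such as token usage and finish_reason, but
    # not, as far as we know, references to source documents.
    example_response = {
        "id": "cmpl-xxxxxxxx",
        "object": "text_completion",
        "created": 1680000000,
        "model": "text-davinci-003",
        "choices": [
            {
                "text": "...generated answer...",
                "index": 0,
                "logprobs": None,         # per-token log probabilities if requested
                "finish_reason": "stop",  # or "length" if it hit max_tokens
            }
        ],
        "usage": {
            "prompt_tokens": 12,
            "completion_tokens": 48,
            "total_tokens": 60,
        },
    }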

Johannes Castner:

That's possible. You can ask it, yeah. But when we do it in a college course, we're taught that we have to properly cite our sources, right? This is part of university writing: you have to cite your sources. And interestingly, this thing does not cite its sources, which you would think, well, all of the people who worked on it went through college at some point and were probably taught to properly cite their sources, and yet they didn't teach it to do that. So that's actually interesting in and of itself. But suppose we can fix that; it doesn't seem to be an impossible hurdle. I've tried to play around with it and asked it, where do you know this from? Who wrote this? And it's told me the references. Sometimes it makes up references, which is completely weird and ridiculous: it makes up names and says, so-and-so et al. wrote this, and then you look for them and you can't find them, and you wonder who these people are. It just made them up out of whole cloth, really interestingly, fascinatingly. I guess that is the large-language-model part of it: it predicts patterns, and one pattern is a citation, so it creates one, makes one up, and names kind of sound similar, so here's a name for you. I guess that's what you would predict from a large language model, in a way. But suppose we fix all of this, and we get correct references and all of that behavior. The next question would be: what is this good for? What can it really do that adds something? I can understand how it's good for a college student who needs to write a paper and wants to do it quickly, but that's sort of beside the point, because that's not why you're writing the paper; you're writing the paper to learn something. And here you are, as Chomsky pointed out as well, using this tool to avoid learning. You would do that if the subject is boring and you're not really interested, okay, I can see the point, but if you're interested in the subject, why would you use it? So there's that. And then there's the question: what is it good for? What can it be used for that is a genuine application? Can you make money with it? That's a question I think a lot of people would be interested in. What kind of businesses could you see being built on top of it? That's one question I have, actually. What kind of businesses could you see going forward that we could build on ChatGPT?

Alan Tominey:

Yeah, I think, in terms of the businesses, unfortunately the ones who will be able to answer that will make a business and become very, very rich, far wealthier than me. If I could think of a good one... in general terms, probably ones that are very person-facing. You could maybe get it to make up a bunch of t-shirt slogans, then get a factory somewhere to make you all these t-shirts and say, right, these are all ChatGPT t-shirts; being the first one to do that will probably make you the wealthiest person doing it. A whole bunch of things like, make cute things about my dog and sell them on Etsy, that kind of stuff. There are probably small industries around that level. I've got some cute books for my kids where you tell them your kid's name and a handful of little things they like (swimming, reading, writing, running, jumping) and they'll print you a little storybook about your kid. You could make a far more dense version of that with ChatGPT; you could get it to write a proper novella. But these things already exist; that's already happening. People are already making t-shirts and selling cute, personalized things like that. I think the most useful things I've seen from this are, again, already being used: the code thing. I think academia really has to shift itself to be able to cope with it, and like you say, if you've got a whole bunch of students who are just not engaged in your course and are literally just trying to get through to the end of it, it's just, spit me out an essay on this topic. If they get unlucky and it prints out an essay that's gibberish and they get a really low score, then they'll get caught, obviously. But if they hit the mark well enough, if that person says, write me a C-plus essay on this topic, so they just go under the radar, then that would be quite interesting as a way forward. But one of the things that's interesting from my point of view is that we try to make things like expert systems; we try to impart knowledge into systems, and that's really tricky, because how do you do that? How do I take someone who has been working in a particular area or field of knowledge for, say, 30 or 40 years, someone with those intrinsic little pieces of knowledge, who knows, okay, that's happening, so if we change that over there, it's going to correct it, in these large, complex, integrated systems where it takes a long time to get that kind of knowledge ingrained, and impart that into some kind of system? And I think ChatGPT's learning algorithm is the thing that's of interest to me in that sense, because I need to be able to turn it somehow into a process behind the scenes that I can trigger when I see things like that happening.

So I'm not necessarily as interested in some of the outputs that come out of ChatGPT, but definitely the methodology, and how it consumes extra information and prompts. That's something that I think will be useful going ahead, you know?

Johannes Castner:

So then the next question I had: we were talking a little bit offline about Galactica, which is a related large language model that was built precisely to write scientific papers, or to help in that process. Maybe not to write the entire paper, but to help you, for example, write the citation part, or what is it called, the literature review, or give me all of the things that I should cite on some particular topic. This was put out by Meta, and it was live for about three days for people to try out, sort of similarly to how ChatGPT is now: you could go and try it out for free and interact with it. That was also the case there, but after three days it was taken down because of a barrage of tweets and complaints. Let's talk about that. How do you see that fitting into the whole ecosystem of AI systems?

Alan Tominey:

Yeah, so I think that was really interesting. I kind of watched that unfold and thought, wow, because at the same time it sparked a lot of the discussion that a lot of the media and a lot of people are enjoying in terms of fear-mongering. When it started to say things that were outrageous, or was being prompted to do things, and it became more and more of a catastrophe, a lot of people were pointing their finger and wagging it: look, see, this will destroy us all. And it's like, well, no, it just hasn't been given the correct boundaries; it hasn't been provided the steer. It brings back memories of when I create software and people use it: I build software and try to test it as much as possible, and we have a whole raft of different QCs and QAs that we go through, but inevitably, when it gets on someone's desk and they start putting numbers into it, it's things that we never even thought about, and stuff happens, and you're like, wow, first of all, why are you typing that into my software? That's not right. But then when it starts going wrong, you start thinking, okay, should I handle that? Should I really be writing my software to cope with this nonsense? So it's one of these things: who bears responsibility for it? Because if you start getting Galactica to spit out information about these things and it starts saying this incredible stuff, you're like, wow, well, why did you ask it that?

Johannes Castner:

Well, no, but that's very interesting, because some of the things that were asked weren't that extreme, right? One of the researchers, hold on, I can dig this out here real quick, Michael Black, who I think is a director at the Max Planck Institute, asked it questions about things that he actually knows a lot about. Yeah, so technical questions.

Alan Tominey:

And it just lied. Yeah.

Johannes Castner:

So the thing is, and I think this was the big critique of it, the dangerous part is that it sounds very authoritative, and to anyone who doesn't know about the topic, it looks like it could be accurate. It looks reasonable; it doesn't sound so crazy. When you ask it about the history of bears in space, which they did, you know it's unreasonable, because we all know there are no bears in space. But when it comes to topics where there is some plausibility, which most of us don't understand but some expert does, the expert can ask the questions and see that it's wrong, that it's made up, but still authoritative-sounding and potentially plausible. That's when I think people got really freaked out. But here's a question that maybe most people don't ask: don't you think it's actually a good thing? When people say, oh, see, here we go, this is the end of the world, and it showed us how it's going to create all this misery, shouldn't we be really happy to see that it was taken down after three days? That's what I take away from it: it was shut down, despite the fact that Yann LeCun didn't want to take it down. He actually tweeted, "Are you happy now? We shut it down." He was defending it all the way to the end. But actually we should be happy, because it was corrected: it was taken down, and now presumably it'll be improved before it is let out again.

Alan Tominey:

Yeah. And I'm sure they will have garnered a huge amount of really valuable data from even the three days that it was up and people were using it. Like I say, it will have had to deal with stuff that it would never have expected or could never have been trained for, and that will be really useful information. And it's an example of a company acting really responsibly and taking it down, and they still stand by it. I saw a post the other day from one of the Meta guys, and they were still standing by it, saying, look, we had to take it down, but I believe in this technology; that was the gist of what he was posting. So they're obviously still working on it in the background, and that's good. Maybe Noam Chomsky will be very happy that, effectively, these large language models look like they have a sort of Dunning-Kruger effect of their own: they'll just say things they don't really know anything about, and it'll be up to us to figure out whether they're...

Johannes Castner:

Yeah. But I think his criticism stands, and at the same time, it stands for now, or maybe it stands forever when it comes to really understanding language, because it can "understand" languages that can't possibly exist. This is part of his criticism, right? We can make up a language that goes against all the rules of human language, an impossible human language that can't exist in this human world, and the model will just do the same thing as it does with real language; it won't notice any difference. From the perspective of the large language model, it's basically the same. And that's something we will not get over, for sure. But you could say that maybe the point of it isn't to teach us about language. Okay, for sure, we cannot learn how human languages work from it, but that might not be its point. We can still make cute t-shirts and make lots of money, and we can still maybe use it for marketing purposes, which I think would probably be the biggest one. Marketing, unfortunately, has never been very high-quality writing, in my opinion: it's hyperbolic, it's trite, it usually uses metaphors that are highly overused. There seems to be no problem with that for people; they seem to still buy it. For me, when I read things of that nature, trite things, overused or even wrongly used metaphors, not quite the right thing... so I asked it, for example, can you write me a hook for a chapter on algorithms, what are algorithms? I just wanted to try it out, and it gave me back that algorithms are the tireless workers, the unsung heroes. That sort of language. And of course they're not unsung heroes; it's absurd, because we're talking about algorithms all the time. How are they unsung heroes? But this language is just something that works well for marketing, right? It seems to work well for marketing at the moment. So maybe for marketing it's a good use case. Why not?

Alan Tominey:

Yeah. Well, if they want to go for volume rather than quality, this kind of thing is definitely going to do it. If you're a copywriter who is just trying to create little snippets of text that go underneath images, then this is going to save you work, or probably take your job. So that might be what it's most used for. Marketing generates such a huge amount of money for so many companies around the world; I think people would be very surprised at the budgets that companies like Coca-Cola and Microsoft spend simply on marketing their products. It's a huge sector. And I would be worried about how that evolves. People talk a lot about the Facebook and YouTube algorithms putting videos in front of people that are going to rile them up, just enough to get someone worried or make someone anxious, putting content in front of people that is not healthy for them but will trigger a response. And those responses are geared towards keeping you on the platform, making you engage, write the comment, get angry, winding you up. If you couple those algorithms with something like ChatGPT that is able to just generate that stuff, that's a problem. That's a potential issue that really could be

Johannes Castner:

a problem. Yeah, it's psychometrics, basically: if you learn how to push people's buttons to get them engaged and keep them buying and so on, and then you combine that with a language model that generates the language, and maybe a deepfake on top of that, yeah. I don't know. It's a scary world. It's

Alan Tominey:

a, I mean, you could get pretty malicious with it. That's one of the things with Twitter, where they're talking about trying to get rid of the bots, or at least telling people they're trying to get rid of all the bots. You do get these bot responses where, if you say something about a company, the company will respond, but that's not a person responding; it's a little bot triggered to write you a message because it has seen negative feedback about the company on Twitter. Absolutely. And you could get pretty complicated content generated. You could almost have a billion arguments started with every Twitter user, just by ChatGPT. Just tell it to annoy everyone on Twitter and watch it go. So,

Johannes Castner:

Well, let's stick with this for a second more. When we're talking about this kind of thing, what's the next step once people realize it? As human beings, once a lot of people realize, for example, that Facebook is causing harm to young teenage girls, and other things, and destroys democracies and so on, it turns out Facebook starts losing a lot of users, right? Now take that logic to the next level: the risk might be that the internet itself becomes completely useless. If I think of the internet as a source of information, when I want to inform myself I go to the internet; that's what I currently do, and that's what the internet was built for, in fact, for informational purposes. Maybe there are some reputable sources, so it could still make sense: forget about most of the internet, but the reputable sources are still good. But now, if you can fake being from a reputable source, suddenly maybe the whole internet stands to lose its use value, its original use value of being a source of information. Do you worry about that?

Alan Tominey:

Um, yeah. That's one of the things a lot of people are very concerned about, and rightly so. I think this starts to lead into the discussion about Web3 and trying to democratize the system, because if you can democratize a system like that, then you can try to mitigate bad actors as much as possible: you say, right, we're all building the system together and everything is in there. But the difficulty is that if you have something that can generate this kind of content, there's nothing stopping you from creating a whole bunch of articles, a couple of deepfake images or videos, maybe spitting some information into Wikipedia, and all of a sudden you've invented some kind of thing that didn't actually exist but could cause

Johannes Castner:

a lot of problems. Yeah, absolutely. And this also speaks to a previous episode, where we were talking about the democratization of AI. In that episode, Dorothea Baur explained to me that the problem with democratization is that it cuts out experts. Experts are, in a way, today's gatekeepers of reputable, good sources of knowledge; we go to the experts to find good knowledge. Now, if we democratize things, it might become even worse, because it becomes much more populist and everything is put on an equal footing, including utter garbage. And now we can produce garbage in really large quantities. If we then add democratization, I'm worried it actually gets worse than that, depending on how we do the democratization.

Alan Tominey:

Well, that's the thing. I have seen something like that happen in the industry I'm in. For example, the way we avoid that from an academic point of view is that you have an impact factor for the journals you're publishing in, and typically you will trust the journals with the highest impact factors. The idea is that you create a relationship between the content people are producing, the peer review process they go through, and how rigorously the research they're publishing is assessed. That gives you some level of confidence in what you're seeing when you look at a journal. You can look at some of the German chemical journals and think, right, I know the people who published this paper went through hell to get it into this journal: there will have been several rounds of questions, a long process of writing it, and it will have been really rigorously reviewed. And you can look at some of the magazine puff pieces and think, right, I'm not going to trust that. But there are some industries where there are bodies that publish what they call scientific research with no peer review process in any of their journals, and you basically just see the regurgitation of ideas that are already out there, for the sake of people getting a chartership. It becomes really quite soul-destroying sometimes, because you're looking at this stuff thinking, God, there's absolutely no rigour in any of this. So you can see it's already happening in certain industries. The ones that are most rigorous tend to be the ones that make the least money, unfortunately, because they have to maintain a certain level of rigour to maintain a level of respect, whereas the ones that are just spitting out content in the form of scientific papers are doing it because it gets their names out there and they can put it on their CVs. So the problem then becomes: how do we vet that? How do we make sure we don't open the doors and let these large language models just generate their own peer review and their own content, content that tricks the reader into thinking it's rigorous scientific information?

Johannes Castner:

It's true. You and I might carefully evaluate the sources of what we read; I do, and you do, clearly. But many people don't, and there's this phenomenon called confirmation bias. If we have confirmation bias, and we're not that rigorous, and we just want to hear what we wanted to hear anyway, that involves voting, right? So if it has any impact on democracy, then we are all victims of it, even if we vet our information carefully, look at the sources, and know what is reputable and what is not, because not every one of our compatriots does that. A large number may not, and then we end up with, how to say, gangsters taking over our political houses, aided by ChatGPT and deepfakes and so on. That's something I'm worried about: that we're getting worse in this direction, that we're allowing ourselves to make more psychologically enticing pieces that are basically fake news, because we can actually use very rigorous, very peer-reviewed methods to fool people. They go to the well peer-reviewed journals of psychometrics, find out what makes us tick, and then give us what we want. What's going to trigger people to vote for me? Yeah, exactly. So that's something

Alan Tominey:

that's definitely, definitely it. It's got, I mean, I then wonder. You've got people who generate computer viruses and maybe make money out of selling the cure for the virus, but you also have the ethical hacker, people who are genuinely out there to help. And I think we must have the ethical data scientist in the same vein, someone who is looking for ways to recognise this kind of content. These are numerical algorithms, and all numerical algorithms have signatures of some sort, right down to the way they converge and the way they generate information. So is there a way for me to differentiate a deepfaked image from a real image based on understanding where it came from? If it's coming from something like a generative adversarial network, there's the discriminator and the generator, so is there a way of us amping up the discriminator a little bit to try and say, okay, this image, is it real or not?
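
A rough sketch of the discriminator idea Alan is gesturing at, purely for illustration: the layer sizes, the 64x64 image shape, and the use of PyTorch are assumptions, not anything stated in the conversation, and an untrained toy network like this would not actually detect deepfakes.

    import torch
    import torch.nn as nn

    class Discriminator(nn.Module):
        """Scores a batch of images in (0, 1): closer to 1 means 'looks real'."""
        def __init__(self, n_pixels: int = 64 * 64 * 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_pixels, 256),   # flattened pixels into a small hidden layer
                nn.LeakyReLU(0.2),
                nn.Linear(256, 1),
                nn.Sigmoid(),               # squash to a real-vs-fake score
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x.flatten(start_dim=1))

    disc = Discriminator()
    fake_batch = torch.rand(4, 3, 64, 64)   # stand-in for generated images
    print(disc(fake_batch).squeeze())       # untrained, so these scores mean nothing yet

In a GAN this discriminator is trained against a generator that is simultaneously trained to fool it, which is exactly the arms race Johannes picks up on next.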

Johannes Castner:

Yeah, but whenever we do that, somebody will find a way to improve the generator. There's an arms race, right? So basically what we're doing is taking the little arms race that's going on inside the model and making it global: now we have a better generator, then a better discriminator, and we end up in that kind of tug of war forever. I can't actually see any way out of that, unfortunately. I wish I could. Yeah.

Alan Tominey:

But yeah, unfortunately I fear that's probably where we're going.

Johannes Castner:

Yeah, arms races, which is ultimately worrying. Well, my interest then, and I don't know much about this yet, I'd like to learn more, is Solid. Right now it's a bit of a black box to me, but the future of the internet might be somewhere in there, because Sir Tim Berners-Lee is someone I have a lot of respect for, and he's been developing this concept of Solid. I know just enough about it to know it's based on semantic networks. Each node, so you have information about yourself and you should govern it: you should be the owner of the informational node that represents Alan Tominey, and I should be responsible for the Johannes Castner node. And then there are linkages that are negotiated in between. Maybe you and I work together, so we can say we're colleagues, and that link is negotiated and accepted by both of us. Maybe that will be the future; who knows where that is going. I have some hope in that; that's kind of where I see some hope. But with ChatGPT, that's just data, right? I don't know how that interacts with how the internet is structured; that's actually a very complicated question. Well, so how do you use it? Do you use any large language model?

Alan Tominey:

I've been using it, honestly, not in any kind of production sense. I've been using it to generate some interesting little pieces of code. I've used it to help me build some little dashboards, because I don't know much HTML, so it's quite useful for that, making a little section of a page. One of the things I've actually been quite interested in: I'm doing a separate course on brewing and distilling, and I've been trying to get it to spit out, maybe not a recipe for the perfect lager, but something like, give me the characteristics of the best IPA, and see what it says. And I've been thinking, okay, can I take that and see if I can turn it into a recipe? That really intrigues me, because brewing and distilling are some of the oldest things human society has been doing, right back to the earliest remains of people, and coupling that with something like ChatGPT, this cutting edge of knowledge, would be really quite cool to bring together. So I've been trying to see if it will come up with anything sensible. It's been quite generic in its responses. Apart from that, I've been trying to build a few course examples, trying to make it help me.
I think you mentioned that you were doing something very similar: getting it to write a couple of interesting questions, interesting problems to solve, to try and create a bit of useful information. I've also been using it in a generative sense. You can make it spit out lots and lots of names and dates and ages and things like that, so I've been using that in some of my research work, getting it to spit out information I can then use as sample data. So just, literally, to generate content, which I feel is kind of what it's going to be used for.
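
A minimal sketch of the sample-data use Alan describes; the prompt, the model name and the openai ChatCompletion interface are assumptions based on the OpenAI Python client of that era, not something specified in the episode, and the output still has to be checked before use.

    import json
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": ("Generate 5 clearly fictional people as a JSON list of objects "
                        "with keys 'name', 'age' and 'city'. Return only the JSON."),
        }],
        temperature=0.7,
    )

    try:
        people = json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        people = []  # the model may not return clean JSON; handle that case

    for person in people:
        print(person)

Asking explicitly for fictional records, and never treating the output as real personal data, is the sort of precaution the GDPR concern in the next exchange points at.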

Johannes Castner:

So are you worried at all that it might give you names that are made up, or that belong to real people who are not relevant to what you're asking it, so that you have to check it?

Alan Tominey:

So yeah, that's actually one of the things that is a bit concerning. GDPR is a pretty stringent regulation here in Europe, and one of the things that would worry me is if it were to come up with, say, a John Smith, 52, with an address, and there actually is such a person, purely because it's such a common name. So I'm using a lot of this stuff just for testing purposes and offline work, and I'd be very, very hesitant about using it in production-level code or production-level anything; I'm treating it at arm's length. I'd have to be quite convinced it wasn't going to get me in trouble before I used any of it. But again, that comes back to the ethics of it. Even though I know the data has been made up, I think I then have to cite it, record the fact that I got this data from here, and be very open about it: say this came from ChatGPT, or from GPT-3 generating this data. I would hope that would cover you on most bases, but who knows. Yeah.

Johannes Castner:

Yeah. So then one thing that's maybe, well, not completely different, but related: another question I had relates to what Gary Marcus was writing, where he's riffing on Noam Chomsky's critique, and it has to do with an idea that John Locke had about the way that we learn. I think we can actually put that argument to rest once and for all. It's the idea that we have a blank slate at birth, and that what we do is learn everything in a statistical way: we take in information, and everything we know, all of our language and everything, comes from that. Interestingly, language and grammar seem to work with very little architecture. I mean, the neural network does actually have architecture, so it's not a blank slate from the very beginning, it is not, but you could say it is as blank-slate-like as anything will ever be. And you can see that its shortcomings are directly related, in some ways, to the fact that we are not a blank slate. It goes way back to Noam Chomsky's arguments about universal grammar, the fact that we always have a certain structure: subject and object and some kind of verb. And in some ways we're proving that a blank slate is impossible, that a lot of the architecture of our brain, of our cognitive apparatus, has to be there at birth, has to be there from the beginning. Isn't that right?
Isn't that what, in some ways, you could say ChatGPT and large language models in general are teaching us?

Alan Tominey:

Yeah, I think so. Though I think you have to qualify this blank slate idea, because if you've got a large language model, or a deep learning model, it will have a structure; people will have implemented a certain structure. If you go back to something like the Inception neural network we talked about earlier, that has a structure: it has convolution layers and dropout layers, and that is the structure. The actual coefficients in the layers of the model might have started as a blank slate, but there is a structure to it. And in the same way, our brains have a structure. You have people who are born to exceptionally talented athletes and have children who become exceptionally talented athletes. There's a nature and a nurture aspect to that, because obviously those people are going to be out running all day, every day, and they're going to take their kids with them, but the kids will also have the body frame for it, the right fast-twitch muscle fibres and the right metabolism. So there's a nature and a nurture aspect to it. And grammar is a really interesting one. The thing that blew my mind a while ago was adjective order, which is really important in language, and people get it right almost unconsciously even though they're never taught it. You do size before colour, so if you say "it's a blue big balloon", that sounds weird to everyone, but if you say "it's a big blue balloon", everyone's fine with it. To an English speaker, getting the adjective order wrong is like nails down a chalkboard, whereas getting it right makes everyone comfortable and the language flows. And that's almost completely subconscious; nobody is really taught that in school, you know, so,

Johannes Castner:

So I guess even John Locke probably didn't think there was a blank slate in the sense that there was no structure to the brain, no physical structure. I think the argument was more that we know absolutely nothing at the beginning: the architecture is just there so that we can learn, we have the ability to learn, but everything we know is actually learned. And I guess that's exactly where Noam Chomsky says, no, that's not true, because not everything can be learned. Whereas ChatGPT can learn anything, right? It can learn a language that says "balloon blue"; actually, in French that would work, and to be fair there are some human languages where that works. But there are examples of things that can't be part of a human language; Noam Chomsky has written about this, and I don't know the examples off the top of my head, but there are some things that can't actually be part of a human language. Whereas the structure of the neural network, it does have a structure in terms of convolutional layers and other things that are programmed in, but it doesn't have knowledge programmed in. To be fair, right?

Alan Tominey:

So it can learn languages that are not possible from a human point of view. Yeah,

Johannes Castner:

yeah, yeah, exactly. So I think we are learning here that we, that we do have to have as humans where we distinguish ourselves, I guess, from these things so that they're relatively speaking a blank slate in the sense that they know nothing. They have no idea about language, they have no idea about spitting out anything before they're trained. Right? So they have to be trained. But with us, uh, uh, that's unlikely. Right. So in, in this exchange with or in, in learning about chat G t P, wouldn't you say that we are learning this difference between us and, you know, and these models and that there is some value, therefore because of.

Alan Tominey:

Yeah, definitely, and that's one of the exciting parts of it. Language evolved along with humans, so it's interesting that in lots of cultures and languages around the world, some of the basic sounds babies make when they first begin, like "ma" and "da", become mom and dad, or mater and pater in Latin, and so on. These things must have evolved along the way. But when you get to things that maybe can't possibly be in the human lexicon, right on the edge of what's possible for human language, that's kind of where innovation starts. If you think about it, Einstein's leap was combining time and space together into space-time, and that was something people didn't really think about. He took the leap and thought, right, there's got to be something else, something weird and not intuitive, but he went there, and that's where the innovation came from. And things like quantum mechanics don't make any intuitive sense. I think it was Feynman who said that anyone who claims to understand quantum mechanics doesn't understand quantum mechanics, or someone like that. Basically it's weird, it's not intuitive, it doesn't make any physical sense to an ordinary embodied person, but it works from a mathematical point of view and it is predictive. So it's an innovation where someone made the leap. So can something like a ChatGPT model, which can do things and learn languages that are not possible from a human point of view, actually add value because it can come up with something that we couldn't have come up with at the time? Maybe that could be interesting. Yeah, that's

Johannes Castner:

a really interesting question. That's actually standing the whole Noam Chomsky argument on its head, and I think that's really interesting to do. Because this thing, whatever it is, I don't know whether to call it an intelligence; I guess it's a type of intelligence, I wouldn't say it's as intelligent as us, I wouldn't make the comparison, but maybe because it's so different in its nature, and it can speak our language, it can bring back to us something we couldn't possibly think of ourselves. That's a very interesting argument, and I have not heard it anywhere else, so I think it's a really astute point. It's similar to a bat. There's this famous article on consciousness where, when you read it, you imagine being a bat, and it's very interesting because bats perceive the world completely differently: they don't see in the same way we do, they use sonar, echolocation, and so on. Just by being so different, a bat could in principle teach us about a part of the world we cannot perceive, that we can't possibly think of, because we're trapped in our own architecture, in a way. So maybe not having a blank slate traps us in a particular way, places a certain limitation on us, that we can perhaps overcome with tools like ChatGPT. And where would that bring us?

Alan Tominey:

Yeah, I was going to say, you get insects that can see different parts of the spectrum, and plants look very different at the UV edge of the spectrum compared with how we see them. So I think there's a real potential advantage in having something that can relate back to us in language we can understand but can maybe think just a little bit outside the box in a way that we can't. Quantum computing can bring that in as well: this whole thing of having qubits, how you program with them and how you utilise them to the fullest. If we ever get to the point where we develop machines that can really do it, that's going to bring a whole paradigm shift that your standard computer science person, used to working with ordinary bits, might not necessarily be able to grasp right away. But as soon as it becomes a thing and is thought about, people will start learning about it and understanding it and developing it in their own way. So yeah, can it add value just by being a little bit different to us?

Johannes Castner:

Yeah, can it extend our thinking? And that would be beyond perception, right? So far we've talked about bats and maybe frogs and insects; they have different perceptions, but we don't think they have different types of logic or different types of reasoning. But that might be true of ChatGPT, right? Because it can speak in languages that are impossible for us, it might be able to reason in ways that are impossible for us, once it knows how to reason, and it clearly knows a little bit of reasoning. When it writes a program, that goes beyond just predicting each word: for the program to work, there has to be some amount of reasoning underneath the hood. And that probably wasn't part of the previous generations of large language models; they didn't have reasoning. Maybe that's the crux of what differentiates ChatGPT from the previous generations of such models: they could pretend to reason, but clearly, if you can write a program, there must be some level of reasoning going on. Do you agree with that?

Alan Tominey:

Yeah, I mean at the mathematical level, when you program, you see something in a paper that has an integral, and from a programmer's point of view that literally is just a sum: you make a little increment and sum it all up, and that's your integral. I think reasoning, from a ChatGPT point of view, is either just a switch statement or an if statement; there are conditions somewhere, decisions being made, or a big complex random forest or something like that. Somewhere along the line there are decisions being made, and there is reasoning imparted into it. Although I don't know if it can truly be considered reasoning without the ethics behind it, because ultimately a decision has to be made and there's a kind of cost-benefit. That's a pretty simple sum, but the concepts behind the cost and the benefit are not simple; there are quite complex things to be taken into consideration at the base level. So yes, I think reasoning is not beyond these things, but it could certainly be quite complex when we get down to it, you know?
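
Alan's point that, to a programmer, an integral is just a sum of little increments can be written in a few lines; a minimal Riemann-sum sketch, with the function and bounds chosen only as an example.

    def integrate(f, a, b, n=100_000):
        """Approximate the integral of f from a to b by summing n thin rectangles."""
        dx = (b - a) / n
        return sum(f(a + i * dx) * dx for i in range(n))

    # The integral of x^2 from 0 to 1 is exactly 1/3.
    print(integrate(lambda x: x * x, 0.0, 1.0))  # prints roughly 0.33333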

Johannes Castner:

But if you program using, say, recursion, or functional programming, can it do that? And if it can, isn't that quite a sophisticated type of reasoning, to be able to write recursive algorithms, for example?

Alan Tominey:

Yeah, definitely. I think certainly if you approach it at a functional level: we've seen that ChatGPT can generate code, so can it generate its own code, its own functional programs? Then effectively it might just be a large lambda engine that can pass things it has generated itself around as parts of the arguments. Maybe that could be construed as reasoning, because it basically has the ability to adapt to a problem on the fly and generate the thing it needs in order to understand it, you know?
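
For a concrete picture of what they are asking of the model here, this is the kind of snippet one might prompt ChatGPT to produce: a small recursive function plus a higher-order, "lambda engine" style function that takes another function as an argument. The example is written by hand for illustration, not generated output.

    def depth(item) -> int:
        """Recursively compute how deeply lists are nested inside lists."""
        if not isinstance(item, list):
            return 0
        if not item:        # an empty list still counts as one level
            return 1
        return 1 + max(depth(x) for x in item)

    def apply_twice(func, value):
        """A higher-order function: apply func to value two times."""
        return func(func(value))

    print(depth([1, [2, [3, 4]], 5]))        # 3
    print(apply_twice(lambda x: x + 10, 1))  # 21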

Johannes Castner:

It also understands what you say, right? You say, I want a function that does this, and it produces the function you described. So it is not just parroting. The famous parrot example is "the sky is blue": if you say "the sky is", the parrot says "blue" because it has heard many times that "blue" goes together with "the sky is". But in this case, if you ask it to make you a function that takes such and such inputs and produces such and such outputs, it has to do something more than just scrape GitHub and look for whether such functions have already been written. Or what do you think? Is this just a regurgitation of previous code in some way?

Alan Tominey:

No, I don't think it's doing that. If it had just done the NLP processing on a big bunch of GitHub code and spat out the result, I really don't think it could do what it does; the way it structures its replies is far too sophisticated for that. It puts comments in the code, it uses the paradigms that are correct for the Python language, it even uses camelCase where appropriate. Well, that's pattern matching, but you can also give it conditions, constraints. The thing you mentioned, where it keeps using this phrase, the unsung heroes: you can give it a constraint and say, don't use that phrase, and hopefully it never uses it.
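
A sketch of the negative constraint Alan describes, phrased as a system message; the model name and the openai ChatCompletion call are assumptions for illustration, and, as Johannes notes next, earlier versions of the model did not always respect this kind of instruction.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    banned_phrase = "unsung heroes"

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Never use the phrase '{banned_phrase}' in your answers."},
            {"role": "user",
             "content": "Write a two-sentence hook for a chapter on algorithms."},
        ],
    )

    text = response.choices[0].message.content
    print(text)
    print("constraint respected:", banned_phrase.lower() not in text.lower())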

Johannes Castner:

Cuz then I tried that, and in this particular case, this particular piece of work, it couldn't get past the unsung hero. But I tried it again in different ways, so maybe it's also improving over time; I'm not quite sure. I tried it again more recently and it didn't do that, so it does avoid phrases I tell it to avoid. In the beginning, though, with this unsung hero thing, if it had it at the start of the hook, the little paragraph, it would move it to the end, it would move it around, but it would never get rid of it. That was interesting, though it might not be true anymore. It's evolving, so it's difficult to pin down; it might have been a problem that's no longer a problem. It's possible. So that's very interesting, this coding part. The coding part is interesting because it's a task that's certainly difficult for humans, quite tricky, and difficult to learn.

Alan Tominey:

But it looks like it's got interpreters, because in some instances, certainly with Python, it does look like it's actually running the code it writes out and printing the output. Then again, maybe they just got that off GitHub or something like that; it's just the code,

Johannes Castner:

right. Yeah. Well, it's interesting: you can combine things, make it modular, let it use some of these tools, and it does seem to be quite modular. It seems not to be just one thing, just a large language model. And this goes in the direction of a more general type of AI, right? I guess that's also what people are claiming: that ChatGPT and large language models are one of the steps towards artificial general intelligence. This concept of having a general intelligence is kind of how we see ourselves, right? We are general: we can drive and we can cook and we can have a conversation, even all at the same time. And in some ways that's seen as the ultimate goal of artificial intelligence, to create something that can do all of these things at once. But basically it could just be all of these applications put together, modular parts assembled into one, and that then is general intelligence, I guess, if it can pass between them and know that it, yeah, but it doesn't need

Alan Tominey:

consciousness for that. But that's the thing as well, because then you get into the quandary of whether you want it to have its own self-interest, and what that would even be, because then it could be motivated. Sometimes when these things spit out waffle, from my point of view it's probably just that it's not trained to output nothing: it simply can't output a zero, it can't regress down to that value, maybe, or it's not intelligent enough to. When you learn about negotiation, about talking in meetings and sales and things like that, there's power in a pause. If someone says something, you don't immediately have to react; you can pause and let whatever has just been said hang in the air, and you can direct the conversation by using that. These models clearly don't have that ability, because they are simply churning to output something. So you would need to teach it some sort of motivation, or self-interest, or goal, an objective function, to get it to want to do that. So what does it want to do? Does it want to convince you that it is human, a real person? How do you put that into the model, you know?

Johannes Castner:

That's a very dicey topic, because as humans we evolved purpose: we have children, we have to eat, we have to do things to maintain our bodies. These models are not even embodied; they could be, essentially, in the form of robots, I suppose, but maybe that's not the most pressing limitation. The thing is, if we give it its own purpose, that purpose is somewhat arbitrary, because we have real reasons to have purpose: we have to raise children, we have to pay rent and whatnot, we have to have resources to survive. These algorithms don't feel pain, and I don't see a commercial reason why we would do it. Do you see a commercial reason for giving AI its own purpose?

Alan Tominey:

Not really. The only purpose I can conjecture would be that you try to get it to encourage its own use. You would hope that would motivate higher-quality output, that it would recognise the need to do well and have people actually want to use it, that kind of thing. But then the issue becomes that you can very quickly see it spiralling towards the algorithms people don't like, the ones that try to get you to click the link, stay on the platform and get riled up. So that's the issue: how do we point it at the good of humanity, rather than have it figure out that it's far better off annoying all of us to the point where we all start typing messages into it, you know?

Johannes Castner:

No, because eventually we'd shut it off, right? We'd shut the whole thing off; we'd have to. But this is a runaway problem, and some people say that we should give it our goals as its intrinsic goals, that we should basically give it our own purposes, transfer our sense of purpose, our needs and wants, to the algorithm. I think that's an interesting idea.

Alan Tominey:

It is, yeah. But then there's a real anthropological problem there. Across different cultures, people have different goals, different values. There's capitalist versus communist: do better for all of us, or do better for yourself because that will eventually trickle down to everyone else. That's a real sociological question you've got to answer there. Whose goals?

Johannes Castner:

Yeah, exactly. And what OpenAI is writing about it goes in this direction. Hold on, there's a text here that they wrote; I'm going to read it: "Define your AI's values, within broad bounds. We believe that AI should be a useful tool for individual people, and thus customizable by each user", so regardless of their culture, basically, "up to limits defined by society". And now here comes the weird word: society. Up to limits defined by society. Then the question becomes, what society, as you just alluded to, and what are the limits? If we say that, then we can say every individual can do it on their own terms. This is where they're going, this is where ChatGPT is now actively going; it's not just theoretical, they're saying they want to do this. But they will still have to define the society if they want limits defined by it. So the purpose will be that of humans, of each individual user in fact, but with a limitation placed by society, whatever society they're talking about.

Alan Tominey:

Well, yeah, exactly. But when Apple put Siri on their phones, a lot of people thought, who cares, it's just fluff, and yet people use Siri a lot more now. It becomes a throwaway thing: you set a little cooking timer for something you've stuck in the oven, you tell your phone to remind you about things. Alexa is the same, and Google with these little dots around your house. So if you couple these large models with some kind of easy interface for people to work with, it can become very, very useful. Could it help, maybe, if you put it into homes for older people and had it talk to them about things that happened a long time ago? There's therapy in old folks' homes where they sing the songs of people's youth, and it brings out memories and really helps to wake them up, to bring them to a higher level of awareness than they usually have if they have dementia. So can we put ChatGPT to work there, have it talk to people and reminisce with them as a sort of small therapy session? I imagine that would be quite useful for a lot of people, if it could just keep talking with them and reminiscing with them.

Johannes Castner:

Yeah, exactly, and I especially like that use case in societies that are ageing, where young people aren't around to talk to, such as Japan. I think it would be quite useful in such societies. Otherwise, maybe we should have humans talk to them, but then again there are costs involved in that, of course; people have to, yeah.

Alan Tominey:

But I think it certainly would help, because these things have been shown to reduce stress hormone levels and keep people calmer. If it's something that can switch on when it's needed, maybe even recognise through some sort of image recognition that a person is getting agitated, it could switch on some kind of system that starts to try to calm them down and alerts a nurse to come and talk to them. You certainly don't want all these people sat strapped to a chair with a robot talking to them all day; that's a dystopia right there.

Johannes Castner:

Yeah, a terrible dystopia, absolutely. But loneliness is a dystopia too, and maybe it can help somewhat. It's a tricky question. Well, I guess we have to cut it at this point because of time limitations; at some point I have to stop, since I generally think of the show as a sort of commute-length listen. But I do want to give you a chance to say something to the audience, something you want them to take away, and also tell them where they can keep up with you and stay connected with you if they want to. If you could do that,

Alan Tominey:

Sure. Yeah. One of the things I notice when I'm talking to people about these topics is that there's a lot of concern, a lot of "how do we stop it?". And I really feel that with technology like this, you don't stop it. It's out of the bag, it's happened, and what we have to do is cope with it, but also learn to adapt along with it. A lot of what people struggle with is the failure to adapt to new ideas and new things. Some of the things we talked about today are interesting ideas, and if they can help and they happen, then great: they'll benefit someone. Anyone who wants to get in touch with me, the best way is LinkedIn; I'm not really massively on social media, but look for Alan Tominey on LinkedIn and give me a shout. I love talking about this stuff, so I'm happy to talk about it any time.

Johannes Castner:

This show is published every Wednesday at 5:00 AM on the East Coast, 2:00 AM on the West Coast, and 10:00 AM in London. If you haven't done so, please subscribe so that you don't miss any of the shows, and give us a thumbs up on videos you enjoy and a thumbs down on videos you don't enjoy so much. And please tell us why you enjoy something and why you don't, to let us know what we should produce more of. Next week I will be meeting with Utpal Chakraborty, and we will be talking about one of the most esoteric areas that intersect with technology: quantum consciousness.

Utpal Chakraborty:

In fact, I have spent a lot of time doing research on this topic: how quantum mechanics can have some kind of relationship with our consciousness.