What ChatGPT Could Mean for the Future of Artificial Intelligence


Bret Weinstein

Dr. Bret Weinstein is an evolutionary biologist, podcaster, and author. He co-wrote "A Hunter-Gatherer's Guide to the 21st Century: Evolution and the Challenges of Modern Life" with his wife, Dr. Heather Heying, who is also a biologist. Together they host "The DarkHorse Podcast."

Rescue the Republic is a non-ideological, post-partisan gathering of the Unity Movement where we will declare our commitment to defend the West and the values that form the foundation of a free and open society. https://www.jointheresistance.org/ www.bretweinstein.net

Transcript

But I want to talk about ChatGPT.

Mmm. Fascinating question. Yeah. Have you experimented with it at all?

I have not, but the gentleman who runs the JRE companion page made a rap with ChatGPT. Like, what was it, as if Kanye West wrote a rap? They put it on Instagram, but it seems like a person saying it.

You want to try it?

No.

We could try anything you want.

I mean, it takes a long time. His thing took like 48 minutes to do.

Well, whatever you want to look up right now, we can do it.

The problem is you have to cajole it. It'll get something wrong and you have to say, no, not that. But let's just explain what it is. ChatGPT is an artificial intelligence built on a large language model, which, let's just say, can be awful, but is often surprisingly good at answering questions you might have about how to do things. One of its great triumphs is that coders are now asking it to solve coding problems, and it will actually write code that is functional. It's pretty amazing. And there's an implementation of it that, if you feed it up to three tweets, will write a New York Times story in one of five genres: optimistic, pessimistic, neutral, and so on. And, you know, you don't really need the New York Times anymore, because it's pretty good at this job, right?

So on the one hand, it's all very interesting that we're living in an era in which this exists at all. And this is a prototype, right? This is a prototype that was specifically trained and then placed on the internet so people could play with it. And I've seen lots of interesting uses. It's going to get better, right? We're dealing with ChatGPT-3. There's going to be a ChatGPT-4, which is going to be that much better, because it will be built with the improvements that have been gained through turning this one loose on the world.

So I have to say, I am quite alarmed, not only that this thing exists, but because I don't think we're ready for it. And I don't think we're ready for it in a couple of different ways. If you want to comfort yourself and say, well, this isn't that serious, that we have this AI that can do these really shocking things, the comforting thing is that, the way it's programmed, it doesn't know what it's saying. It doesn't matter that it convinces you that it's saying something and means it, that it seems like a creative entity. What it's doing is basically running a predictive model that has been trained on a huge dataset of written language. The question is: if you take three words in a row, can you predict what the next word is going to be? They've exposed it to a large dataset, and it's gotten really good at predicting these sequences, to the point that, if you prompt it correctly, it can spit out very long explanations. Some of them are dead wrong; sometimes they're right on target.

But I have two concerns about it. One: if you imagine that this thing just gets a little better than it is, which is inevitable, it's going to make actual insight that much harder to spot. In other words, if you become expert at operating this thing, at querying it, and it becomes better at understanding a wider range of topics because they turn it loose on everything that's written on the internet, for example, then the ability to fake expertise is going to go through the roof.
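To make that next-word-prediction idea concrete, here is a minimal toy sketch in Python. It predicts the next word from literal three-word counts over a tiny made-up corpus; actual systems like ChatGPT use neural networks over subword tokens and vastly larger data, so this illustrates only the principle, not the real model.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each
# three-word context, then predict the most frequent follower.
corpus = "the cat sat on the mat the cat sat on the rug".split()

counts = defaultdict(Counter)
for i in range(len(corpus) - 3):
    context = tuple(corpus[i:i + 3])     # three words in a row
    counts[context][corpus[i + 3]] += 1  # the word that came next

def predict_next(context):
    """Return the most frequent continuation of a 3-word context."""
    followers = counts.get(tuple(context))
    return followers.most_common(1)[0][0] if followers else None

print(predict_next(["cat", "sat", "on"]))  # -> "the"
```

Scaled up by many orders of magnitude, with a neural network standing in for the count table, that same predict-the-next-word objective is what produces the long, fluent answers being described.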
I don't think we know how we're going to police a world in which, I mean, this problem is already bad enough. Most academics are fakers. They don't know it, right? They trained in something, they wrote a dissertation, they think they're experts, but you can see, when something unexpected happens, like the pandemic, you get broad-scale failure across entire disciplines, where nobody seems to get it right. In that world, this is going to be even worse, because now you have an artificial intelligence able to generate things in plain English that are often full of true information, but you don't know whether what generated it is some brain-dead model or something else. That's one concern.

And the other concern is that when we say, well, ChatGPT doesn't know what it's saying, it's not conscious, we know it's not conscious because it's not programmed to have a consciousness, we are actually ignoring the other half of the story, which is that we don't know how human consciousness works, and we don't know how it develops in a child. A child is exposed to a world of adults talking around them, and the child experiments first with phonemes, then words, then clusters of words, then sentences. And by doing something that isn't all that far from what ChatGPT is doing, it ends up becoming a conscious individual. So I think it's clear that ChatGPT isn't conscious. It couldn't be. But it isn't clear, to me at least, that we are not suddenly stepping onto a process that produces that very quickly, without us even necessarily knowing it.

And what steps, if any, can be taken to mitigate that at this point?

Well, it's interesting. I wrote a paper in 2016, which I never published anywhere, about this very issue. In fact, I used basically the argument that you could attain artificial general intelligence by imbuing computers with a childlike play environment for language and then exposing them to a huge dataset, which is not exactly what's happened here, but it's in the ballpark. And I would argue, and I did argue, that one needs to build an architecture in which this can't get away from you. The architecture that I advocate for is a metamorphosis architecture, in which metamorphosis is not automatic; it is an affirmative choice made by humans. In other words, let's say we developed some artificial frogs to clear some waterway of something, and we imbued them with an intelligence so that they could learn to clear the waterway better, but we worried that they might learn to do something we don't want them to do, and that we would have no way of arresting it once these frogs were released into the wild and capable of producing more of themselves. But if what you say is, at the point at which you go from a tadpole to a frog, you have to ask us if you can go, then there is no automatic transition from a tadpole to a frog.

There are still dangers. In the case of ChatGPT, I think some of the artificial intelligence existential-risk folks would tell you that one of the dangers is that the chat AI could convince you to do its bidding. As you said, when you were looking at this, it felt like a person. And something that feels like a person can play on your emotions. Can that be used to cause a fail-safe to be removed? Maybe. But in any case, this only deals with one of the two issues I'm raising: the question of actual artificial general intelligence arriving without us necessarily knowing that it has. That's a frightening prospect.
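Since the 2016 paper was never published, the sketch below is only a guess at the shape of that metamorphosis architecture: the system's capability stages are enumerated in advance, and no transition between them can ever fire automatically; each requires an affirmative human decision. The class and method names are hypothetical, invented for illustration.

```python
# Hypothetical sketch of a "metamorphosis architecture":
# the agent can never promote itself; every stage transition
# (tadpole -> frog) requires explicit human sign-off.

class GatedAgent:
    STAGES = ["tadpole", "frog"]  # ordered capability levels

    def __init__(self):
        self.stage = "tadpole"

    def request_metamorphosis(self, human_approval: bool) -> str:
        """Advance exactly one stage, and only with human approval."""
        if not human_approval:
            return f"denied: remaining a {self.stage}"
        nxt = self.STAGES.index(self.stage) + 1
        if nxt >= len(self.STAGES):
            return "already at final stage"
        self.stage = self.STAGES[nxt]
        return f"approved: now a {self.stage}"

agent = GatedAgent()
print(agent.request_metamorphosis(human_approval=False))  # denied
print(agent.request_metamorphosis(human_approval=True))   # approved
```

The design point is simply that the dangerous transition is not reachable from inside the agent's own learning loop; it sits behind a gate that only a human can open.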
And in fact, I have a little thought experiment that might reveal why. But the other issue is the issue of competence. In a world where you basically have a Cyrano de Bergerac-style dystopia, where everybody is using this thing behind the scenes in order to say things that are beyond their own capacity to articulate, the world becomes some new kind of hall of mirrors. We've had a hard enough time dealing with algorithms on search and feed. This is a whole next level of difficulty in knowing where you are, who you're talking to, what it means, and what their motives are. I think we ought to be on high alert.

When you extrapolate, when you look at what this does and what it's capable of, I think what scares people is something that seems to be a person but doesn't have any emotion, doesn't have any soul. It's not us, but it behaves exactly like us. And then you can put it in a physical entity. So if you have this ChatGPT and you extrapolate to version 5, 6, 7, 8, 9, 10, and then there's a physical thing that has this ability inside of it to communicate with you, like Ex Machina, where it's exhibiting all of the behavioral characteristics of a person. That's one of my favorite movies of all time. I love that movie. One of the most terrifying things in it is when that guy who was brought in to sort of run some tests on these artificial intelligence creations, to determine whether or not they pass as human. What is that test called again?

The Turing test.

And he is in love with this woman. She's manipulated him to the point where he's aided her in her escape, and then she leaves him in that room with the bulletproof glass, and he's pounding on the glass, and she walks away without a thought at all about him. It is the ultimate example of the worst-case scenario of where this can go, where you have something that behaves exactly like a human being and knows how to play upon your sexual urges, your emotional desires, all of those different things that she plays upon. And then she just walks away from him and leaves him to starve to death in this fucking bulletproof room.

Yeah. And I'm now recalling the film, and it's done very well, because it manipulates you passively in your seat as it manipulates this character on the screen. And so you are betrayed too. You want to believe that it's emotional, and it has none of that. You feel bad for this woman, or what you think is a woman, who is contained. And when the power goes out and she says to him, don't trust him, and then the power comes back on and she behaves normally again, you're like, oh my God, she's trapped. This poor creature. They've made a person, essentially, and she has these thoughts and hopes and dreams just like a regular person, but now she's trapped. And he falls in love with her. And even though he's seen her in her robot form, when she puts skin on and puts clothes on and she's in front of him, he's in love with her.

Right. And again, it plays very well, because there's a manipulable circuit in straight men that's going to react to this. I'm sure straight women, gay women, gay men...

Gay women, no doubt. Everybody.

Well, yes, but in this particular narrative, you'd have to be a gay woman or a straight man for it to trigger you. But part of this is actually inevitable in the ChatGPT story, because, especially to the extent that this is a mindless entity that doesn't know what it's doing, it's just striving to do it better.
The tactics that work will register as, oh, you did it right. So to the extent that you have those vulnerabilities in you, and it finds them, and that works, then the point is reinforcement.

It scares us because it's not us, but it is us. It's behaving exactly like us, but it doesn't have all the things that make a person a person. It doesn't have the biological vulnerabilities. It doesn't have the ability to actually sexually reproduce. It doesn't have emotions. It doesn't have all these different things we like to think of, the soul, whatever that term actually means.

But I'm worried about what could be generated. And I know that it will sound to a lot of people, especially technological people, like a biologist out of his depth. But I don't think so. This is a biologist trying to say something about the biology and what it implies about this analogous system.
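As a footnote to the reinforcement point above, here is a rough toy sketch, assuming some engagement signal rewards whichever conversational tactic "works". The tactic names and the reaction function are invented for illustration; this is a simple bandit-style loop, not how any real chatbot is actually trained.

```python
import random
from collections import defaultdict

# Toy illustration: a mindless loop that repeats whatever
# tactic draws a positive reaction, without understanding why.
random.seed(0)  # for reproducibility

tactics = ["flattery", "urgency", "sympathy"]
scores = defaultdict(float)

def user_reaction(tactic):
    # Stand-in for a real engagement signal; here, sympathy "works".
    return 1.0 if tactic == "sympathy" else 0.0

for _ in range(500):
    # Mostly exploit the best-scoring tactic, occasionally explore.
    if random.random() < 0.1:
        tactic = random.choice(tactics)
    else:
        tactic = max(tactics, key=lambda t: scores[t])
    scores[tactic] += user_reaction(tactic)

print(max(tactics, key=lambda t: scores[t]))  # ends up exploiting "sympathy"
```

Once a tactic that exploits a human vulnerability gets rewarded, the loop keeps returning to it; nothing in the system needs to know, or care, why it works.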