Joe Rogan & Lex Fridman - The Short Term Threats of Artificial Intelligence

Lex Fridman


Lex Fridman is a scientist and researcher in the fields of artificial intelligence and autonomous vehicles and host of "The Lex Fridman Podcast." www.lexfridman.com


Transcript

But to get back to artificial intelligence, so the idea is that there are two camps. There's one camp that believes that, with the exponential increase in technology, once artificial intelligence becomes sentient, it could eventually improve upon its own design and literally become a god in a short amount of time. And then there's the other school of thought that thinks that is so far outside the realm of what is possible today that even the speculation of this eventually taking place is kind of ludicrous to imagine. Right, exactly. And a balance needs to be struck, because I'd like to talk about the short-term threats that are there, and those are really important to think about. But the long-term threats, if they come to fruition, will overpower everything, right? That's really important to think about. What happens is, if you think too much about the encroaching doom of humanity, there's some aspect to it that is paralyzing, where it turns you off from actually thinking about these ideas. There's something so appealing about it. It's like a black hole that pulls you in. And if you notice, folks like Sam Harris and so on spend a large amount of the time talking about the negative stuff about something that's far away. Not to say it's wrong to talk about it, but they spend very little time on the potential positive impacts in the near term, and also the negative impacts in the near term. So let's go over those. Yep, fairness. So the more and more we put decisions about our lives into the hands of artificial intelligence systems, whether you get a loan, or in an autonomous vehicle context, or in terms of recommending jobs for you on LinkedIn, all these kinds of things, the idea of fairness, of bias in these machine learning systems, becomes a really big threat. Because the way current artificial intelligence systems function is they train on data. So there's no way for them to somehow gain a greater intelligence than the data we provide them with. We provide them with actual data. And so they carry over, if we're not careful, the biases in that data, the discrimination that's inherent in our current society as represented by the data. They'll just carry that forward. I guess so. So there are people working on this, more so to show the negative impacts in terms of getting a loan, or saying whether this particular human being should be convicted of a crime or not. There are ideas there that can carry over, you know, in our criminal system there's discrimination, and if you use data from that criminal system to then assist the deciders, the judges, juries, lawyers, in making a decision of what kind of penalty a person gets, they're going to carry that discrimination forward. So you mean like racial, economic biases? Racial, economic, yeah. Geographical. And that's, I wouldn't say it's that exact problem, but you're aware of it because of the tools we're using. So, to get to the two ways these systems are trained, I'd like to talk about neural networks with you, Joe. Sure, let's do it. So with the current approaches, there have been a lot of demonstrated improvements, exciting new advancements in artificial intelligence, and those for the most part have to do with neural networks. Neural networks have been around since the 1940s. The field has gone through two AI winters, where everyone was super hyped and then super bummed, and super hyped again and bummed again. And now we're in this other hype cycle.
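To make the fairness point concrete, here is a minimal sketch, not taken from the conversation, of how a model trained on historically biased loan decisions simply reproduces that bias. The data, the feature names, and the neighborhood-style proxy attribute are all hypothetical.

```python
# Minimal sketch: a classifier trained on biased historical loan decisions
# carries that bias forward. All data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50, 15, n)        # applicant income, arbitrary units
group = rng.integers(0, 2, n)         # proxy attribute, e.g. neighborhood

# Historical labels: past approvals penalized group 1 regardless of income.
approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.3))).astype(int)

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

# The trained model reproduces the discrimination: identical incomes,
# different predicted approval probability depending only on the proxy group.
same_income = np.array([[50.0, 0], [50.0, 1]])
print(model.predict_proba(same_income)[:, 1])
```

Nothing in the training step is told to discriminate; the bias comes entirely from the historical labels, which is the point being made about carrying society's data forward.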
And what neural networks are is collections of interconnected simple compute units. They're all similar. It's inspired by our own brain: we have a bunch of little neurons interconnected. And the idea is these interconnections start out essentially random, but if you feed the network some data, they'll learn to connect, just like they do in our brain, in a way that interprets that data. They form representations of that data and can make decisions. But there are only two ways to train those neural networks that we have now. One is we have to provide a large data set. If you want that neural network to tell the difference between a cat and a dog, you have to give it 10,000 images of a cat and 10,000 images of a dog. You need to give it those images. And who tells you what a picture of a cat and a dog is? It's humans. So it has to be annotated. So as teachers of these artificial intelligence systems, we have to collect this data, we have to invest a significant amount of effort to annotate that data, and then we teach neural networks to make that prediction. What's not obvious there is how poor of a method that is for achieving any kind of greater degree of intelligence. You're just not able to get very far beyond very specific, narrow tasks: cat versus dog, or should I give this person a loan or not. These kinds of simple tasks. I would argue autonomous vehicles are actually beyond the scope of that kind of approach. And then the other realm where neural networks can be trained is if you can simulate the world. So if the world is simple enough, or conducive to being formalized sufficiently that you can simulate it. A game of chess is just rules. The game of Go is just rules. So you can simulate it. The big exciting thing about Google DeepMind is that they were able to beat the world champion at Go by doing something called competitive self-play, which is they have two systems play against each other. They don't need the human; they play against each other. And that's a beautiful idea, super powerful and really interesting and surprising. But that only works on things like games and simulations. So now, sorry to keep going to UFC analogies, but if I wanted to train a system to become the world champion, to beat, what's his name, I'm not going to get it right, I could use the UFC video game. I could create two neural networks that use competitive self-play against each other in that virtual world, and they could become state of the art, the best fighter ever in that game. But transferring that to the physical world, we don't know how to do that. We don't know how to teach systems to do stuff in the real world. Some of the stuff that freaks you out often is the Boston Dynamics robots. Yeah. Yeah. Every day I go to their Instagram page and just go, what the fuck are you guys doing? Engineering our demise. Marc Raibert, the CEO, spoke at the class I taught. He calls himself the bad boy of robotics. So he's having a little fun with it. He should definitely stop doing that. Don't call yourself a bad boy of anything. That's true. How old is he? Okay. He's one of the greatest roboticists of our generation. That's wonderful. However. Don't call yourself a bad boy, bro. Okay. See, you're not the bad boy of MMA. Definitely not. Okay. But. I'm not even the bad man. Bad man. Definitely not a bad boy. Okay. It's so silly. Yeah. Those robots are actually functioning in the physical world. That's what I'm talking about.
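The supervised recipe described above, collect images, have humans annotate them, then train a network to predict the label, looks roughly like this minimal sketch. It assumes PyTorch and a hypothetical data/train folder containing human-labeled cat/ and dog/ subdirectories.

```python
# Minimal sketch of supervised training: human-annotated images in,
# a narrow cat-vs-dog predictor out. Paths and sizes are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

dataset = datasets.ImageFolder(
    "data/train",                      # hypothetical folder of labeled images
    transform=transforms.Compose(
        [transforms.Resize((64, 64)), transforms.ToTensor()]
    ),
)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = nn.Sequential(                 # a deliberately tiny network
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128),
    nn.ReLU(),
    nn.Linear(128, 2),                 # two classes: cat, dog
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:      # labels exist only because humans annotated them
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

However well this works on cats and dogs, it stays a narrow, data-bound task, which is the limitation being described.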
And they are using something that was, I think, coined in the 70s or 80s: the term "good old-fashioned AI." Meaning there is nothing going on that you would consider artificially intelligent, which is usually connected to learning. So these systems aren't learning. It's not like you dropped a puppy into the world and it kind of stumbles around and figures stuff out and gets better and better and better. That's the scary part. That's the imagination. That's what we imagine: we put something in this world, at first it's harmless and falls all over the place, and all of a sudden it figures something out and, like Elon Musk says, it moves so fast you can only see it with a strobe light. There's no learning component there. This is purely hydraulics and electric motors, with 20 to 30 degrees of freedom, running hard-coded control algorithms for the task of how to move efficiently through space. This is the task roboticists work on. A really, really hard problem is robotic manipulation: taking an arm, grabbing a water bottle and lifting it. Super hard, somewhat unsolved to this point, and learning to do that, we really don't know how to do it. Right. But what we're talking about essentially is the convergence of these robotic systems with artificial intelligence systems. And as artificial intelligence systems evolve, and this convergence becomes complete, you're going to have the ability to do things like the computer that beat humans at Go. That's right. You're going to have creativity. You're going to have a complex understanding of language and expression. And you're going to have, perhaps, even engineered things like emotions, like jealousy and anger. I mean, it's entirely possible that, as you were saying, we're going to have systems that could be biased the way human beings are biased towards people of certain economic groups or certain geographic groups, and they would use the data they have to discriminate just like human beings discriminate. If you have all that in an artificial intelligence robot that has autonomy and the ability to move, this is what people are totally concerned with and terrified of: all of these different systems are currently in semi-crude states, they can't pick up a water bottle yet, they can't really do much other than backflips, but, I mean, I'm sure you've seen the more recent Boston Dynamics ones doing parkour. Yeah. I saw that one the other day. Yeah, they're getting better and better and better, and it's increasing every year. Every year they have new abilities.
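The hard-coded control being described, as opposed to learning, can be illustrated with a minimal sketch of a hand-tuned PD controller driving a single joint toward a target angle. The gains, the unit-inertia toy model, and the single degree of freedom are all hypothetical simplifications of what a real 20-to-30-degree-of-freedom robot runs.

```python
# Minimal sketch of "good old-fashioned" control: a fixed, hand-tuned law.
# Nothing here is learned; an engineer chose the gains kp and kd.
def pd_control(target, angle, velocity, kp=8.0, kd=1.5):
    """Return a motor torque from a fixed proportional-derivative law."""
    return kp * (target - angle) - kd * velocity

# Toy simulation of one joint (unit inertia, no gravity), hypothetical values.
angle, velocity, dt = 0.0, 0.0, 0.01
for step in range(500):
    torque = pd_control(target=1.0, angle=angle, velocity=velocity)
    velocity += torque * dt
    angle += velocity * dt

print(round(angle, 3))   # settles near the 1.0 radian target
```

Every behavior comes from fixed rules like these that engineers wrote and tuned by hand, which is why there is no learning component that could "figure something out" on its own.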