Joe Rogan Discusses Self-Driving Car Deaths with Scientist Lex Fridman

Lex Fridman

9 appearances

Lex Fridman is a scientist and researcher in the fields of artificial intelligence and autonomous vehicles and host of "The Lex Fridman Podcast." www.lexfridman.com

Transcript

What's the matter? I don't remember if we brought this up last time, but I just remembered seeing this video where you were playing guitar while you were driving. Yep. Well, you shouldn't do that, dude. There's a reason I was doing it. Why were you doing that? It's on a test track. Oh, what kind of car is that? Looks like a Lincoln. Lincoln MKZ, yes. Oh, they do that? The Lincolns do that? No, we converted it, and that's our code controlling the car. Wow. And I'm playing... That is crazy. So you converted this car to drive autonomously? Autonomously, yeah. Wow. And what exactly do you have to do to a car to change... Because that car does not have the capacity to do anything like that, right? Am I correct? No, no, no, absolutely not. But you are absolutely correct. The first part is being able to control the car with a computer, which means converting it to be drive-by-wire, so you can control the steering and the braking and the acceleration, basically be able to control it with a joystick. And then you have to put laser sensors all around the car? Is that what you're doing? Any kind of sensor, and software.

What's the best kind of sensor? Is it optical, laser? A lot of debate on this. And this is the big... This is the throwdown between Elon Musk and everybody else. Oh, okay. So Elon Musk says the best sensor is camera. Everybody else... Well, everybody else says that at this time it's LIDAR, which are these lasers... Yes. That's the best sensor. So I'm more on the side of camera in this case, on Elon Musk's side. So here's the difference. Lasers are more precise. They work better in poor lighting conditions. They're more reliable. You can actually build safe systems today that use LIDAR. The problem is that they don't carry very much information. We use our eyes to drive, and camera is the same thing; cameras have just a lot more information. So if you're going to build artificial intelligence systems, the machine learning systems that learn from huge amounts of data, camera is the way to go, because you can learn so much more, you can see so much more. So the richer, deeper sensor is camera. But it's much harder. You have to collect a huge amount of data. It's a little bit more futuristic, so it's a longer-term solution. So today, to build a safe vehicle, you have to go LIDAR. Tomorrow, however you define tomorrow, Elon Musk says it's in a year, others say it's 5, 10, 20 years, camera is the way to go. That's the hard debate. There's a lot of other debates, but that's one of the core ones.

Basically, if you go camera like you do in the Tesla, there are seven cameras in your Tesla. Three looking forward, the rest all around. Some have one looking inside. No... you have the Model S? Yeah. Yeah. So that one doesn't have a camera that's looking inside. So it's all cameras plus radar and ultrasonic sensors. That approach requires collecting huge amounts of data, and they're doing that. They've driven now about 1.3 billion miles under Autopilot. Jesus. Yeah, it's a very large amount of data. You're talking about over 500,000 vehicles that have Autopilot; 450,000, I think, have the new version of Autopilot, Autopilot 2, which is the one you're driving. And all of that is data. So all of those edge cases, as they call them, all the difficult situations that occur, are feeding the machine learning system to become better and better and better. And the open question is how much better it needs to get to reach human-level performance.
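To make the drive-by-wire conversion described above a bit more concrete, here is a minimal Python sketch. The names (Actuation, DriveByWireBus, the policy callable) are hypothetical, not any real vehicle's API; the point is just the structure: once steering, braking, and throttle accept computer commands, the autonomy software reduces to a loop that reads sensors, asks a driving policy for a command, and writes it to the vehicle.

```python
from dataclasses import dataclass

@dataclass
class Actuation:
    """A single control command for the converted (drive-by-wire) car."""
    steering_angle_deg: float  # positive = left, negative = right
    throttle: float            # 0.0 to 1.0
    brake: float               # 0.0 to 1.0

class DriveByWireBus:
    """Hypothetical stand-in for the car's computer-controllable interface."""

    def read_sensors(self) -> dict:
        # A real system would return camera frames, LIDAR point clouds,
        # radar returns, wheel speeds, and so on.
        return {"speed_mps": 12.0, "camera_frame": None, "lidar_points": None}

    def send(self, cmd: Actuation) -> None:
        # A real system would write this onto the vehicle's control bus.
        print(f"steer {cmd.steering_angle_deg:+.1f} deg | "
              f"throttle {cmd.throttle:.2f} | brake {cmd.brake:.2f}")

def control_loop(bus: DriveByWireBus, policy, steps: int = 5) -> None:
    """Read sensors, let the driving policy decide, actuate. Repeat."""
    for _ in range(steps):
        sensors = bus.read_sensors()
        cmd = policy(sensors)  # perception and planning live inside `policy`
        bus.send(cmd)

# Trivial placeholder policy: hold the lane, light throttle.
control_loop(DriveByWireBus(), lambda s: Actuation(0.0, 0.1, 0.0))
```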
I think one of the big assumptions of us human beings is that we think driving is actually pretty easy, and we think that humans suck at driving. Those two assumptions. We think driving, you know, you stay in the lane, you stop for the stop sign, it's pretty easy to automate. And the other one is we think humans are terrible drivers, so it'll be easy to build a machine that outperforms humans at driving. Now, I think there are a lot of flaws behind that intuition. We take for granted how hard it is to look at a scene like this, everything you just did, picked up and moved around some objects. It's really difficult to build an artificial intelligence system that does that. To be able to perceive and understand the scene enough to understand the physics of it, all these objects, how to pick them up, their texture, their weight, to understand glasses folded and unfolded, an open water bottle, all those things, is common-sense knowledge that we take for granted. We think it's trivial, but there is no artificial system in the world today, nor will there be for perhaps quite a while, that can do that kind of common-sense reasoning about the physical world.

Add to that pedestrians. So add some crazy people in this room right now to the whole scene. Right. And being able to notice, like, this guy's an asshole. Look at him. What is he doing? What is he doing? Get off that skateboard. Oh, Jesus, he's in traffic. Yep. And then considering, not that he's an asshole, he's a respectable skateboarder, that in order to make him behave a certain way, you yourself have to behave a certain way. So it's not just that you have to perceive the world. You have to act in a way that asserts your presence in this world. You have to take risks. So in order to make the skateboarder not cross the street, you have to accelerate if you have the right of way. And there's a game-theoretic thing, a game of chicken, to get right. I mean, we don't even know how to approach that, as an artificial intelligence research community and also as a society. Do we want an autonomous vehicle that speeds up in order to make a pedestrian not cross the street, which is what we do all the time? We have to assert our presence. If there's a person who doesn't have the right of way, who begins crossing, we're going to either maintain speed or potentially speed up if we want them to not cross. So that game, getting that right, that's a dangerous game for a robot. And if that, God forbid, leads to a fatality, for us as a society to rationally reason about that and think about that. I mean, a fatality like that could basically bankrupt the company.

There's a lawsuit going on right now about an accident in Northern California with Tesla. Yeah. Are you aware of that one? Yeah. What were the circumstances of that one? So there was, I believe, a fatality in a Tesla in Mountain View where, and this is a common problem for all lane-keeping systems like Tesla Autopilot, there was a divider in the highway, and basically the car was driving along the lane, and then the car in front moved to an adjacent lane and this divider appeared. So you have to now steer to the right, and the car didn't, and it went straight into the divider. Oh, wow. But basically what that boils down to is the car drifted out of lane, or didn't adjust properly to the lane. And those kinds of things happen.
And this is because the person was allowing the Autopilot to do everything? Nope. You can't. So we have to be extremely careful here. I don't know the really deep details of the case. I'm not sure exactly how many people do. So there's a judgment on what the person was doing, and then there's an analysis of what the system did. What the system did is it drifted out of lane. The question is, was the person paying attention, and was there enough time given for the person to take over and, if they were paying attention, to catch the vehicle and steer back onto the road? As far as I know, the only information they have is hands on the steering wheel, and they were saying that for something like half a minute leading up to the crash, the hands weren't on the steering wheel. Basically, they're trying to infer whether the person was paying attention or not. But we don't have the information exactly. Where were their eyes? You can only make guesses, as far as I know.

So the question is, and this is the eyes-on-the-road thing, because I think I've heard you on a podcast saying you're tempted to sort of look off the road in your new Tesla, or at least become a little bit complacent. That's your worry. The worry is that you just rely on the thing, that you would relax too much. But what would that relaxation lead to? The problem is if something happened. When you're driving, I mean, we've discussed this many times on the podcast, one of the reasons people have road rage is that you're in a heightened state, because cars are flying around you and your brain is prepared to make split-second decisions and moves. The worry is that you would relax that because you're so comfortable with that thing driving. Everybody I know that's tried it says you get really used to it just driving around for you. So the question is, what happens when you get used to it? Do you start looking off road? Do you start texting more? Do you start watching a movie, et cetera? That's really an open question.

For example, we just published a study from MIT on what people in our dataset are actually doing. We collected this dataset of 300,000 miles in Teslas; we instrumented all these Teslas and watched what people are actually doing, and whether they're paying attention when they disengage the system. So there's a really important moment here, we have 18,000 of those, when the person catches the car and disengages Autopilot. Tesla uses this moment as well. It's a really important window into difficult cases. Some percentage of those, a small percentage, about 10%, are what we call tricky situations: situations where you have to immediately respond, like drifting out of lane, a stopped car in front, and so on. The question is, are people paying attention during those moments? In our dataset, they were paying attention. They were still remaining vigilant. Now, in our dataset, Autopilot was, quote unquote, encountering tricky situations every 9.2 miles. So you could say it was failing every 9.2 miles. That is one of the reasons we believe people are still remaining vigilant: it's regularly and unpredictably sort of drifting out of lane or misbehaving, so you don't over-trust it, you don't become too complacent. The open question is, when it becomes better and better and better and better, will you start becoming complacent?
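As a rough sketch of the kind of disengagement analysis described here, consider the Python below. The record schema and numbers are made up for illustration; this is not the MIT study's actual pipeline. The idea is simply: given logged disengagements flagged as tricky or routine, plus total miles driven, compute how often tricky situations occur and how often the driver was attentive in those moments.

```python
from dataclasses import dataclass

@dataclass
class Disengagement:
    """One moment where the driver took over from the automation (hypothetical schema)."""
    tricky: bool            # did the situation require an immediate response?
    driver_attentive: bool  # were the driver's eyes on the road at takeover?

def summarize(disengagements: list[Disengagement], total_miles: float) -> dict:
    """Rate of tricky situations and driver attentiveness during them."""
    tricky = [d for d in disengagements if d.tricky]
    attentive = sum(d.driver_attentive for d in tricky)
    return {
        "tricky_share": len(tricky) / len(disengagements),
        "miles_per_tricky_situation": total_miles / len(tricky),
        "attentive_share_in_tricky": attentive / len(tricky),
    }

# Toy data: 20 takeovers over 150 miles, a few of them tricky.
log = [Disengagement(tricky=(i % 7 == 0), driver_attentive=True) for i in range(20)]
print(summarize(log, total_miles=150.0))
```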
When it drives on the highway for an hour, an hour and a half, and instead of 9.2 miles make that 50 miles, 60 miles, do you start to over-trust it? And that's a really open question. Do you think, or do you anticipate, a time anywhere in the near future where you won't have to correct, where you will allow the car to do it because the car will be perfect? The car, first of all, will never be perfect. No car will ever be perfect. Autonomous vehicles will always, you think, require at least some sort of manual override? Yeah. Really? That's interesting that you're saying that, because you work in AI. What makes you think that that's impossible to achieve? Well, let's talk, because you're using the word perfection. I think perfection is... Okay, that's a bad word. So I guess you're implying... Let me see, will it achieve, because people are obviously not perfect, will it achieve a state of competence that exceeds the human being? And let's put it in a dark way: competence measured by fatal crashes. Yes. Yes, I absolutely believe so. And perhaps in the near term. Near term, like five years? Yeah, for me, five, ten years is near term. For Elon, in Elon Musk time, that's converted to one year. Have you met him? Yes, interviewed him recently. Fascinating cat, right? Yep. Got a lot of weird shit bouncing around behind those eyeballs. You don't realize until you talk to him in person, you're like, oh, you got a lot going on in there, man. Yeah, there's passion, there's drive. I mean, it's one of the... It's a hurricane of ideas. Yeah. And focus and confidence. Mm-hmm.

I mean, the thing is, in a lot of the things he does, which I admire greatly in any man or woman innovator, it's just boldly, fearlessly pursuing new ideas, jumping off the cliff and learning to fly on the way down. No matter what happens, he'll be remembered as one of the great innovators of our time, whatever you say about him. Maybe in my book Steve Jobs was as well, even if you criticize him, even if perhaps he didn't contribute significantly to the technological development of the company or the different ideas they pursued. Still, his brilliance was in all the products: the iPhone, the personal computer, the Mac, and so on. And I think the same is true with Elon. And yes, this space of autonomous vehicles, of semi-autonomous vehicles, of driver assistance systems, is a pretty tense space to operate in. There are several communities in there that are very responsible, but also aggressive in their criticism. In the automotive sector, obviously, since Henry Ford and before, there's been a culture of safety, of just great engineering. These are some of the best engineers in the world in terms of large-scale production. You talk about Toyota, you talk about Ford, GM, these people know how to do safety well. And so here comes Elon with Silicon Valley ideals, who throws a lot of it out the window and says, we're going to revolutionize the way we do automation in general. We're going to make software updates to the car once a week, twice a week, over the air, just like that. That makes people, the safety engineers and human factors engineers, really uncomfortable. Like, what do you mean you're going to keep updating the software of the car? How are you testing it? That makes people really uncomfortable. Why does it make them uncomfortable?
Because of the way you test a system in the automotive sector: you come up with a design of the car, every component, and then you go through really rigorous testing before it ever hits the road. The idea from the Tesla side is that they basically test the software in shadow mode, but then they just release it. So essentially the drivers become the testing, and then they regularly update it to adjust if any issues arise. That makes people uncomfortable because there's not a standardized testing procedure, there's not, at least, a feeling in the industry of rigor, because the reality is we don't know how to test software with the same kind of rigor that we've tested automotive systems in the past. So I think it's extremely exciting and powerful to approach automotive engineering, at least in part, with a software engineering perspective, just doing what's made Silicon Valley successful: updating regularly, aggressively innovating on the software side. So your Tesla, over the air, while we're sitting here, could get a totally new update. With the flip of a bit, as Elon Musk says, it can gain all new capabilities. That's really exciting, but that's also dangerous. And that balance we... Well, what's dangerous about it? That it'd be faulty software? Faulty, a bug. So the apps on your phone fail all the time; as a society we're used to software failing, and we just kind of reboot the device or restart the app. Most complex software systems in the world today, if we think outside of nuclear engineering and so on, are too complex to really thoroughly test. So thorough, complete testing, proving that the software is safe, is nearly impossible for most software systems. It's nerve-wracking to a lot of people because there's no way to prove that the new software update is safe.

So what is the process? Do you know how they create the software, update it, and then test it? How much testing do they do, and how much do they do before they upload it to your car? Yeah, so I don't have any insider information, but I have a lot of publicly available information, which is that they test the software in shadow mode, meaning they see how the new software compares to the current software by running it in parallel on the cars and seeing if there are disagreements, like seeing if there are any major disagreements, and bringing those up and seeing what... In parallel? I'm sorry, do you mean both programs running at the same time? Yes, at the same time, the original version actually controlling the car and the new update just... Making the same decisions? Making the same decisions without them being actuated, without actually affecting the vehicle's dynamics. And so that's a really powerful way of testing. I think the software infrastructure that Tesla has built allows for that, and I think other companies should do the same. That's a really exciting, powerful way to approach not just automation, not just autonomous vehicles or semi-autonomous vehicles, but just safety: basically, all the data that's on cars, bring it back to a central point where you can use the edge cases, all the weird situations in driving, to improve the system, to test the system, to learn, to understand where the car is used, misused, how it can be improved, and so on. That's extremely powerful.
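Here is a small Python sketch of the shadow-mode idea as described above: the current policy actually drives, a candidate policy sees the same sensor input, and only their disagreements are logged for engineers to review before the candidate is ever allowed to actuate. This is an illustration of the concept, not Tesla's implementation; the function names and the disagreement threshold are assumptions.

```python
def shadow_compare(sensors, current_policy, candidate_policy,
                   steering_disagreement_deg: float = 5.0):
    """Run both policies on the same input; actuate only the current one."""
    active = current_policy(sensors)     # this command goes to the car
    shadow = candidate_policy(sensors)   # this command is only recorded

    disagreement = abs(active["steering_deg"] - shadow["steering_deg"])
    log_entry = None
    if disagreement > steering_disagreement_deg:
        # Flag the scenario for offline review and retraining.
        log_entry = {"sensors": sensors, "active": active, "shadow": shadow}

    return active, log_entry

# Toy example: the candidate update wants to steer harder around an obstacle.
actuated, flagged = shadow_compare(
    {"scene": "divider ahead"},
    current_policy=lambda s: {"steering_deg": 0.0},
    candidate_policy=lambda s: {"steering_deg": 8.0},
)
print(actuated, flagged is not None)
```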
How many people do they have analyzing all this data? That's a really good question. So they have... The interesting thing about driving is most of it is pretty boring. Nothing interesting happens. So they have automated ways of extracting, again, what are called edge cases, these weird moments of driving. And once you have these weird moments, they have people annotate them. I don't know what the number is, but a lot of companies are doing this; it's in the hundreds and the thousands. Basically, humans annotate the data to see what happened. But most of what they're trying to do is automate that annotation, to figure out how the data can be automatically used to improve the system. So they have methods for that, because it's a huge amount of data. I think at the recent Autonomy Day a couple of weeks ago, this big Autonomy Day where they demonstrated the vehicle driving itself on a particular stretch of road, they showed off that they're able to query the data, basically ask questions of the data. The example they gave is a bike on the back of a car, a bicycle on the back of a car. And they're able to say, well, when the bicycle is on the back of a car, that's not a bicycle, that's just part of the car. And they're able to now look back into the data and find all the other cases, the thousands of cases that happened all over the world, in Europe, in Asia, in South America, in North America, and so on, pull all those examples, and then train the perception system of Autopilot to better recognize those bicycles as part of the car. So every edge case like that, they go through, saying, okay, the car freaked out in this moment. They find moments like this in the rest of the data and then improve the system. So this kind of cycle is the way to deal with problems, with failures of the system: every time the car fails at something, ask, is this part of a bigger set of problems? Can I find all those problems? And can I improve it with a new update? And that just keeps going. The open question is how many loops like that you have to take for the car to become really good, better than a human. Basically, how hard is driving? How many weird situations do you deal with every day when you manually drive? I don't know, there are like millions of cases; when you watch video, you see them. Somebody mentioned that they drive a truck, a UPS truck, past cow pastures, and they know that if there are no cows in the cow pasture, that means they're grazing. And if they're grazing, and I may not be using the correct terms, I apologize, not a cow guy, that means there may be cows up ahead on the road. There's just this kind of reasoning you can use to anticipate difficult situations. And we do that kind of reasoning about everything. Cars today can't do that kind of reasoning. They're just perceiving what's in front of them.
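As an illustration of the data-querying loop described in this exchange (the bicycle-on-the-back-of-a-car example), here is a hedged Python sketch. The frame schema and labels are invented for the example; the only point is the pattern: write a predicate for the confusing situation, sweep the fleet's logged frames for every match, and feed those frames back as training data with the corrected label.

```python
def bike_mounted_on_car(frame: dict) -> bool:
    """Predicate for the tricky case: a detected bicycle that is really
    part of a car (e.g., mounted on a rear rack). Schema is hypothetical."""
    boxes = frame["detections"]  # e.g., [{"label": "bicycle", "attached_to": "car"}, ...]
    return any(b["label"] == "bicycle" and b.get("attached_to") == "car"
               for b in boxes)

def mine_edge_cases(fleet_log: list[dict], predicate) -> list[dict]:
    """Sweep logged frames from the fleet, collect every match, and relabel
    them so the perception system learns the bicycle is part of the car."""
    mined = []
    for frame in fleet_log:
        if predicate(frame):
            frame["corrected_label"] = "car_with_mounted_bicycle"
            mined.append(frame)
    return mined  # would be added to the training set for the next update

# Toy fleet log with one matching frame.
log = [
    {"detections": [{"label": "bicycle", "attached_to": "car"}]},
    {"detections": [{"label": "pedestrian"}]},
]
print(len(mine_edge_cases(log, bike_mounted_on_car)))  # -> 1
```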