Joe Rogan | Robots and Deepfakes w/Lex Fridman

Lex Fridman

Lex Fridman is a scientist and researcher in the fields of artificial intelligence and autonomous vehicles and host of "The Lex Fridman Podcast." www.lexfridman.com

Transcript

So what do you picture? Because we have to look at Boston Dynamics robots. Because you said walking around. I'd like to get to a sense of how you think about, and maybe I can talk about where the technology is, of what that artificial intelligence looks like in 20 years, in 30 years, that will surprise you. So you have a sense that it has a human-like form? No, I have a sense that it's going to take on the form the same way the automobile has. If you go back and look at it, CT Fletcher has a beautiful old patina pickup truck. What did he say it was from? '58 or some shit? '60? Anyway, old-ass, cool, heavy metal, those sweeping round curves those old-school pickup trucks had. Now look at that, and look at a Tesla Roadster. What in the fuck happened? What in the fuck happened? I'll tell you what happened. They got better and better and better at it. They figured out the most effective shape. If you want a motherfucker to move, that little car, have you seen that video where they have the Tesla Roadster in a drag race, or in a race against a Nissan GT-R? It's a simulated video, but it's based on the actual horsepower of each car. I don't know if you've ever driven a Nissan GT-R, but it is a fucking insane car. It's insane. This is a CGI version of what it would look like if these two cars raced against each other. So the car on the left is the Nissan GT-R, do it from the beginning, there it goes. Look how fast this thing pulls away. The Nissan GT-R is fucking insanely fast, man, insanely fast. But this Tesla is so on another level, it's so in the future, that it's not even close. As the video gets further and further, you see how ridiculous it is. It's essentially lapping that car. It's going to go, look how far away it is. Bye, see ya. So you're saying the human race will be the Nissan here. Exactly. We're not even going to be the Nissan. We're going to be CT Fletcher's pickup truck. This is the future. 
There's not going to be any limitations in terms of bipedal form or wings, or not having wings if you can walk on it. I mean, there's not going to be any of that shit. And we might have a propulsion system, or it might. It's not going to be us. They might design some sort of organic propulsion system, like the way squid have and shit. Who the fuck knows? But it could also operate in the space of language and ideas. So for example, I don't know if you're familiar with, you know, OpenAI, it's a company. They created a system called GPT-2, which does language modeling. This is something in machine learning where you basically, unsupervised, let the system just read a bunch of text, and it learns to generate new text. And they've created this system called GPT-2 that is able to generate very realistic text, very realistic sounding text, not sounding, but when you read it, it makes... It seems like a person. It seems like a person. And the question there is, it raises a really interesting question. So talking about AI existing in our world, it paints a picture of a world in five, 10 years plus where most of the text on the internet is generated by AI. And it's very difficult to know who is real and who is not. Yeah. And one of the interesting things, I'd be curious from your perspective to get what your thoughts are. What OpenAI did is they didn't release the code for the full system. They only released a much weaker version of it publicly. So they only demonstrated it. And so they felt that it was their responsibility to hold back. Prior to that date, everybody in the community, including them, had open sourced everything, but they felt that now at this point, part of it was for publicity. They wanted to raise the question of when do we hold back on these systems? When they're so strong, when they're so good at generating text, for example, in this case, or at deepfakes, at generating fake Joe Rogan faces. Jamie just did one with me on Donald Trump's head. Yeah. 
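The unsupervised language modeling Lex describes, let a system read a bunch of text and learn to generate new text, can be sketched at toy scale. The snippet below is only a hedged illustration, nothing like GPT-2: the tiny corpus string and the function names are invented for the example. It counts which character tends to follow which in the corpus, then samples new text one character at a time from those counts.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Count, for each character, how often each other character follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length, rng):
    """Sample new text one character at a time from the learned counts."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # this character was never followed by anything in training
        chars = list(followers)
        weights = [followers[c] for c in chars]
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

# Invented toy corpus, purely for illustration.
corpus = "the robot reads the text and the robot writes the text"
model = train_bigram_model(corpus)
print(generate(model, "t", 20, random.Random(0)))
```

A real language model conditions on far more than one previous character, but the loop is the same shape: predict a distribution over the next symbol, sample, repeat.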
It's crazy. And this is something that Jamie can do. He's not even a video editor. Yeah. We were talking about it before the show. We could go crazy with it if you want. It is one of those things where you go, where is this going to be in five years? Because five years ago, we didn't have anything like this. Five years ago it was a joke. Right. Exactly. And then now it's still in the gray area between a joke and something that could, at scale, transform the way we communicate. Do you ever go to Kyle Dunnigan's Instagram page? Of course. One of the best, look at that, it's me. It's killing me. Look at this, it's killing me. This is my, it looks so much like I'm really talking. And it looks like what I would look like if I was fat. And it could, you know, of course that's really good and it could be improved significantly and it could make you say anything. Oh yeah, anything. So there's a lot of variants of this. Yeah. And then you take, like for example, full disclosure, I downloaded your face, the entire, like, have a dataset of your face. I'm sure other hackers do as well. How dare you? Yeah. So for this exact purpose, I mean, if I'm thinking like this and I'm very busy, Oh, for sure. Then there's other people doing exactly the same thing. For sure. Because you happen, your podcast happens to be one of the biggest datasets in the world of people talking in really high quality audio with high quality 1080p for most, for a few hundred episodes of people's faces. The lighting could be better. No, right. We're doing that on purpose. We're making it degraded. We're just fucking it up for you hackers. And the mic gets in, it blocks part of your face when you talk. Oh, that's right. So the best guests are the ones where they keep the mic low. The deepfake stuff I've been using removes the microphone within about a thousand iterations. It does it instantly. It gets the, it gets rid of it, paints over the face. Wow. Yeah. So you could basically make Joe Rogan say anything. 
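The "paints over the face" step described here is an inpainting problem. The sketch below is a crude stand-in for what a real deepfake pipeline does (those learn to reconstruct the occluded region): it fills a masked area, a stand-in "microphone" blob in an invented toy image, by repeatedly averaging in surrounding pixel values until the hole blends with its neighborhood.

```python
import numpy as np

def inpaint(image, mask, iterations=1000):
    """Replace masked pixels with the average of their 4 neighbors,
    over many iterations, so values diffuse in from the unmasked edges."""
    img = image.astype(float).copy()
    for _ in range(iterations):
        up = np.roll(img, -1, axis=0)
        down = np.roll(img, 1, axis=0)
        left = np.roll(img, -1, axis=1)
        right = np.roll(img, 1, axis=1)
        avg = (up + down + left + right) / 4.0
        img[mask] = avg[mask]  # only the masked region is repainted
    return img

# Invented toy example: a flat gray "face" with a dark occluding blob.
face = np.full((8, 8), 0.5)
face[3:5, 3:5] = 0.0          # the "microphone" blocking the face
mask = face == 0.0            # pixels to paint over
restored = inpaint(face, mask)
```

Here the hole simply converges toward the surrounding gray; a learned model instead hallucinates plausible face texture, which is why the result can look like nothing was ever there.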
Yeah. I think this is just one step before they finagle us into having a nuclear war against each other so they could take over the earth. What they're going to do is they're going to design artificial intelligence that survives off of nuclear waste. And so then they encourage these stupid assholes to go into a war with North Korea and Russia and we blow each other up, but we leave behind all this precious radioactive material that they use to then fashion their new world. And we come back a thousand years from now and it's just fucking beautiful and pristine with artificial life everywhere. No more, no more biological. It's too messy. Are you saying the current president is artificial life? I didn't say that. Okay. What's wrong with that? Because you're saying starting a nuclear war. No, I don't think he's, uh, just imagine if they did do that, they would have to have started with him in the seventies. I mean, he's been around for a long time and talking about being president for a long time. Maybe the electronics have been playing the long game and they got him to the position, and then they can use all this grand scale of time. It's not really a long game, the seventies. Well, you know all about that Internet Research Agency, right? You know about that. Uh, that's the Russian company that, uh, they're responsible for all these different Facebook pages where they would make people fight against each other. It was really, it's really kind of interesting. Um, Sam Harris had a podcast on it with, um, Renee, how do I say her name? DiResta. DiResta. Renee DiResta. And, uh, then she came on our podcast and talked about it as well. And they were, they were pitting these people against each other. Like they would have a, uh, pro-Texas-secession rally and directly across the street a pro-Muslim rally. And they would do it on purpose and they would have these people meet there and get angry at each other. And they would, they would pretend to be a Black Lives Matter page. 
They would pretend to be a white Southern pride page. And they were just trying to make people angry at people. Now that's human driven manipulation. Now imagine, this is my biggest worry of AI, is what Jack is working on, is the algorithm-driven manipulation of people, unintentional. Yeah. Trying to do good. But like those people, uh, Jack needs to do some jiu jitsu. It needs to be, it needs to be some open-minded, uh, you know, uh, like really understand society transparency to where they can talk to us, uh, to the people in general, how they're thinking about, uh, uh, managing these conversations. Because you talk about these groups, very small number of Russians are able to control very large amounts of people's opinions and the arguments. Yeah. An algorithm can do that 10x. Oh yeah. And more of us will go on Twitter and Facebook and digital media. Yeah, for sure. For sure. I think it's coming. I think, um, once people figure out how to manipulate that effectively and really create like an army of fake bots that will assume stances on a variety of different issues and just argue into infinity. We're not going to know. We're not going to know who's real and who's not. Well, it'll change the nature of our communication online. I think it might, it might have effects. This is the problem of the future. It's hard to predict the future. It might have effects where we'll stop taking anything online seriously. Yeah, for sure. And we might retreat back to, uh, communicating in person more. I mean, there, there could be effects that we're not anticipating, totally. And there might be some, uh, some ways in virtual reality, we can authenticate our identity better. Mm-hmm. So it'll change the nature of communication, I think. The more, the more you can generate fake text, uh, then the more the, uh, we'll distrust the information online, and the way that changes society is totally an open question. We don't know. But your, um, what are your thoughts about the OpenAI? 
Do you think they should release or hold back on it? Because this is, we're talking about AI. So artificial life, there's stuff you're concerned about. Some company will create it. Mm-hmm. So the question is, what is the responsibility of that, uh, short video, what it looks like when they just type a small paragraph in here, hit a button. It says how OpenAI writes what? What does it say? Shh. What did it say, Jamie? Convincing news stories. Okay. So you give it a- Brexit has already cost the UK economy at least 80 billion since, and then many industries, I believe, like, so much in front of them. So they just, it just fills in those things? Yeah. So basically you give it, you start the text. Oh, wow. And, uh, Joe Rogan Experience is the greatest podcast ever, and then let it finish the rest. Wow. And it'll start explaining stuff about why it's the greatest podcast. Is it accurate? Oh, look at this. It says, a move that threatens to push many of our most talented young brains out of the country and out of the campuses in the developing world. This is a particularly costly blow. Research by Oxford University warns that the UK would have to spend nearly 1.1 trillion on post-Brexit infrastructure. That's crazy that that's all done by an AI. Yeah. That's like spelling this out in this very convincing argument. The thing is, the way it actually works, algorithmically, is fascinating because it's generating it one character at a time. It has, as far, you know, you don't want to discriminate against the AI, but as far as we understand, it doesn't have any understanding of what it's doing, of any ideas it's expressing. It's really stealing ideas. It's like the largest scale plagiarizer of all time, right? It's basically just pulling out ideas from elsewhere in an automated way. And the question is, you could argue us humans are exactly that. We're just really good plagiarizers of what our parents taught us, of what came before, and so on. Yeah, we are for sure. Yeah. 
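The prompt-completion behavior described above, you start the text and the system finishes the rest one character at a time, can be illustrated with a deliberately tiny model. Everything below (the corpus string, the three-character context window, the function names) is an invented toy, not OpenAI's method: it memorizes which character follows each short context in the corpus, then greedily extends a prompt.

```python
from collections import Counter

def next_char_counts(corpus, context_len=3):
    """For each run of context_len characters, count what character comes next."""
    table = {}
    for i in range(len(corpus) - context_len):
        ctx = corpus[i:i + context_len]
        table.setdefault(ctx, Counter())[corpus[i + context_len]] += 1
    return table

def complete(prompt, table, context_len=3, max_new=40):
    """Extend the prompt greedily, one character at a time."""
    text = prompt
    for _ in range(max_new):
        ctx = text[-context_len:]
        if ctx not in table:
            break  # unseen context: the toy model has nothing to say
        text += table[ctx].most_common(1)[0][0]
    return text

# Invented toy corpus; the prompt gets finished from its statistics.
corpus = "the podcast is the greatest podcast ever made"
table = next_char_counts(corpus)
print(complete("the podc", table))
```

With only a few dozen characters of training text this mostly parrots the corpus back, which is exactly the "largest scale plagiarizer" point: the mechanism is next-symbol prediction, and apparent understanding is a question about scale, not about the loop.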
So the question is whether you hold that back. Their decision was to say, let's hold it. Let's not release it. That scares me. Do not release it. Yeah. Yeah. You know why it scares me? It scares me that they would think that, like, this mindset that they sense the inevitable. The inevitable meaning that someone's going to come along with a version of this that's going to be used for evil, that it bothers them that much, that it seems almost irresponsible for the technology to prevail, for the technology to continue to be more and more powerful. They're scared of it. They're scared of it getting out, right? Yeah. That scares the shit out of me. Like, if they're scared of it, they're the people that make it, and they're called OpenAI. I mean, this is the idea behind the group, where everybody kind of agrees that you're going to use the brightest minds and have this open source so everybody can understand it. And everybody can work on it, and you don't miss out on any genius contributions. And they're like, no, no, no, no. No more. And they're, obviously, their system currently is not that dangerous. They're using it. Yes. Well, not, yes, not that dangerous. But that, if you just saw that, that it can do that? But if you think through what that would actually create, I mean, it's possible it would be dangerous, but it's not, the point is they're doing it, they're trying to do it early to raise the question, what do we do here? Because, yeah, what do we do? Because they're directly going to be able to improve this now. Like, if we can generate, basically, 10 times more content of your face saying a bunch of stuff, what is that, what do we do with that? If Jamie, all of a sudden, on the side, develops a much better generator and has your face, does an offshoot podcast, essentially, a fake Joe Rogan Experience, and what do we do? Does he release that? 
You know, does he, because now we can, basically, generate content at a much larger scale that will just be completely fake. Well, I think what they're worried about is not just generating content that's fake, they're worried about manipulation of opinion. Right. Right? If they have all these people, like that little sentence that led to that enormous paragraph in that video was just a sentence that showed a certain amount of outrage, and then it let the AI fill in the blanks. You could do that with fucking anything. Like, you could just set those things loose. If they're that good and that convincing and they're that logical, man, this is not real. I'll just tell you. Ben Shapiro, AI creates, AI creates fake Ben Shapiro. It sounds as follows: hello there, this is a fake Ben Shapiro. With this technology, they can make me say anything, such as, for example, I love socialism. Healthcare is a right, not just a privilege. Banning guns will solve crime. Facts care about your feelings. I support Bernie Sanders. Okay. Yeah. Yeah. That's crazy. It's crude, but it's on the way. Yeah. It's on the way. It's all on the way. And we have to, this is the time to talk about it. This is the time to think about it. One of the funny things about Kyle Dunnigan's Instagram was that it's obviously fake. That's one of the funny things about it. It's like South Park's animation. It's like the animation sucks. That's half the reason why it's so funny, because they're just like these circles, you know, these weird looking creature things that move, and when the Canadians talk, their heads pop off at the top. And my hope is this kind of technology will ultimately just be used for memes as opposed to something that's going to start wars. Putin's going to be, he's going to be banging Mother Teresa on the White House desk in a video and we're going to be outraged. We're going to go to war over this shit. We're going to go to war over this shit.