Are Social Media Algorithms More Dangerous Than Killer Robots?


Lex Fridman


Lex Fridman is a scientist and researcher in the fields of artificial intelligence and autonomous vehicles and host of "The Lex Fridman Podcast." www.lexfridman.com


Transcript

There has been research done on making artificial insects that have little cameras inside of them, that look like a dragonfly or some sort of bug. They fly around and they can film things. And the thing that terrifies a lot of people is going more microscopic than that, more like robots inside the body that help you clear diseases and so on. And there are certain things even at the nano scale. So basically creating viruses. Creating new viruses, little tiny ones. Yeah. And the thing is, they can be pretty dumb, but on a mass scale you don't have to be intelligent to destroy all of human civilization.

So the real question about this artificial intelligence stuff that everybody seems to... The ultimate end of the line, what Sam Harris is terrified of, is it becoming sentient and making its own decisions, and deciding that we don't need people. That's what everybody is really scared of, right?

I'm not sure if everybody is scared of it. Yeah, they might be. I think that's the story that's the most compelling, the sexiest story, the one that the philosopher side of Sam Harris is very attracted to. Yeah. I am also interested in that story. But achieving sentience, I think, requires also creating consciousness, creating the kind of intelligence and cognition and reasoning abilities that's really, really difficult. I think we will create dangerous software-based systems before then, and those will be a huge threat. I think we already have them: the YouTube algorithm, the recommender systems of Twitter and Facebook and YouTube. From everything I know, having talked to those folks, having worked on it, the challenging aspect there is they have the power to control minds, what the mass population thinks. And YouTube itself and Twitter itself don't have the direct ability to control the algorithm exactly. One, they don't have a way to understand the algorithm, and two, they don't have a way to control it.
But what I mean by control is: control it in a way that leads, in aggregate, to a better civilization. Meaning sort of the Steven Pinker, "The Better Angels of Our Nature" idea, to encourage the better sides of ourselves. It's very difficult to control a single algorithm that recommends the journey of millions of people through the space of the internet. And I think the intelligence instilled in those algorithms will potentially have a much greater effect, either positive or detrimental, than sentient killer robots. I hope we get to sentient killer robots, because that problem I think we can work with. I'm very optimistic about the positive aspects of approaching sentience, of approaching general intelligence. There's going to be a huge amount of benefit, and I think there will be a lot of mechanisms that can protect against that going wrong. We know how to control intelligent systems when they're sort of in a box, when they're singular systems. When they're distributed across millions of people and there's not a single control point, that becomes really difficult. And that's the worry for me: the distributed nature of dumb algorithms on every single phone, sort of controlling the behavior, adjusting the behavior, adjusting the learning journey of different individuals.

So to me, the biggest worry and the most exciting thing is recommender systems, as they're called, at Twitter, at Facebook, at YouTube. YouTube especially. Like I think you mentioned, there's something special about videos in terms of educating and sometimes indoctrinating people. And YouTube has the hardest time; they have such a difficult problem on their hands in terms of that recommendation. This is a machine learning problem, and knowing the contents of tweets is much easier than knowing the contents of videos. Our algorithms are really dumb in terms of being able to watch a video and understand what's being talked about.
So all YouTube is looking at is the title and the description, and that's it, mostly the title. It's basically keyword searching. And it's looking at the clicking and viewing behavior of the different people. So it figures out that the flat earth supporters enjoy these kinds of videos, it forms a different kind of cluster, and it makes decisions based on that. By the way, it seems to make definitive decisions about, you know... YouTube doesn't like flat earth, I think.

Well, YouTube in particular, they're trying to do something about the influx of conspiracy theory videos and the indoctrination aspect of them. You know, one of the things about videos is, say someone makes a video on a very particular subject and they speak eloquently and articulately, but they're wrong about everything they're saying. They don't understand the science. Say they're talking about artificial intelligence, saying something about things that you are an expert in. Without being checked, without someone like you in the room who says, "That's not possible because of X, Y, and Z," they can just keep talking. So that's one of the things that they do: whether it's about flat earth, or dinosaurs being fake, or nuclear bombs being fake, they can just say these things, and they do it with an excellent grasp of the English language. They're very compelling in the way they speak. They'll show you pictures and images, and if you are not very educated and you don't understand that this is nonsense, and especially if you're not skeptical, you can get roped in. You can get roped in real easy, and that's a problem. And it's a problem with some of the people that work at these platforms: their children get indoctrinated, and they get angry.
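The mechanism described above, matching on title keywords plus clustering on co-viewing behavior, can be illustrated with a toy recommender. This is a hedged sketch, not YouTube's actual system (which is proprietary and vastly larger); the video IDs, titles, viewer sets, and the 50/50 weighting are all made up for the example:

```python
# Toy recommender driven only by the two signals in the transcript:
# (1) title keyword overlap and (2) co-viewing behavior.
# Illustrative only; not YouTube's actual algorithm.

def title_similarity(a: str, b: str) -> float:
    """Jaccard overlap of title words: crude 'keyword searching'."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa or wb else 0.0

def coview_similarity(viewers: dict, a: str, b: str) -> float:
    """Jaccard overlap of the user sets that watched each video.
    Videos watched by the same cluster of users score high; this is
    how a 'flat earth cluster' forms with zero content understanding."""
    va, vb = viewers.get(a, set()), viewers.get(b, set())
    return len(va & vb) / len(va | vb) if va or vb else 0.0

def recommend(seed: str, titles: dict, viewers: dict, k: int = 3):
    """Rank all other videos by an equal blend of the two signals."""
    scores = {
        vid: 0.5 * title_similarity(titles[seed], t)
             + 0.5 * coview_similarity(viewers, seed, vid)
        for vid, t in titles.items() if vid != seed
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Tiny hypothetical corpus
titles = {
    "v1": "flat earth proof the horizon is flat",
    "v2": "flat earth debunked by physics",
    "v3": "cute cat compilation",
}
viewers = {
    "v1": {"u1", "u2", "u3"},
    "v2": {"u2", "u3"},
    "v3": {"u9"},
}
print(recommend("v1", titles, viewers))  # v2 ranks first: shared words and shared viewers
```

The point of the sketch is the failure mode discussed in the conversation: nothing in it knows whether a video is true, only that its title shares words and its audience overlaps with the seed video.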
Now, what's interesting is they get indoctrinated also with right-wing ideology, and then people get mad that they're indoctrinated by Ben Shapiro videos. So they'll get pissed off about that. Well, but you're okay with left-wing. Why? Because you're left-wing. So then it becomes: okay, what is really a problem, and what is just something that's opposed to your personal ideology? And who gets to make that distinction? And that is where the arguments for the First Amendment come into play. Should these social media companies that have massive amounts of power and influence be held to the same standards as the First Amendment? Should these platforms be treated as essentially a town hall, where anyone can speak?

And there's a real problem that there's not that many of them. This is a real problem. Twitter is the place where people go to argue and talk about shit, and Twitter maybe has a competitor in Facebook, but YouTube certainly doesn't have a competitor. YouTube doesn't have any competitor. I mean, there's Vimeo, there's a few other platforms, but realistically, it's YouTube. YouTube is a giant, giant platform.

What is this? "Alphabet reports YouTube ad revenue for the first time. Video service generated $15.1 billion in 2019." Holy shit. In comparison, I just looked it up, Twitch ad revenue was supposedly around $500 to $600 million. Wow, that's a big difference. And what about Facebook? Facebook is stupendously valuable, probably way higher than that, but this is the first time it's been reported. By the way, I don't think Facebook pays like YouTube does. YouTube paid for my McDonald's burgers yesterday. Yeah, Facebook's not, right? Facebook, and Twitter and Instagram, I don't think they're paying you directly. There's a lot of calls to break up Facebook. I'm not... I mean, I'm on Facebook, but I'm not on it. I don't use it. It's just connected to my Instagram.
When I post something on Instagram, it goes to Facebook as well. I never go to Facebook. There's a Joe Rogan Facebook group that's a dumpster fire of brilliant folks, let me just put it that way. Look at this: "Facebook's revenues amounted to $21.8 billion." That was just the fourth quarter. Jesus Christ, just the fourth quarter. "The majority of which were generated through advertising. The company announced over 7 million active advertisers on Facebook during the third quarter of 2019." Probably, though, that also includes ads on Instagram. The thing with YouTube is just YouTube, not Google, not YouTube Premium, not anything else. Just the ad revenue.

And to be fair, the cash they have, they spend it on things like the Facebook AI Research group, a huge group of some of the most brilliant people doing general, open-ended research. Google Research, Google Brain, DeepMind are doing open-ended research. They're not doing the ad stuff; they're really trying to build. That's the cool thing about these companies having a lot of cash: they can bring in some of the smartest people and let them work on whatever, in case they come up with a cool idea. Like autonomous vehicles with Waymo. It's like, let's see if we can make this work. Let's throw some money at it, even if it doesn't make any money in the next 5, 10, 20 years. Let's make it work. That's the positive side of having that kind of money.

Yeah, that makes sense, as long as they keep doing those kinds of things. The real concern, though, is that they're actually severely influencing the democratic process. It's difficult. Certainly Jack Dorsey, of the CEOs I've interacted with, I think is one of the good guys. Yes, I agree. Yeah. I mean, he's... He wants a Wild West Twitter. Well, he doesn't, no. He wants a good Twitter. He's kind of thinking about the Wild West, but his idea is to have two. Oh, two Twitters. One that's filtered and one that's like... anything goes. Cool. Yeehaw.