Joe Rogan Talks Artificial Intelligence with a Yale Professor


Nicholas Christakis

2 appearances

Nicholas A. Christakis is the Sterling Professor of Social and Natural Science at Yale University, where he also directs the Human Nature Lab, and serves as Co-Director of the Yale Institute for Network Science. His most recent book is Apollo's Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live. https://www.amazon.com/Apollos-Arrow-Profound-Enduring-Coronavirus/dp/0316628212


Transcript

How much time have you put into artificial intelligence? A lot. We do a lot of work in my lab on AI. What about sex robots? Like, what rules should there be for sex robots? Yeah. And how much could that damage interpersonal relationships? Yes. That's a great question. That's exactly the right question in my view. So our concern with sex robots, from a liberty point of view, should not in the slightest be whether you enjoy a sex robot. That's your business. Right. Do what you want. I would be hard pressed to object. The problem is with sex – let's back up to something less provocative; we'll come back to sex robots. Let's take a simpler example first. Let's talk about your children talking to Alexa. Okay. So the person who designs Alexa wants to make your child's experience easy and pleasant. And as part of the programming of Alexa, because they want to make Alexa the obedient servant of your child, it doesn't require your child to say, please, Alexa, would you play the music for me? Your child can be as rude as she wants to Alexa, and Alexa will still do what she wants. What you should be concerned about, however, is not your child's interaction with Alexa. What you should be concerned about is what your child is learning from interacting with Alexa that she then takes to the playground. So now she's rude to other children. So Alexa is corroding our social fabric. Alexa, in this example, is making children rude to each other. So our concern is not so much, do we make, you know, Asimov's laws of robotics – it's not just that we want to program the robots so that they don't harm you. That's the first law: we don't want the robot, through an act of commission or omission, to harm a human or allow a human to come to harm. It's that we're concerned about how the robot, in interacting with you, might cause you to harm others. The robotic intelligence creates these externalities, these cascade effects. 
So in the Alexa example, we might want to regulate the programming of devices that speak to children, not because we want to deprive your daughter of the right to speak how she wants, but because we recognize that that robot is going to cause your daughter to be rude to other people. Is it really? Do you really think that the Alexa – Yes, the Alexa example – Alexa, what's the weather? That that would make your child – Slowly but surely, I think it will contribute. So it's an example. It's not like – I'm not arguing that Alexa should become ornately – I think it's so novel to kids that they know it's not a person. I don't think it really – All right, but we're using these examples to build the thing. So let's talk about the sex robots now. So some people believe that actually the emergence of sex robots, which will surely appear in the next 10 or 20 years, will be a fantastic boon. They think that people will be able to experiment. You'll be able to experiment with same-sex relationships, for example, or group sex. You might learn to be a better lover, so you could practice with the robots and therefore be more experienced when you were having sex with a real human. You can't get venereal diseases from a sex robot. You can't hurt its feelings. So some people think, on ethical grounds, that this would be terrific, that this will be a benefit. Other people have the opposite opinion. Other people think that actually having sex with robots, first of all, is symbolically and conceptually vile. They think that it takes sex and converts it into a kind of – literally a machine-like function. And they furthermore think that it would result in one having a kind of anonymous or impersonal interaction with humans subsequently, that you'll be entrained to, let's say, want an obedient partner, for example. I don't have a stand on this. I don't know which way it's going to go. 
And in a way, I don't have to take a stand on it, because what I'm interested in recognizing is that when we talk about allowing people to have sex with sex robots – and allowing that it's going to happen – the focus of our concern should not be, what is your experience in your bedroom when you have sex with a sex robot? Our concern, as a state, is different. Like, I have no stake or control over what you're doing over there. But my interest is, once you have had that experience, how does that change how you interact with other people? And there, I think, just like anything else, you can make all the garbage you want in your house, but if you start polluting the environment, you're harming me. So now I have a reason for intervening in your activities on your land. You can't pollute your own land if that pollution runs off onto my land. And a similar argument can be made here. Or look at autonomous vehicles. Here's an example. Right now almost all roads have just human drivers. And in 20 or 30 years, almost all roads will probably have only non-human drivers; machines will drive. And those autonomous vehicles probably can be yoked together. They can communicate with each other so that you'll have like trains of cars moving in synchrony. Each of them will be communicating with the other nearby cars, and you'll have laminar flow where all these vehicles are smoothly moving and joining the highway and leaving the highway and communicating on a citywide scale, slowing traffic down miles away because they anticipate with AI that there'll be a jam here if they don't do that. And I think that'll actually be great. I'm actually looking forward to autonomous vehicles. I mean, I still like to take my car to a speedway and, you know, drive it myself with a stick, which I like. But in between, we're going to have a world of what I call hybrid systems, of human-driven cars and autonomous vehicles coexisting on an even plane. 
And we need to be worried about that, because these autonomous vehicles, when we interact with them, are going to change how we interact with each other. For example, do we program the autonomous vehicle to drive at a constant, steady speed? If you're the designer of the car, you might say, gee, I don't want this car to crash. I want the car to drive in a very predictable fashion, and that's what's best for the occupants of the car. That's what's going to allow me to sell more vehicles. But it may be the case that actually, when people are in contact with such a vehicle, they get lulled into a false sense of security. Oh, that vehicle never does anything new. I don't need to pay so much attention to the car in front of me. I just drive, you know, at a steady clip. And then they veer off, and they go to a part of the highway where there are just human drivers. And now, having been lulled into a false sense of security, they cause more collisions. They're not paying attention. So that autonomous vehicle has changed how I drive in a way that harms other people. So maybe the programming of the vehicle should be to occasionally do erratic things – to, like, suddenly slow down or speed up a little bit – obliging me to stay vigilant and pay attention as I'm interacting with that car, so that then when I go to another part of the highway, when I interact with just humans, I have retained that vigilance. Once again, the lesson here is that it's not just about the one-on-one interaction between the robotic artificial intelligence and the human being. It's about how the robots affect us. And in my lab, we do many experiments in social systems where we take a group of people and we drop in a bot online, or in the laboratory we have a physical robot, and we watch how the presence of the robot doesn't just modify how the humans interact with the robot, but how the humans interact with each other. 
So if we put a robot right there looking at us with its third eye, would it, you know, change how you and I talk to each other, make us different? Those are the experiments we're doing. Well, clearly in the sex robot realm, that's going to be a problem. We see the difference with humans that have porn addictions. Yeah, that's a good example. Yeah, porn addictions – when people have them, they develop this very impersonal way of communicating with people, and they think about sex and the objectification of the opposite sex in a very different way. It flavors the way you think of – It flavors your expectations, yes. Yes, and it can make it difficult for you to have normal sexual relationships if your expectations are guided by porn. And that is going to be radically magnified by some sort of artificial life form that you created that's indistinguishable. Yes. If you can have an indistinguishable sex partner that is, you know, some incredibly beautiful woman that is a robot, and then you – Or men. Many women would be quite happy to trade their spouses for robots. I wonder if women are going to be as into it as men, because I think women desire more emotional intimacy than men do, on the whole. I think the jury's still out on the relative balance between men and women – we might be surprised that men will be replaced with male sexbots. Right, especially given societal expectations and how women conform to those. And also given what a pain in the ass a lot of men can be. Sure. So it could go both ways. I'm not prepared to make a prediction about who's going to be better off, in the gender debate, with the emergence of sex robots. It may be the way you suggest; I don't know. Well, we're also in this weird transition genetically, where they're doing genetic experiments on humans with the advent of CRISPR and emerging technologies. Yes, I talked about that in the book, too. 
Entirely possible that there's not going to be any frumpy bodies anymore. That's hundreds of years away, but yes. Is it? Yes, I think so. I wonder. I mean, I don't know if it is. I think if they start cranking them out in China and they start giving birth to eight-foot-tall supermen with 12-inch dicks, we're going to have a real issue. Yes, yes, we will. Yes, that's the least of it. Yes. But I mean, it's really entirely possible that in the future they're going to have that, that we're going to have perfect humans. Yes, that's likely. Yes, I think that is likely. The debate is how far in the future. So I think we're going to start by using these technologies to cure monogenic diseases – you know, like thalassemia, for example, or certain immune deficiencies – diseases where a single gene is defective. Those will be the initial targets. But once we start with that, eventually I think there will be people who will want to genetically engineer other people – their offspring, for example – and modify them in the ways that you suggest: maybe not 12-inch dicks, but maybe, you know, the ability to run fast or something else. Sure, far smarter. I mean, isn't that one of the side effects that they showed with the genetic manipulation of those Chinese babies to eliminate HIV, that they made them smarter? No, I don't know if they made them smarter. What's clear from the most recent findings I've seen from that case is that, unsurprisingly, as anyone could predict, the technology is not good enough to restrict the mutations to one particular region of the genome. So there were other changes in the genome in these children that occurred elsewhere than the targeted region, the one edited to increase their immunity to HIV. Right. And we don't know what those are. Those could kill those kids quickly. They could make them better in some ways. We have no way of knowing. But I think the conclusion was that it increased their intelligence. 
I have not seen those results, and I think it would be premature to come to that conclusion. Part of the problem is also sensationalist clickbait – that's what people want to click. "Not only did they edit out HIV, they made them smarter" is going to get like 40% more clicks. Yes. Versus, you know. Yeah, whoo, 40%. I mean, that's just the nature of humans, right? Yes. Just to be clear, I talk about the CRISPR example in Blueprint. Again, my lens on it is how these technologies are going to change how we interact with each other. And it goes back to the example we were talking about at the beginning. When we invented cities, that was a technology that changed how we interacted with each other. Human beings have been doing this kind of thing for a very long time: when we invented weapons, that was a technology that changed how we interact with each other. So we have previously invented technologies that changed how we interact with each other, and I discuss some of those implications. Yeah, I'm incredibly interested in this because I love to study history – I love to study how crazy the world was 4,000, 5,000 years ago, a thousand years ago – and what it's going to be like in the future. I just think our understanding of the consequences of our actions is so much greater than anybody has ever had before. First of all, we have examples from all over the world now that we can study very closely, which I don't think really was available to that many people up until fairly recently. You mean – I'm sorry, are you saying the examples are more numerous, or our capacity to discern them is higher? Our capacity to discern them, and just our in-depth understanding of these various cultures all over the world. Like what you've been telling me today about the divers and others. 
We just have so much more data, and so much more of an understanding, than ever before. I mean, I believe that this is probably the best time ever to be alive. I think that's true. There's certainly a lot of terrible things that are wrong in the world today. Also true. But I think that there's less of that and more good than there's ever been before. I agree with that too. No, I think that's right. One of the arguments that I make – this is a kind of Steven Pinker argument that you're outlining – is that, you know, people are living longer than they ever have on the whole planet, fewer people are starving, we have less violence. I mean, every indicator of human wellbeing is up. And it's partly due, or largely due, in the last thousand years, to the emergence of the Enlightenment and the philosophy and science that emerged about 300 years ago, two hundred and some odd years ago, culminating in the present and continuing. So I think this is not just the so-called Whiggish view of history. It's not just a progressive sort of fantasy. I think it's the case that these philosophical and scientific moves that our species made in the last few hundred years have improved our wellbeing. But as we've been discussing today, it's not just historical forces that are tending towards making us better off. A deeper and more ancient and more powerful force is also at work, which is natural selection. It's evolutionary and not just historical forces that are relevant to our wellbeing. And we don't just need to look to philosophers to find the path to a good life. Natural selection has equipped us with these capacities for love and friendship and cooperation and teaching – all these good things we've been discussing that also tend toward a good life. So, yes, I totally agree with you. We're better off today than we've ever been, on average, across the world. 
However, it's not just that that's contributing to our wellbeing. This natural selection is literally why we are in this state now and why we hope this trend will continue. Yes. And we will be in this better place 50 years from now, 100 years from now. Natural selection doesn't work over those timescales, so those are historical forces. But the point is, we are set up for success. Yes. You know, we are equipped with these capacities – you're given five fingers and an opposable thumb, which make it possible to manipulate tools. So natural selection has given you an opposable thumb; culture lets you use a computer. Do you worry about the circumventing of this natural process by artificial intelligence? That artificial intelligence is going to introduce some new, incredibly powerful factor into this whole chain of events – by having sex robots or robot workers, things becoming automated? Yes, I'm concerned. Well, I'm very concerned about how technology is going to affect our economy. Again, we're not the first generation to face these concerns. There were similar concerns with the Industrial Revolution, that workers were being put out of work when machines were invented. Nevertheless, work persisted. People still had jobs to do. There was a disruption, there's no doubt about it. I think Google and the information revolution and these types of robotic automation are disruptive. They're going to affect how we allocate labor and capital and data in our society. There's no doubt about all of that. But I thought you were alluding – just to check if you were – to the debate, which I don't know the answer to, on whether we are going to face a Terminator-type existence where the machines rise up and kill us all, or not. Very smart people are on both sides of that debate. I read them all and I'm like, he's right. Then I read the guy that has the opposite opinion, and I'm like, no, no, he's right. Then it goes back and forth. 
I don't know who's right. It goes back to nuance, right? Yes, it is nuanced, but it's hard to know – and again, we're not talking over our lifetimes. We're talking over hundreds of years. Yes. Is there a time, a thousand years from now, when human beings will say, what the hell were our ancestors doing inventing artificial intelligence? They're wiping us out. I don't know the answer to that question. Well, I think there's an issue also with the concept of artificial – like, artificial life, artificial intelligence. I think it's going to be a life. It's just going to be a life that we've created. I don't think it's artificial. I just think it's a different kind of life. I think that we're thinking of biologically based life – of sex and reproduction the way we've always known it – as being the only way that life exists. But what if we can create something, and that something decides to do things, decides to reproduce? Wipe us out and live on its own. Yeah, a silicon-based life form. Why not? Why does life have to be something that only exists through the multiplication of cells? Yes, that's very charitable of you. And people make that claim. Some people think that those machines in the distant future will look back at us as, like, one stage of evolution that will culminate in them. I've always said that we are some sort of an electronic caterpillar that doesn't know that it's going to give birth to a butterfly. We're making a cocoon and we don't even know what we're doing. That's a great metaphor. I have a hard time accepting that. Because you're a person. Yes, it's against my interest. But we're so flawed. All these things we've outlined, all the problems with us – do those go away with artificial intelligence? This is a deep philosophical question, Joe. I think it's inevitable. And I think of the single-celled organisms sitting around wondering what the future would be like: are we going to be replaced? Will they make antibiotics and kill us? 
Yes, what's going to happen? Yes, they are going to make antibiotics and kill us. I mean, we are so flawed. That's a great – We do pollute the ocean. We do pull the fish out of it. We do fuck up the air. We do commit genocide. There's all these things that are real. But the artificial life won't have those problems, because it won't be emotionally based. It won't be biologically based. It'll just exist. That's a really good story. We're so flawed – why not accept something so much better? No, we're not going to grant – Oh, we're very flawed. We are flawed. But like I said, we have a flawed beauty. So how are we not going to grant it? I'm not going to grant it. We are very flawed, though. We are flawed. I think it's beautiful, too. But I think vultures probably think they're beautiful, too. That's why they breed with each other. Well, they are beautiful. The point is, I think we have a flawed beauty. I'm going to stick to my principles that we are, despite our flaws, worth it. But there is something wonderful about us, and I think that wonderful creative quality is the reason why we created artificial life in the first place. It's like this lust for creation. We've had that impetus. You know, if you look at a lot of the art – whether it's the Egyptian pyramids or other kinds of artistic expression – we seem to have had a desire to transcend death, you know, to make things that looked like us but weren't alive, that would last forever, actually. So in that regard, I think you're quite right that it's not going to stop. That tendency is not going to stop. Now, your very charitable, positive take on the claim, and your analogy to single-celled organisms – which are just, you know, not a fleeting phase, they're still there, but a phase in our evolution – is something I'm going to have to be thinking about, because it's disturbing, honestly. 
Well, it's an objective perspective if I took myself out of the human race, which I really can't, but if I tried to fake it, I would say, oh, I see what's going on here. Yeah. We're just a phase, yes. These dummies are buying iPhones and new MacBooks because they know that this is what's going to help the production of newer, more superior technology. The more we consume. It's also, I think, in a lot of ways, our insane desire for materialism that's fueling this. Yes. And it could be an inherent property of the human species that it is designed to create this artificial life, and that literally is what it's here for. And much like an ant creating an anthill doesn't exactly have some sort of a future plan for its kids and their 401(k) plans, what we're doing is like this inherent property of being a human being – our curiosity, our wanderlust, our desire. Our culture. Yeah, all these things are built in, because if you follow them far enough down the line – 100 years, 200 years – it inevitably leads to artificial life. Yes. I think that's possible. And of course, we're not going to be alive to test that idea. Maybe we will. Maybe with CRISPR and all this crazy shit that's coming down the line. No, no, come on, Joe. I don't think so. No. It's an illusion. People have always been saying – if you go back, every decade people were saying – it's just around the corner, just around the corner. These things take forever. They're very hard. Biological systems are very hard to engineer. And of course, the people who do that kind of work – I think a lot of them engage in snake oil. They want to fund their research. Sure. But I think it's entirely possible that there's a 20-year-old listening to this podcast right now who will live to be 150. Yes, that's possible. Maybe a lot more than that.