Joe Rogan Asks Futurist if He's Scared of Artificial Intelligence

Jamie Metzl

Jamie Metzl is a futurist, author, and founder of OneShared.World.

Transcript

That's what I was kind of getting at. Are you concerned at all with artificial life? Are you concerned about the propagation of artificial intelligence?

Well, there are different kinds of artificial life. One is artificial intelligence, and I know people like Elon Musk and the late Stephen Hawking are afraid.

Terrified.

Yeah. And I think, whether it's right or not, it's great for us to focus on those risks. If we just say, oh, that's crazy, and we don't focus on it, it increases the likelihood of these bad things happening. So kudos to Elon Musk. But I also think that we're a long way away from that threat, and we will be enormous beneficiaries of these technologies. That's why, and I don't want to sound like a broken record, but that's why I keep saying it's all about values. I think we should take those threats very seriously and then say-

Well, values are so abstract and we don't agree on them.

That's true, but take Elon Musk. They've set up this institute to ask, well, what are the dangers?

Right.

And then, what are the things that we can do now? What are standards that we can integrate, for example, into our computer programming? And so I mentioned my World Health Organization committee. The question is, what are the standards that we can integrate into scientific culture that aren't going to cure everything but may increase the likelihood of a better rather than a worse outcome?

But isn't there an inherent danger in other companies, or other countries rather, not completely complying with any standards that we set because it would be anti-competitive?

Yes.

Like that would somehow or another diminish competition or diminish their competitive edge.

Yes, it's true. And that's the balance that we're going to need to hold. It's really hard, but we have a window of opportunity now to try to get ahead of that. And like I said, with chemical weapons, biological weapons, nuclear weapons, we've had international standards that have roughly held. I mean, there was a time when slavery was the norm, and there was a movement to say this is wrong, and it was largely successful. So we have a history of being more successful rather than less, and I think that's the goal. But you're right, this is a race between the technology and the best values.

My real concern about artificial intelligence is that this paradigm-shifting moment will happen before we recognize it's happening, and then it'll be too late.

Yes. That's exactly right. That's why I've written the book. That's why I'm out on the road so much talking to people, and why it's such an honor and a pleasure for me to be here with you talking about it. We have to reach out to people, and people can't be afraid of entering this conversation because it feels too technical or it feels like it's somebody else's business. This is all of our business, because this is all of our lives and all of our futures.

So if you think 20 years into the future, the thing that's going to really change the most is predictive genetics and being able to accurately predict a person's health. What do you think is going to be the biggest detriment from all this stuff, and the thing that we have to avoid the most?

So one is, as I mentioned, this determinism. If we just kind of take away our sense of wonder about what it means to be a human, that's really going to harm us.
We talked about equity and access to these technologies, and the technologies don't even need to be real in order to have a negative impact. In India, there are no significant genetic differences between people in different castes, but the caste system has been maintained for thousands of years because people have just accepted these differences. So it's a whole new way of understanding what a human is, and it's really going to be complicated, and we aren't ready for it. We aren't ready for it culturally. We aren't ready for it educationally. Certainly our political leaders aren't paying much of any attention to all of this. We have a huge job.

So when you sit down and you give this speech to Congress, what are you anticipating from them? Do you think there's anything they can do now to take certain steps?

Yes, absolutely. So a few things. One, we need a national education campaign, on the future of the genetics revolution and of AI, because I think it's crazy that we aren't focusing on these. I learned French in grade school and high school, and I'm happy to speak French, but I would rather have people say this is really important stuff. So that's number one. Number two is we need to make sure that we have a functioning regulatory system in this country, and in every country. I do a lot of comparative work, and the United Kingdom, for example, is really well organized. They have a national healthcare system, which allows them at a national level to think about long-term care and trade-offs. In this country, the average person changes health plans every 18 months. I was talking with somebody the other night who was working on a predictive health company, and they said their first idea was to sell this information to health insurers, because wouldn't it be great if you're a health insurer and you could say to a client, hey, here's some information, you can live healthier, and you're not going to have this disease 20 years from now. And what he found out is the health insurers couldn't have cared less, because people were only going to be part of their plan for a year and a half. So we really need to think differently about how we invest in people over the course of their lives. Certainly education is one way, but thinking long-term about health and wellbeing is another.