Ben Goertzel On Why Artificial Intelligence Should NOT Reflect Human Values


7 years ago



Dr. Ben Goertzel


Dr. Ben Goertzel is the founder and CEO of SingularityNET, a blockchain-based AI marketplace.


Transcript

Is the issue that the initial creation would be subject to our programming, but that it could perhaps program something more efficient and design something of its own? If you build in creativity... I mean, generalization is about creativity, right? Is the issue that it would choose not to accept our values?

It clearly will choose not to accept our values, and we want it to choose not to accept all of our values. So it's more a matter of whether the ongoing creation and evolution of new values occurs with some continuity and respect for the previous ones. I have four human kids now, one a baby and the other three adults, and with each of them I took the approach of trying to teach them what my values were, not just by preaching at them but by entering with them into shared situations. But then, you know, when your kids grow up, they're going to go in their own different directions, right?

And these are humans, but they all have the same sort of biological needs.

Yet there still is an analogy, I think. The AIs that we create, you can think of them as our mind children, and we're starting them off with our culture and values, if we do it properly, or at least with a certain subset of the whole diverse, self-contradictory mess of human culture and values. But, you know, they're going to evolve in a different direction, and you want that evolution to take place in a reflective and caring way rather than a heedless way. Because if you think about it, the average human a thousand years ago, or even fifty years ago, would have thought you and me were hopelessly immoral miscreants who had abandoned all the valuable things in life. I mean, I'm an infidel, right? I haven't gone to church, ever, I guess. My mother's a lesbian, right?
I mean, there are all these things that we take for granted now that, not that long ago, were completely against what most humans considered maybe the most important values in life. Human values are themselves a moving target, and moving in our generation.

Yeah, in our generation pretty radically.

Very radically. When I think back to my childhood... I lived in New Jersey for nine years of my childhood, and just the level of racism and antisemitism and sexism that was ambient and taken for granted then...

Was this when you were... I think we're the same age. We're both born in '66, right?

Yeah. I lived in Jersey from '73 to '82.

Okay, so I was there from '67 to '73.

Right. So, I mean, my sister went to the high school prom with a black guy, and we got our car turned upside down and the windows of our house smashed. It was a humongous thing, and it's almost unbelievable now, because now no one would care whatsoever. It's just life, right? Certainly there are some fringe parts of the country...

Yeah, but still, the point is there is no fixed list of human values. It's an ongoing, evolving process, and what you want is for the evolution of the AI's values to be coupled closely with the evolution of human values, rather than going off in some utterly different direction that we can't even understand.

But this is literally playing God, right? I mean, if you're talking about trying to program in values...

I don't think you can program in values, not fully. You can program in a system for learning and growing values, and here again the analogy with human kids is not hopeless. Telling your kids "these are the ten things that are important" doesn't work that well, right? What works better is to enter into shared situations with them.
They see how you deal with the situations, you guide them in dealing with real situations, and that forms their system of values. This is what needs to happen with AIs: they need to grow up entering into real-life situations with human beings, so that the real-life patterns of human values, which are worth a lot more than the homilies that we enunciate formally, get inculcated into the intellectual DNA of the AI systems. And this is part of what worries me about the way the AI field is going at this moment, because most of the really powerful narrow AIs on the planet now are involved with selling people stuff they don't need, spying on people, or figuring out who should be killed or otherwise abused by some government, right? So if the early-stage AIs that we build turn into general intelligences gradually, and these general intelligences are, you know, spy agents and advertising agents, then what mindset do these early-stage AIs have as they grow up?