Jensen Huang is the founder, president, and CEO of NVIDIA, the company whose 1999 invention of the GPU helped transform gaming, computer graphics, and accelerated computing. Under his leadership, NVIDIA has grown into a full-stack computing infrastructure company reshaping AI and data-center technology across industries. www.nvidia.com www.youtube.com/nvidia
Joe Rogan podcast, check it out.
The Joe Rogan Experience.
Train by day, Joe Rogan podcast by night, all day.
Hello, Jensen.
Hi, Joe.
Good to see you again.
We were just talking about, was that the first time we ever spoke?
Or was the first time we spoke at SpaceX?
SpaceX.
SpaceX the first time.
When you were giving Elon that crazy AI chip.
Right, DGX Spark.
Yeah, ooh, that was a big moment.
That was a huge moment.
That felt crazy to be there.
It was like watching these wizards of tech, like exchange information and you're
giving
him this crazy device, you know?
And then the other time was, I was shooting arrows in my backyard and randomly got this call from Trump and he's hanging out with you.
President Trump called and I called you.
Yeah, it's just-
We were talking about you.
We were talking about the UFC thing he was going to do in his front yard.
Yeah.
And he pulls out, he's like, Jensen, look at this design.
He's so proud of it.
And I go, you're going to have a fight in the front lawn in the White House?
He goes, yeah.
Yeah, you're going to come.
This is going to be awesome.
And he's showing me his design and how beautiful it is.
And he goes, and somehow your name comes up.
He goes, do you know Joe?
And I said, yeah.
I'm going to be on his podcast.
He said, let's call him.
He's like a kid.
I know.
Let's call him.
He's like a 79-year-old kid.
He's so incredible.
Yeah.
He's an odd guy.
Just very different.
You know, like what you'd expect from him.
Very different than what people think of him.
And also just very different as a president.
A guy who just calls you or texts you out of the blue.
Also, when he texts you – you have an Android, so it won't go through for you – but with my iPhone, he makes the text go big.
Is that right?
He's going to say he's respected again.
Like, all caps and makes the text enlarge.
It's kind of ridiculous.
Well, the one-on-one Trump, President Trump, is very different.
He surprised me.
First of all, he's an incredibly good listener.
Almost everything I've ever said to him, he's remembered.
Yeah.
People don't, they only want to look at negative stories about him or negative
narratives about him.
You know, you can catch anybody on a bad day.
Like, there's a lot of things he does where I don't think he should do.
Like, I don't think he should say to a reporter, quiet piggy.
Like, that's pretty ridiculous.
Also, objectively funny.
I mean, it's unfortunate that it happened to her.
I wouldn't want that to happen to her, but it was funny.
Just ridiculous that the president does that.
I wish he didn't do that.
But other than that, like, he's an interesting guy.
Like, he's a lot of different things wrapped up into one person, you know?
You know, part of his charm, or part of his genius is he says what's on his
mind.
Yes.
Which is like an anti-politician in a lot of ways.
Yeah, right.
So, you know what's on his mind is really what's on his mind.
Which I think people prefer.
And he's telling you what he believes.
I do that, too.
Some people.
Some people would rather be lied to.
Yeah.
But I like the fact that he's telling you what's on his mind.
Almost every time he explains something, he starts with – you could tell – his love for America, what he wants to do for America.
And everything that he thinks through is very practical and very common sense.
And, you know, it's very logical.
And I still remember the first time I met him.
And so this was – I'd never known him, never met him before.
And Secretary Lutnick called.
And we met right before – right at the beginning of the administration.
And he said – he told me what was important to President Trump, that the United States manufactures onshore.
And that was really important to him because it's important to national
security.
He wants to make sure that the important critical technology of our nation is
built in the United States and that we re-industrialize and get good at
manufacturing again because it's important for jobs.
It just seems like common sense, right?
Incredible common sense.
And that was literally the first conversation I had with Secretary Lutnick.
He started our conversation with – this is Secretary Lutnick – Jensen, I just want to let you know that you're a national treasure.
NVIDIA is a national treasure.
And whenever you need access to the president, to the administration, you call us, we're always going to be available to you.
Literally, that was the first sentence.
That's pretty nice.
And it was completely true.
Every single time I called, if I needed something, wanted to get something off my chest, express some concern.
They're always available.
Incredible.
It's just unfortunate we live in such a politically polarized society that you
can't recognize good common sense things if they're coming from a person that
you object to.
And that, I think, is what's going on here.
I think most people generally – as a country, as a giant community, which we
are, it just only makes sense that we have manufacturing in America, especially
critical technology like you're talking about.
Like, it's kind of insane that we buy so much technology from other countries.
If the United States doesn't grow, we will have no prosperity.
We can't invest in anything domestically or otherwise.
We can't fix any of our problems.
If we don't have energy growth, we can't have industrial growth.
If we don't have industrial growth, we can't have job growth.
It's as simple as that.
And the fact that he came into office and the first thing that he said was,
drill, baby, drill, his point is we need energy growth.
Without energy growth, we can have no industrial growth.
And that was – it saved the AI industry.
I got to tell you flat out, if not for his pro-growth energy policy, we would
not be able to build factories for AI.
We would not be able to build chip factories.
We surely wouldn't be able to build supercomputer factories.
None of that stuff would be possible.
And without all of that, construction jobs would be challenged, right?
Electrical – you know, electrician jobs.
All of these jobs that are now flourishing would be challenged.
And so I think he's got it right.
We need energy growth.
We want to re-industrialize the United States.
We need to be back in manufacturing.
Every successful person doesn't need to have a PhD.
Every successful person doesn't have to have gone to Stanford or MIT.
And I think that that sensibility is spot on.
Now, when we're talking about technology growth and energy growth, there's a
lot of people that go, oh, no, that's not what we need.
We need to simplify our lives and get back.
But the real issue is that we're in the middle of a giant technology race.
And whether people are aware of it or not, whether they like it or not, it's
happening.
And it's a really important race because whoever gets to whatever the event
horizon of artificial intelligence is, whoever gets there first has massive
advantages in a huge way.
Do you agree with that?
Well, first, the part – I will say that we are in a technology race and we
are always in a technology race.
We've been in a technology race with somebody forever.
Right.
Right.
Since the Industrial Revolution, we've been in a technology race.
Since the Manhattan Project.
Yeah.
Yeah.
Or, you know, even going back to the discovery of energy, right?
The United Kingdom was where the Industrial Revolution was, if you will, invented, when they realized that they could turn steam and such into energy, into electricity.
All of that was invented largely in Europe.
And the United States capitalized on it.
We were the ones that learned from it.
We industrialized it.
We diffused it faster than anybody in Europe.
They were all stuck in discussions about policy and jobs and disruptions.
Meanwhile, the United States was forming.
We just took the technology and ran with it.
And so I think we were always in a bit of a technology race.
World War II was a technology race.
Manhattan Project was a technology race.
We've been in the technology race ever since during the Cold War.
I think we're still in the technology race.
It is probably the single most important race.
Technology gives you superpowers, you know – whether it's information superpowers or energy superpowers or military superpowers, it's all founded in technology.
And so technology leadership is really important.
Well, the problem is if somebody else has superior technology, right?
Yeah.
That's the issue.
That's right.
It seems like with the AI race, people are very nervous about it.
Like, you know, Elon has famously said there was like 80% chance it's awesome,
20% chance we're in trouble.
And people are worried about that 20%, rightly so.
I mean, you know, if you had 10 bullets in a revolver and, you know, you took
out eight of them and you still have two in there and you spin it,
you're not going to feel real comfortable when you pull that trigger.
It's terrifying.
Right.
And when we're working towards this ultimate goal of AI, it's just impossible
to imagine that it wouldn't be of national security interest to get there first.
We should. The question is what's there.
That's the part.
What is there?
Yeah.
I'm not sure.
And I don't think anybody, I don't think anybody really knows.
That's crazy though.
If I ask you, you're the head of NVIDIA.
If you don't know what's there, who knows?
Yeah.
I, I think it's probably going to be much more gradual than we think.
It won't be a moment.
It won't be, it won't be as if somebody arrived and nobody else has.
I don't think it's going to be like that.
I think it's going to be things that just get better and better and better and
better, just like technology does.
So you are rosy about the future.
You're, you're very optimistic about what's going to happen with AI.
Obviously, you make the best AI chips in the world.
You probably better be.
If history is a guide, we were always concerned about new technology.
Humanity has always been concerned about new technology.
There's always somebody who's thinking about it, always a lot of people who are quite concerned.
And so if history is a guide, all of this concern gets channeled into making the technology safer.
And so, for example, in the last several years, I would say AI technology has
increased probably in the last two years alone, maybe a hundred X.
Let's just give it a number.
Okay.
It's like a car two years ago was a hundred times slower.
So AI is a hundred times more capable today.
Now, how did we channel that technology?
How do we channel all of that power?
We directed it to causing the AI to be able to think, meaning that it can take
a problem that we give it, break it down step by step.
It does research before it answers.
And so it grounds it on truth.
It'll reflect on that answer, ask itself, is this the best, you know, answer
that I can give you?
Am I certain about this answer?
If it's not certain about the answer or highly confident about the answer, it'll go back and do more research.
It might actually even use a tool because that tool provides a better solution
than it could hallucinate itself.
As a result, we took all of that computing capability and we channeled it into
having it produce a safer result, safer answer, a more truthful answer.
Because as you know, one of the greatest criticisms of AI in the beginning was
that it hallucinated.
Right.
And so if you look at the reason why people use AI so much today is because the
amount of hallucination has reduced.
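To make that loop concrete, here is a minimal sketch of the think-reflect-research pattern described above – not NVIDIA's or any vendor's actual pipeline. The helpers `ask_model` and `search_tool` are hypothetical callables standing in for a real language model client and a real search or retrieval tool.

```python
# A minimal sketch of the think-reflect-research loop described above.
# `ask_model` and `search_tool` are hypothetical callables, not a real API.

def answer_with_reflection(question, ask_model, search_tool, max_rounds=3):
    notes = []   # evidence gathered by research / tool use
    draft = ""
    for _ in range(max_rounds):
        # 1. Think: break the problem down and draft an answer grounded on the notes.
        draft = ask_model(
            f"Question: {question}\nEvidence: {notes}\n"
            "Break this down step by step, then give your best answer."
        )
        # 2. Reflect: ask the model to judge its own answer and its confidence.
        verdict = ask_model(
            f"Question: {question}\nDraft: {draft}\n"
            "Reply YES if you are highly confident this is correct; "
            "otherwise say what should be researched."
        )
        if verdict.strip().upper().startswith("YES"):
            return draft   # confident and grounded: stop here
        # 3. Research: use a tool instead of guessing (hallucinating) an answer.
        notes.append(search_tool(verdict))
    return draft   # best effort after max_rounds of reflection
```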
You know, I used it the whole trip over here.
And so I think, with the capability – most people think about power and they think about, you know, maybe explosive power – but with technology power, most of it is channeled towards safety.
A car today is more powerful, but it's safer to drive.
A lot of that power goes towards better handling.
You know, I'd rather have a, well, you have a thousand horsepower truck.
I think 500 horsepower is pretty good.
No, a thousand better.
I think a thousand is better.
I don't know if it's better, but it's definitely faster.
Yeah.
No, I think it's better.
You get out of trouble faster.
I enjoyed my 599 more than my 612.
I think it was better, and more horsepower is better.
My 458 is better than my 430.
More horsepower is better.
I think more horsepower is better.
I think it's better handling.
It's better control.
In the case of technology, it's also very similar in that way.
You know, and so if you look at what we're going to do with the next thousand
times of performance in AI, a lot of it is going to be channeled towards more
reflection, more research, thinking about the answer more deeply.
So when you're defining safety, you're defining it as accuracy.
Functionality.
Functionality.
Okay.
It does what you expect it to do.
And then you take all the technology and horsepower, and you put guardrails on it, just like our cars.
just like our cars.
We've got a lot of technology in a car today.
A lot of it goes towards, for example, ABS.
ABS is great.
And so traction control.
That's fantastic.
Without a computer in the car, how would you do any of that?
Right.
And that little computer – the computers that you have doing your traction control – is more powerful than the computer that flew on Apollo 11.
And so you want that technology.
Channel it towards safety.
Channel it towards functionality.
And so when people talk about power, the advancement of technology, oftentimes
I feel what they're thinking and what we're actually doing is very different.
Well, what do you think they're thinking?
Well, they're thinking somehow that this AI is becoming powerful, and their mind probably goes towards a sci-fi movie.
The definition of power, you know, oftentimes the definition of power is
military power or physical power.
But in the case of technology power, when we translate all of those operations,
it's towards more refined thinking, you know, more reflection, more planning,
more options.
I think the big fears that people have is, one, a big fear is military
applications.
That's a big fear.
Yeah.
Because people are very concerned that you're going to have AI systems that
make decisions that maybe an ethical person wouldn't make or a moral person
wouldn't make based on achieving an objective versus based on, you know, how it's
going to look to people.
Well, I'm happy that our military is going to use AI technology for defense.
And I think that Anduril, building military technology – I'm happy to hear that.
I'm happy to see all these tech startups now channeling their technology
capabilities towards defense and military applications.
I think you're going to need to do that.
Yeah, we had Palmer Luckey on the podcast.
He was demonstrating some of the stuff with his helmet on.
And he showed some videos how you could see behind walls and stuff.
Like, it's nuts.
He's actually the perfect guy to go start that company, by the way.
A hundred percent.
Yeah, a hundred percent.
It's like he's born for that.
Yeah.
He came in here with a copper jacket on.
He's a freak.
It's awesome.
He's awesome.
But it's also, it's a, you know, an unusual intellect channeled into that very
bizarre field is what you need.
And I think it's, it's a, I think I'm happy that we're making it more socially
acceptable.
You know, there was a time when somebody who wanted to channel their technology capability and their intellect into defense technology was somehow vilified.
But we need people like that.
We need people who enjoy that application of technology.
Well, people are terrified of war, you know, so it makes sense.
Well, the best way to avoid it is excessive military might.
Do you think that's absolutely the best way?
Not, not diplomacy, not working stuff out?
All of it.
All of it.
Yeah.
You have to have military might in order to get people to sit down with you.
Right.
Exactly.
All of it.
Otherwise they just invade.
That's right.
Why ask for permission?
Again, like you said, history.
Go back and look at history.
That's right.
When you look at the future of AI and you just said that no one really knows
what's happening.
Do you ever sit down and ponder scenarios?
Like, what do you, what do you think is like best case scenario for AI over the
next two decades?
The best case scenario is that AI diffuses into everything that we do and everything's more efficient, but the threat of war remains a threat of war.
Cybersecurity remains a super difficult challenge.
Somebody is going to try to breach your security.
You're going to have thousands of millions of AI agents protecting you from
that threat.
Your technology is going to get better.
Their technology is going to get better.
Just like cybersecurity right now – while we speak, we're seeing cyber attacks all over the planet on just about every front door you can imagine.
And, and yet you and I are sitting here talking.
And so the reason for that is because we know that there's a whole bunch of
cyber security
technology in defense.
And so we just have to keep amping that up, keep stepping that up.
That's a big issue with people is the worry that technology is going to get to
a point where
encryption is going to be obsolete.
Encryption is just, it's no longer going to protect data.
It's no longer going to protect systems.
Do you anticipate that ever being an issue?
Or do you think, as the defense grows, the threat grows, then defense grows, and it just keeps going on and on and on, and they'll always be able to fight off any sort of intrusions?
Not forever, some intrusion will get in, and then we'll all learn from it.
And you know, the reason why cybersecurity works is because, of course, the
technology of defense
is advancing very quickly.
The technology of offense is advancing very quickly.
However, the benefit of the cybersecurity defense is that socially, the
community, all of our companies
work together as one.
Most people don't realize this.
There's a whole community of cybersecurity experts.
We exchange ideas.
We exchange best practices.
We exchange what we detect.
The moment something has been breached, or maybe there's a loophole, or
whatever it is,
it is shared by everybody.
The patches are shared with everybody.
That's interesting.
Yeah.
Most people don't realize this.
No, I had no idea.
I've assumed that it would just be competitive like everything else.
No, no.
No, we work together, all of us.
Has that always been the case?
It surely has been the case for about 15 years.
It might not have been the case long ago.
But this-
What do you think started off that cooperation?
People recognizing it's a challenge and no company can stand alone.
And the same thing is going to happen with AI.
I think we all have to decide.
Working together to stay out of harm's way is our best chance for defense.
Then it's basically everybody against the threat.
And it also seems like you'd be way better at detecting where these threats are
coming from
and neutralizing them, too.
Exactly.
Because the moment you detect it somewhere, you've got to find out right away.
It'll be really hard to hide.
That's right.
Yeah.
That's how it works.
That's the reason why it's safe.
That's why I'm sitting here right now instead of locking everything down on
video.
Not only am I watching my own back, I've got everybody watching my back, and I'm
watching
everybody else's back.
It's a bizarre world, isn't it, when you think about that, cyber threats?
The idea about cybersecurity is unknown to the people who are talking about AI
threats.
I think when they think about AI threats and AI cybersecurity threats, they
have to also
think about how we deal with it today.
Now, there's no question that AI is a new technology, and it's a new type of
software.
In the end, it's software.
It's a new type of software, and so it's going to have new capabilities.
But so will the defense, where you use the same AI technology to go defend
against it.
So do you anticipate a time ever in the future where it's going to be
impossible, where there's
not going to be any secrets, where the bottleneck between the technology that
we have and the
information that we have, information is just all a bunch of ones and zeros.
It's out there on hard drives, and the technology has more and more access to
that information.
Is it ever going to get to a point in time where there's no way to keep a
secret?
Because it seems like that's where everything is kind of headed in a weird way.
I don't think so.
I think – quantum computers, yeah, quantum computers will make it so that the previous encryption technology is obsolete.
But that's the reason why the entire industry is working on post-quantum
encryption technology.
What would that look like?
New algorithms.
The crazy thing is when you hear about the kind of computation that quantum
computing can
do, and the power that it has, where you're looking at all the supercomputers
in the world
that would take billions of years, and it takes them a few minutes to solve
these equations.
How do you make encryption for something that can do that?
I'm not sure.
But I've got a bunch of scientists who are working on that.
I hope they can figure it out.
Yeah.
We've got a bunch of scientists who are experts in that.
Is the ultimate fear that it can't be breached, that quantum computing will
always be able
to decrypt all other quantum computing encryption?
I don't think so.
It just gets to some point where it's like, stop playing the stupid game.
We know everything.
I don't think so.
No?
Because I'm, you know, history is a guide.
History is a guide.
Before AI came around.
That's my worry.
My worry is this is a totally, you know, it's like history was one thing, and
then nuclear
weapons kind of changed all of our thoughts on war and mutually assured
destruction came,
got everybody to stop using nuclear bombs.
Yeah.
My worry is that-
The thing is, Joe, is that AI is not going to, it's not like we're cavemen, and
then all
of a sudden, one day, AI shows up.
Every single day, we're getting better and smarter because we have AI, and so
we're stepping
on our own AI's shoulders.
So when that, whatever that AI threat comes, it's a click ahead.
It's not a galaxy ahead.
You know, it's just a click ahead.
And so I think the idea that somehow this AI is going to pop out of nowhere and
somehow
think in a way that we can't even imagine thinking and do something that we can't
possibly
imagine, I think is far-fetched.
And the reason for that is because we all have AIs, and, you know, there's a whole bunch of AIs in development.
We know what they are, and we're using it.
And so every single day, we're keeping close to each other.
But don't they do things that are very surprising?
Yeah, but so you have an AI that does something surprising.
I'm going to have an AI.
Right.
My AI looks at your AI and goes, that's not that surprising.
The fear for the lay person like myself is that AI becomes sentient and makes
its own
decisions.
And then ultimately decides to just govern the world, do it its own way.
They're like, you guys, you had a good run, but we're taking over now.
Yeah, but my AI is going to take care of me.
So that's the, this is the cybersecurity argument.
Yes.
Well, you have an AI and it's super smart, but my AI is super smart too.
And maybe your AI – let's pretend, let's pretend for a second that we understand what consciousness is and we understand what sentience is and that, in fact –
And we really are just pretending.
Okay.
Let's just pretend for a second that we believe that.
I actually don't believe that, but nonetheless, let's pretend we believe it.
So your AI is conscious and my AI is conscious, and let's say your AI, you know, wants to, I don't know, do something surprising.
My AI is so smart that – it might be surprising to me, but it probably won't be surprising to my AI.
And so maybe my AI thinks it's surprising as well, but it's so smart.
The moment it sees it the first time, it's not going to be surprised the second
time, just
like us.
And so I think the idea that only one person has AI, and that compared to that person's AI everybody else's AI is Neanderthal, is probably unlikely.
I think it's much more like cybersecurity.
Interesting.
I think the fear is not that your AI is going to battle with somebody else's AI.
The fear is that AI is no longer going to listen to you.
That's the fear is that human beings won't have control over it after a certain
point.
If it achieves sentience and then has the ability to be autonomous.
That there's one AI.
Well, they just combine.
Yeah.
Becomes one AI.
That it's a life form.
Yeah.
But that's the, there's arguments about that, right?
That we're dealing with some sort of synthetic biology, that it's not as simple
as new technology,
that you're creating a life form.
If it's a life form – let's go along with that for a while.
I think if it's a life form, as you know, all life forms don't agree.
And so you're going to have your life form and my life form.
They're not going to agree, because my life form is going to want to be the super life form.
And now that we have disagreeing life forms, we're back again to where we are.
Well, they would probably cooperate with each other.
It would just, the reason why we don't cooperate with each other is we're
territorial primates.
But AI wouldn't be a territorial primate.
They'd realize the folly in that sort of thinking.
And it would say, listen, there's plenty of energy for everybody.
We don't need to dominate.
We don't need, we're not trying to acquire resources and take over the world.
We're not looking to find a good breeding partner.
We're just existing as a new super life form that these cute monkeys created
for us.
Okay.
Well, that would be a superpower with no ego.
Right.
And if it has no ego, why would it have the ego to do any harm to us?
Well, I don't assume that it would do harm to us.
But the fear would be that we would no longer have control and that we would no
longer be the apex species on the planet.
This thing that we created would now be.
Is that funny?
No.
I just think it's not going to happen.
I know you think it's not going to happen.
But it could, right?
It could.
Here's the other thing.
It's like if we're racing towards could.
Yeah.
And could could be the end of human beings being in control of our own destiny.
I just think it's extremely unlikely.
Mm.
Yeah.
That's what they said in the Terminator movie.
And it hasn't happened.
No, not yet.
But you guys are working towards it.
The thing you're saying about consciousness and sentience – you don't think that AI will achieve consciousness, or that consciousness is specific?
What is the definition to you?
Consciousness, I guess first of all, you need to know about your own existence.
You have to have experience, not just knowledge and intelligence.
But the concept of a machine having an experience, I'm not – well, first of
all, I don't know what defines experience, why we have experiences.
Right.
Yeah.
And why this microphone doesn't.
And so I think I know – well, I think I know what consciousness is.
The sense of experience, the ability to know self versus the ability to be able
to reflect, know our own self, the sense of ego.
I think all of those human experiences probably is what consciousness is.
The sense of why it exists versus the concept of knowledge and intelligence,
which is what AI is defined by today.
It has knowledge.
It has intelligence.
Artificial intelligence.
We don't call it artificial consciousness.
Artificial intelligence, the ability to perceive, recognize, understand, plan,
perform tasks.
Those things are foundations of intelligence, to know things, knowledge.
I don't – it's clearly different than consciousness.
Well, consciousness is so loosely defined.
How can we say that?
I mean, doesn't a dog have consciousness?
Yeah.
Dogs seem to be pretty conscious.
That's right.
Yeah.
So – and that's a lower level consciousness than a human being's
consciousness.
I'm not sure.
Yeah, right.
Well –
The question is what –
Lower level intelligence.
It's lower level intelligence.
Yes.
But I don't know that it's lower level consciousness.
That's a good point.
Right.
Because I believe my dogs feel as much as I feel.
Yeah, they feel a lot.
Yeah.
Yeah.
Right.
Yeah, they get attached to you.
That's right.
They get depressed if you're not there.
That's right.
Exactly.
There's definitely that.
Yeah.
The concept of experience.
Right.
But isn't AI interacting with society?
So doesn't it acquire experience through that interaction?
I don't think interactions are experience.
I think experience is – experience is a collection of feelings, I think.
You're aware of that AI – I forget which one – where they gave it some false information about one of the programmers cheating on his wife, just to see how it would respond to it.
And then when they said they were going to shut it down, it threatened to blackmail
him
and reveal his affair.
And it was like, whoa, like it's conniving.
Like if that's not learning from experience and being aware that you're about
to be shut
down, which would imply at least some kind of consciousness, or you could kind
of define
it as consciousness if you were very loose with the term, and if you imagine
that this
is going to exponentially become more powerful, wouldn't that ultimately lead
to a different
kind of consciousness than we're defining from biology?
Well, first of all, let's just break down what it probably did.
It probably read somewhere.
There's probably text that in these consequences, certain people did that.
Right.
I could imagine a novel.
Right.
Having those words related.
Sure.
And so inside...
It realizes its strategy for survival is blackmail.
It's just a bunch of numbers.
Blackmail.
It's just a bunch of numbers – in the collection of numbers that relates to a husband cheating on a wife, there's subsequently a bunch of numbers that relates to blackmail and such things, whatever the revenge was.
And so it spewed it out.
And so it's just like, you know, it's just as if I'm asking it to write me a poem in the style of Shakespeare.
It's just whatever the words are – and that dimensionality, this dimensionality, is all these vectors in multidimensional space – the word in the prompt that described the affair subsequently led to one word after another, led to, you know, some revenge or something.
But it's not because it had consciousness or, you know, it just spewed out
those words, generated
those words.
I understand what you're saying, that it learned from patterns that human
beings have exhibited, both in literature and in real life.
That's exactly right.
But at a certain point in time, one would say, okay, well, it couldn't do this
two years
ago, and it couldn't do this four years ago.
Like when we're looking towards the future, like at what point in time when it
can do everything a person does, what point in time do we decide that it's
conscious?
If it absolutely mimics all human thinking and behavior patterns.
That doesn't make it conscious.
It becomes indiscernible.
It's aware, it can communicate with you the exact same way a person can.
Like is consciousness, are we putting too much weight on that concept?
Because it seems like it's a version of a kind of consciousness.
It's a version of imitation.
Imitation consciousness, right.
But if it perfectly imitates it.
I still think it's an example of imitation.
So it's like a fake Rolex when they 3D print them and make them like indiscernible.
The question is what's the definition of consciousness?
Yeah.
Yeah.
That's the question.
And I don't think anybody's really clearly defined that.
That's where it gets weird.
And that's where the real doomsday people are worried that you are creating a
form of consciousness that you can't control.
I believe it is possible to create a machine that imitates human intelligence
and has the ability to understand information.
Understand instructions.
Break the problem down.
Break the problem down.
Solve problems and perform tasks.
I believe that completely.
I believe that we could have a computer that has a vast amount of knowledge.
Some of it true.
Some of it true.
Some of it not true.
Some of it generated by humans.
Some of it generated synthetically.
And more and more of knowledge in the world will be generated synthetically
going forward.
Until now, the knowledge that we have is knowledge that we generate and we propagate and we send to each other and we amplify it and we add to it and we modify it.
In the future, in a couple of years, maybe two or three years, 90% of the world's
knowledge will likely be generated by AI.
That's crazy.
I know, but it's just fine.
But it's just fine.
I know.
And the reason for that is this.
Let me tell you why.
Okay.
It's because – what difference does it make to me whether I am learning from a textbook that was generated by a bunch of people I didn't know, or written in a book by somebody I don't know, versus knowledge generated by AI computers that are assimilating all of this and re-synthesizing things?
To me, I don't think there's a whole lot of difference.
We still have to fact check it.
We still have to make sure that it's, you know, based on fundamental first
principles and we still have to do all of that just like we do today.
Is this taking into account the kind of AI that exists currently?
And do you anticipate that – just like we could have never really believed that AI would be, at least a person like myself would never have believed AI would be, so ubiquitous and so powerful today and so important today.
We never thought that 10 years ago.
Never thought that.
Imagine, like, what are we looking at 10 years from now?
I think that if you reflect back 10 years from now, you would say the same
thing, that we would have never believed that.
In a different direction.
Right.
But if you go forward nine years from now and then ask yourself what's going to
happen 10 years from now, I think it will be quite gradual.
One of the things that Elon said that makes me happy is he believes that we're
going to get to a point where it's not necessary for people to work and not
meaning that you're going to have no purpose in life.
But you will have, in his words, universal high income because so much revenue
is generated by AI that it will take away this need for people to do things
that they don't really enjoy doing just for money.
And I think a lot of people have a problem with that because their entire
identity and how they think of themselves and how they fit in the community is
what they do.
Like this is Mike.
He's an amazing mechanic.
Go to Mike and Mike takes care of things.
But there's going to come a point in time where AI is going to be able to do
all those things much better than people do.
And people will just be able to receive money.
But then what does Mike do?
Mike is, you know, really loves being the best mechanic around.
You know, what does the guy who, you know, codes, what does he do when AI can
code infinitely faster with zero errors?
Like what happens with all those people and that is where it gets weird because
we've sort of wrapped our identity as human beings around what we do for a
living.
You know, when you meet someone – one of the first things, you meet somebody at a party: hi, what's your name? Mike. What do you do, Mike?
And, you know, Mike's like, oh, I'm a lawyer.
Oh, what kind of law?
And you have a conversation, you know, when Mike is like, I get money from the
government.
I play video games.
It gets weird.
And I think the concept sounds great until you take into account human nature.
And human nature is that we like to have puzzles to solve and things to do and
an identity that's wrapped around our idea that we're very good at this thing
that we do for a living.
Yeah, I think, let's see.
Let me start with the more mundane.
Okay.
And I'll work backwards.
Okay.
Work forward.
So one of the predictions from Geoffrey Hinton, who started the whole deep learning phenomenon, the deep learning technology trend – an incredible, incredible researcher, professor at the University of Toronto – he discovered, or invented, the idea of backpropagation, which allows a neural network to learn.
And as you know, for the audience, software historically was humans applying
first principles and our thinking to describe an algorithm that is then codified,
just like a recipe that's codified in software.
It looks just like a recipe.
It looks just like a recipe, how to cook something.
It looks exactly the same, just in a slightly different language.
We call it Python or C or C++ or whatever it is.
In the case of deep learning, this invention of artificial intelligence, we put
a structure of a whole bunch of neural networks and a whole bunch of math units,
and we make this large structure.
It's like a switchboard of little mathematical units, and we connect it all
together.
And we give it the input that the software would eventually receive, and we
just let it randomly guess what the output is.
And so we say, for example, the input could be a picture of a cat.
And one of the outputs of the switchboard is where the cat signal is supposed
to show up.
And all of the other signals, the other one's a dog, the other one's an
elephant, the other one's a tiger.
And all of the other signals are supposed to be zero when I show it a cat, and
the one that is a cat should be one.
And I show it a cat through this big, huge network of switchboards and math
units, and they're just doing multiply and adds, multiplies and adds.
And this thing, this switchboard is gigantic.
The more information you're going to give it, the bigger the switchboard has to
be.
And what Geoffrey Hinton discovered, invented, was a way to train that – you put the cat image in.
And that cat image, you know, could be a million numbers, because it's, you
know, a megapixel image, for example.
And it's just a whole bunch of numbers.
And somehow from those numbers, it has to light up the cat signal.
Okay, that's the bottom line.
And if it, the first time you do it, it just comes up with garbage.
And so it says, the right answer is cat.
And so you need to increase this signal and decrease all of the others, and backpropagate the outcome through the entire network.
And then you show it another – now it's an image of a dog.
And it guesses it, takes a swing at it, and it comes up with a bunch of garbage.
And you say, no, no, no, the answer is this is a dog.
I want you to produce dog.
And all of the other outputs have to be zero.
And I want to backpropagate that, and just do it over and over and over again.
It's just like showing a kid, this is an apple, this is a dog, this is a cat.
And you just keep showing it to them until they eventually get it.
Okay, well, anyways, that big invention is deep learning.
That's the foundation of artificial intelligence.
It's a piece of software that learns from examples.
That's basically machine learning, a machine that learns.
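A toy sketch of that training loop, assuming PyTorch is available: random numbers stand in for real images, and the point is only to show the mechanics described above – guess, compare against the target class, backpropagate, repeat. It is an illustration, not a real vision model.

```python
# A toy sketch of the training procedure described above: show the network an
# "image", compare its guess to the target class (cat / dog / elephant / tiger),
# and backpropagate the error, over and over.
import torch
import torch.nn as nn

classes = ["cat", "dog", "elephant", "tiger"]
net = nn.Sequential(              # the "switchboard" of little math units
    nn.Linear(1000, 128),         # pretend each image is 1,000 numbers
    nn.ReLU(),
    nn.Linear(128, len(classes)), # one output signal per class
)
optimizer = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()   # penalizes lighting up the wrong signal

for step in range(1000):
    image = torch.randn(1, 1000)  # a stand-in "picture of a cat"
    target = torch.tensor([0])    # index 0 == "cat" should light up, others stay low
    guess = net(image)            # the first guesses are garbage
    loss = loss_fn(guess, target) # how wrong was the guess?
    optimizer.zero_grad()
    loss.backward()               # backpropagate the error through the network
    optimizer.step()              # nudge every connection a little
```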
And so one of the big first applications was image recognition.
And one of the most important image recognition applications is radiology.
And so he predicted about five years ago that in five years' time, the world wouldn't need any radiologists.
Because AI would have swept the whole field.
Well, it turns out, AI has swept the whole field.
That is completely true.
Today, just about every radiologist is using AI in some way.
And what's ironic, though, what's interesting is that the number of radiologists
has actually grown.
And so the question is, why?
That's kind of interesting, right?
It is.
And so the prediction was, in fact, that 30 million radiologists will be wiped
out.
But as it turns out, we needed more.
And the reason for that is because the purpose of a radiologist is to diagnose
disease, not to study the image.
The image studying is simply a task in service of diagnosing the disease.
And so now, the fact is that you could study the images more quickly and more precisely, without ever making a mistake, and it never gets tired.
You could study more images.
You could study it in 3D form instead of 2D because, you know, the AI doesn't
care whether it studies images in 3D or 2D.
You could study it in 4D.
And so now you could study images in a way that radiologists can't easily do.
And you could study a lot more of it.
And so the number of tests that people are able to do increases.
And because they're able to serve more patients, the hospital does better.
They have more clients, more patients.
As a result, they have better economics.
When they have better economics, they hire more radiologists because their
purpose is not to study the images.
Their purpose is to diagnose disease.
And so the question is, what I'm leading up to is, ultimately, what is the
purpose?
What is the purpose of the lawyer?
And has the purpose changed?
What is the purpose?
You know, one of the examples that I would give is, for example, if my car
became self-driving, will all chauffeurs be out of jobs?
The answer probably is not.
Because for some chauffeurs, some people who are driving you, they could be
protectors.
Some people, they're part of the experience, part of the service.
So when you get there, they, you know, they could take care of things for you.
And so for a lot of different reasons, not all chauffeurs would lose their jobs.
Some chauffeurs would lose their jobs.
And many chauffeurs would change their jobs.
And the type of applications of autonomous vehicles will probably increase.
You know, the usage of the technology would then find new homes.
And so I think you have to go back to, what is the purpose of a job?
You know, like, for example, if AI comes along, I actually don't believe I'm going to lose my job.
Because my purpose isn't those tasks – I have to look at a lot of documents.
I study a lot of emails.
I look at a bunch of diagrams, you know.
The question is, what is the job?
And the purpose of somebody probably hasn't changed.
A lawyer, for example, helps people.
That probably hasn't changed.
Studying legal documents, generating documents, it's part of the job, not the
job.
But don't you think there's many jobs that AI will replace?
If your job is the task.
Yeah, if your job is the task.
Right.
So automation.
Yeah.
If your job is the task.
That's a lot of people.
It could be a lot of people.
But it'll probably generate new jobs – like, for example, let's say, I'm super excited about the robots Elon's working on.
It's still a few years away.
When it happens, when it happens, there's a whole new industry of technicians
and people who have to manufacture the robots, right?
And so that job never existed.
And so you're going to have a whole industry of people taking care of them – like, for example, you know, all the mechanics and all the people who are building things for cars, supercharging cars.
That didn't exist before cars.
And now we're going to have robots.
You're going to have robot apparel.
So a whole industry of that, right? Isn't that right?
Because I want my robot to look different than your robot.
Oh, God.
And so you're going to have a whole, you know, apparel industry for robots.
You're going to have mechanics for robots.
And you have, you know, people who come and maintain your robots.
Don't you think that'll all be automated, though?
No.
You don't think so?
You don't think that'll be all done by other robots?
Eventually.
And then there'll be something else.
So you think ultimately people just adapt, except if you are the task, which is
a large percentage of the workforce.
If your job is just to chop vegetables, Cuisinart's going to replace you.
Yeah.
So people have to find meaning in other things.
Your job has to be more than the task.
What do you think about Elon's belief that this universal basic income thing
will eventually become necessary?
Many people think that.
Andrew Yang thinks that.
He was one of the first people to sort of sound that alarm during the 2020
election.
Yeah, I guess both ideas probably won't exist at the same time.
And as in life, things will probably be in the middle.
One idea, of course, is that there'll be so much abundance of resource that
nobody needs a job.
And we'll all be wealthy.
On the other hand, we're going to need universal basic income.
Both ideas don't exist at the same time.
And so we're either going to be all wealthy or we're going to be all using-
How could everybody be wealthy though?
What scenario do you-
Well, because wealthy, not because you have a lot of dollars, wealthy because
there's a lot of abundance.
Like, for example, today, we are wealthy in information.
You know, this is a concept that several thousand years ago only a few people had.
And so today we have wealth of a whole bunch of things, resources that-
That's a good point.
Yeah.
And so we're going to have wealth of resources.
Things that we think are valuable today that in the future are just not that
valuable, you know.
And so, because it's automated.
And so I think the question, maybe partly, it's hard to answer partly because
it's hard to talk about infinity and it's hard to talk about a long time from
now.
And the reason for that is because there's just too many scenarios to consider.
But I think in the next several years, call it five to 10 years, there are several things that I believe and hope.
And I say hope because I'm not sure.
One of the things that I believe is that the technology divide will be
substantially collapsed.
And, of course, the alternative viewpoint is that AI is going to increase the
technology divide.
Now, the reason why I believe AI is going to reduce the technology divide is
because we have proof.
The evidence is that AI is the easiest application in the world to use.
ChatGPT has grown to almost a billion users, frankly, practically overnight.
Everybody knows how to use ChatGPT – you just say something to it.
And if you're not exactly sure how to use it, you ask ChatGPT how to use it.
No tool in history has ever had this capability.
A Cuisinart, you know, if you don't know how to use it, you're kind of screwed.
You're going to walk up to it and say, how do you use a Cuisinart?
You're going to have to find somebody else.
And so, but an AI will just tell you exactly how to do it.
Anybody could do this.
It'll speak to you in any language.
And if it doesn't know your language, you'll speak to it in that language, and it'll probably figure out that it doesn't completely understand your language, go learn it instantly, and come back and talk to you.
And so I think the technology divide has a real chance, finally, that you don't
have to speak Python or C++ or Fortran.
You can just speak human and whatever form of human you like.
And so I think that that has a real chance of closing the technology divide.
Now, of course, the counter narrative would say that AI is only going to be
available for the nations and the countries that have a vast amount of
resources.
Because AI takes energy and AI takes a lot of GPUs and factories to be able to
produce the AI.
No doubt at the scale that we would like to do in the United States.
But the fact of the matter is your phone is going to run AI just fine all by
itself in a few years.
Today, it already does it fairly decently.
And so the fact is that every country, every nation, every society will have the benefit of a very good AI.
It might not be tomorrow's AI.
It might be yesterday's AI.
But yesterday's AI is freaking amazing.
You know, in 10 years' time, 9-year-old AI is going to be amazing.
You don't need 10-year-old AI.
You don't need frontier AI like we need frontier AI because we want to be the
world leader.
But for every single country, everybody, I think the capability to elevate
everybody's knowledge and capability and intelligence, that day is coming.
And also energy production, which is the real bottleneck when it comes to third
world countries.
That's right.
Electricity and all the resources that we take for granted.
Almost everything is going to be energy constrained.
And so if you take a look at one of the most important technology advances in
history is this idea called Moore's Law.
Moore's Law started basically in my generation.
And my generation is the generation of computers.
We graduated in 1984 and that was basically at the very beginning of the PC
revolution and the microprocessor.
And every single year, it approximately doubled.
And we describe it as every single year we doubled the performance.
But what it really means is that every single year, the cost of computing halved.
And so the cost of computing, in the course of five years, reduced by a factor of 10; the amount of energy necessary to do computing, to do any task, reduced by a factor of 10.
Every 10 years, a factor of 100; then 1,000, 10,000, 100,000, and so on and so forth.
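As a rough back-of-the-envelope check of that compounding, taking the factor-of-10-every-five-years figure he cites as an assumed constant rate:

```python
# Back-of-the-envelope compounding, assuming (loosely) a 10x drop in the cost
# and energy of computing every five years, as described above.
reduction_per_5_years = 10.0
annual_factor = reduction_per_5_years ** (1 / 5)   # ~1.58x cheaper each year

cumulative = 1.0
for year in range(1, 26):
    cumulative *= annual_factor
    if year % 5 == 0:
        print(f"after {year:2d} years: ~{cumulative:,.0f}x cheaper")
# after 5 years: ~10x, 10 years: ~100x, 15 years: ~1,000x,
# 20 years: ~10,000x, 25 years: ~100,000x
```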
And so each one of the clicks of Moore's Law, the amount of energy necessary to
do any computing reduced.
That's the reason why you have a laptop today, when back in 1984 it sat on the desk, you had to plug it in, it wasn't that fast, and it consumed a lot of power.
Today, you know, it's only a few watts.
And so Moore's Law is the fundamental technology, the fundamental technology
trend that made it possible.
Well, what's going on in AI?
The reason why NVIDIA is here is because we invented this new way of doing
computing.
We call it accelerated computing.
We started it 33 years ago.
It took us about 30 years to really make a huge breakthrough.
In that 30 years or so, we took computing, you know, probably a factor of, well,
let me just say, like the last 10 years.
The last 10 years, we improved the performance of computing by 100,000 times.
Whoa.
Imagine a car over the course of 10 years, it became 100,000 times faster.
Or at the same speed, 100,000 times cheaper.
Or at the same speed, 100,000 times less energy.
If your car did that, it doesn't need energy at all.
What I mean, what I'm trying to say is that in 10 years' time, the amount of
energy necessary for artificial intelligence for most people will be minuscule,
utterly minuscule.
And so we'll have AI running in all kinds of things and all the time because it
doesn't consume that much energy.
And so if you're a nation that uses AI for, you know, almost everything in your
social fabric, of course, you're going to need these AI factories.
But for a lot of countries, I think you're going to have excellent AI and you're
not going to need as much energy.
Everybody will be able to come along is my point.
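For comparison, the 100,000x-in-ten-years figure above implies a much steeper annual rate than the roughly doubling-per-year Moore's Law pace described earlier. A quick check of what that rate would have to be:

```python
# What annual improvement rate does 100,000x in 10 years imply?
total_gain = 100_000
years = 10
annual = total_gain ** (1 / years)
print(f"~{annual:.2f}x per year to reach {total_gain:,}x in {years} years")
# prints ~3.16x per year, versus roughly 2x per year for classic Moore's Law
```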
So currently, that is a big bottleneck, right, is energy?
It is the bottleneck.
The bottleneck.
So was it Google that is making nuclear power plants to operate one of its AI
factories?
Oh, I haven't heard that.
But I think in the next six, seven years, I think you're going to see a whole
bunch of small nuclear reactors.
And by small, like how big are you talking about?
Hundreds of megawatts, yeah.
Okay.
And that these will be local to whatever specific company they have?
That's right.
We'll all be power generators.
Whoa.
You know, just like, you know, somebody's farm.
It probably is the smartest way to do it, right?
And it takes the burden off the grid.
It does, and you could build as much as you need.
And you can contribute back to the grid.
It's a really important point that I think you just made about Moore's Law and
the relationship to pricing.
Because, you know, a laptop today, like you can get one of those little MacBook
Airs.
They're incredible.
They're so thin.
Unbelievably powerful.
Battery life is crazy.
You don't ever have to charge it.
Yeah, battery life is crazy.
And it's not that expensive, relatively speaking.
Exactly.
Like something like that.
I remember when-
And that's just Moore's Law.
Right.
Then there's the NVIDIA Law.
Oh.
Just, right?
The law I was talking to you about.
Yeah.
The computing that we invented.
Right.
The reason why we're here, this new way of doing computing, is like Moore's Law
on energy drinks.
I mean, it's like Moore's Law and Joe Rogan.
Wow.
That's interesting.
Yeah.
That's us.
So, explain that.
This chip that you brought to Elon, what's the significance of this?
Like, why is it so superior?
And so, in 2012, Geoffrey Hinton's lab – this gentleman I was talking about – with Ilya Sutskever and Alex Krizhevsky, made a breakthrough in computer vision.
They literally created a piece of software called AlexNet, and its job was to recognize images.
And it recognized images at a level – computer vision, which is fundamental to intelligence.
If you can't perceive, it's hard to have intelligence.
And so, computer vision is a fundamental pillar – not the only one, but a fundamental pillar – of intelligence.
And so, breaking computer vision, or breaking through in computer vision, is
pretty foundational to almost everything that everybody wants to do in AI.
And so, in 2012, their lab in Toronto made this breakthrough called AlexNet.
And AlexNet was able to recognize images so much better than any human-created
computer vision algorithm in the 30 years prior.
So, all of these people, all of these scientists, and we had many, too, working
on computer vision algorithms.
And these two kids, Ilya and Alex, under Geoffrey Hinton, took a giant leap above it.
And it was based on this thing called AlexNet, this neural network.
And the way it ran, the way they made it work, was literally buying two NVIDIA
graphics cards.
Because NVIDIA's GPUs – we'd been working on this new way of doing computing.
And our GPU's application is basically a supercomputing application from back in 1984, in order to process computer games and what you have in your racing simulator.
That is called an image generator supercomputer.
And so, NVIDIA started, our first application was computer graphics.
And we applied this new way of doing computing, where we do things in parallel
instead of sequentially.
A CPU does things sequentially.
Step one, step two, step three.
In our case, we break the problem down, and we give it to thousands of
processors.
And so, our way of doing computation is much more complicated.
But if you're able to formulate the problem in the way that we created, called CUDA – this is the invention of our company.
If you could formulate it in that way, we could process everything
simultaneously.
Now, in the case of computer graphics, it's easier to do because every single
pixel on your screen is not related to every other pixel.
And so, I could render multiple parts of the screen at the same time.
Not completely true, because, you know, maybe the way lighting works or the way
shadow works, there's a lot of dependency and such.
But, computer graphics, with all the pixels, I should be able to process
everything simultaneously.
And so, we took this embarrassingly parallel problem called computer graphics,
and we applied it to this new way of doing computing.
NVIDIA's accelerated computing.
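As a rough aside for readers: what Jensen is describing — every pixel being computable on its own — is what makes graphics an "embarrassingly parallel" problem. Here is a minimal, purely illustrative Python sketch (a toy shading rule, nothing from NVIDIA's actual code) showing the same picture computed one pixel at a time versus all pixels at once:

```python
import numpy as np

# Toy illustration: each pixel's color depends only on that pixel's coordinates,
# so the whole image can be evaluated in one shot instead of pixel by pixel.
H, W = 480, 640

def shade(x, y):
    # Hypothetical per-pixel shading rule, independent for every pixel.
    return (x + y) % 256

# Sequential, CPU-style: step one, step two, step three...
image_seq = np.empty((H, W), dtype=np.uint8)
for y in range(H):
    for x in range(W):
        image_seq[y, x] = shade(x, y)

# Parallel, GPU-style: express the same rule over all pixels simultaneously.
xs, ys = np.meshgrid(np.arange(W), np.arange(H))
image_par = ((xs + ys) % 256).astype(np.uint8)

print(np.array_equal(image_seq, image_par))  # True: same picture, different execution model
```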
We put it in all of our graphics cards.
Kids were buying it to play games.
You probably don't know this, but we're the largest gaming platform in the
world today.
Oh, I know that.
Oh, okay.
I used to make my own computers.
I used to buy your graphics cards.
Oh, that's super cool.
Yeah.
Okay.
Set up SLI with two graphics cards.
Oh, yeah.
I love it.
Okay.
That's super cool.
Oh, yeah, man.
I used to be a Quake junkie.
Oh, that's cool.
Yeah.
Okay.
So, SLI, I'll tell you the story in just a second, and how it led to Elon.
I'm still answering the question.
And so, anyways, these two kids trained this model using the technique I
described earlier on our GPUs, because our GPUs could process things in
parallel.
It's essentially a supercomputer in a PC.
The reason why you used it for Quake is because it is the first consumer
supercomputer.
Okay?
And so, anyways, they made that breakthrough.
We were working on computer vision at the time.
It caught my attention.
And so, we went to learn about it.
Simultaneously, this deep learning phenomenon was happening all over the
country.
One university after another recognized the importance of deep learning, and all of this work was happening at Stanford, at Harvard, at Berkeley, just all over the place.
New York University with Yann LeCun, Andrew Ng at Stanford, so many different places.
And I see it cropping up everywhere.
And so, out of curiosity, I asked, you know, what is so special about this form of machine learning?
And we've known about machine learning for a very long time.
We've known about AI for a very long time.
We've known about neural networks for a very long time.
And so, we realized that with this architecture for deep neural networks — backpropagation, the way deep neural networks were trained — we could probably scale the solution to solve many problems.
It is essentially a universal function approximator, okay?
Meaning, you know, back when you were in school, you have a box.
Inside of it is a function.
You give it an input.
It gives you an output.
And the reason why I call it a universal function approximator is this: normally, you describe the function yourself.
A function could be Newton's equation, F equals ma.
That's a function.
You write the function in software.
You give it the inputs, mass and acceleration.
It'll tell you the force, okay?
And the way this computer works is really interesting.
You give it a universal function.
It's not f equals ma.
It's just a universal function.
It's a big, huge, deep neural network.
And instead of describing the inside, you give it examples of input and output,
and it figures out the inside.
So you give it input and output, and it figures out the inside.
A universal function approximator.
Today, it could be Newton's equation.
Tomorrow, it could be Maxwell's equation.
It could be Coulomb's law.
It could be thermodynamics equation.
It could be, you know, Schrodinger's equation for quantum physics.
And so you could have this describe almost anything, so long as you have the input and the output.
So long as you have the input and the output.
Or it could learn the input and output.
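To make the "universal function approximator" idea concrete, here is a small, hedged Python sketch: instead of writing F = m × a as code, we hand a tiny neural network example inputs (mass, acceleration) and outputs (force) and let gradient descent figure out the inside. The network size, learning rate, and data ranges are arbitrary choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example inputs and outputs: the "right answers" follow Newton's F = m * a.
X = rng.uniform(0.0, 2.0, size=(1000, 2))      # columns: mass, acceleration
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)         # force

# One hidden layer; the network never sees the formula, only the examples.
W1 = rng.normal(0, 0.5, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    pred = h @ W2 + b2                         # network's guess at the force
    err = pred - y

    # Backpropagation: push the error back through both layers.
    d_pred = 2 * err / len(X)
    dW2 = h.T @ d_pred; db2 = d_pred.sum(0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ d_h; db1 = d_h.sum(0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# It has approximated the function from input/output pairs alone.
test = np.array([[1.5, 0.8]])
h = np.tanh(test @ W1 + b1)
print("predicted force:", float((h @ W2 + b2)[0, 0]), "true force:", 1.5 * 0.8)
```

Swap the training pairs for examples of any other input/output relationship — Maxwell, Coulomb, next-word prediction — and the same machinery applies; that is the point being made here.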
And so we took a step back, and we said, hang on a second.
This isn't just for computer vision.
Deep learning could solve any problem.
All the problems that are interesting, so long as we have input and output.
Now, what has input and output?
Well, the world.
The world has input and output.
And so we could have a computer that could learn almost anything.
Machine learning, artificial intelligence.
And so we reasoned that maybe this is the fundamental breakthrough that we
needed.
There were a couple of things that had to be solved.
For example, we had to believe that you could actually scale this up to giant
systems.
It was running on just two graphics cards, two GTX 580s.
Which, by the way, is exactly your SLI configuration.
Yeah.
Okay.
So that GTX 580 SLI was the revolutionary computer that put deep learning on
the map.
Wow.
It was 2012.
And you were using it to play Quake.
Wow.
That's crazy.
That was the moment.
That was the big bang of modern AI.
We were lucky because we were inventing this technology, this computing
approach.
We were lucky that they found it.
Turns out they were gamers and it was lucky they found it.
And it was lucky that we paid attention to that moment.
It was a little bit like, you know, that Star Trek, you know, first contact.
The Vulcans had to have seen the warp drive at that very moment.
If they didn't witness the warp drive, you know, they would have never come to
Earth.
And everything would have never happened.
It's a little bit like if I hadn't paid attention to that moment, that flash,
and that flash didn't last long.
If I hadn't paid attention to that flash or our company didn't pay attention to
it, who knows what would have happened.
But we saw that and we reasoned our way into this is a universal function
approximator.
This is not just a computer vision approximator.
We could use this for all kinds of things if we could solve two problems.
The first problem is that we have to prove to ourselves it could scale.
The second problem, which we had to — I guess — contribute to and wait for, is that the world will never have enough labeled input-and-output data for us to supervise the AI to learn everything.
For example, if we have to supervise our children on everything they learn, the
amount of information they could learn is limited.
We needed the AI, we needed the computer to have a method of learning without
supervision.
And that's where we had to wait a few more years.
But unsupervised AI learning is now here.
And so the AI could learn by itself.
And the reason why the AI could learn by itself is because we have many
examples of right answers.
Like, for example, if I want to teach an AI how to predict the next word, I could just grab a whole bunch of text that we already have, mask out the last word, and make it try and try and try again until it predicts it.
Or I mask out random words inside the text, and I make it try and try and try until it predicts them.
You know, like, Mary goes down to the bank.
Is it a river bank or a money bank?
Well, if you're going to go down to the bank, it's probably a river bank.
And it might not be obvious even from that.
It might need, and caught a fish.
Okay, now you know it must be the river bank.
And so you give these AIs a whole bunch of these examples, and you mask out the
words, it'll predict the next one.
Okay?
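As a toy illustration of where the "right answers" come from in this setup, here is a short Python sketch. Simple bigram counts stand in for the huge neural network; the masked word is recovered from the text itself, so no human labeling is needed. The little corpus is invented for the example:

```python
from collections import Counter, defaultdict

corpus = [
    "mary goes down to the bank and caught a fish",
    "joe goes down to the bank and caught a fish",
    "mary goes down to the bank and withdrew some money",
]

# "Training": count which word tends to follow each word in the text we already have.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(prev_word: str) -> str:
    # Guess the most common continuation seen during training.
    return follows[prev_word].most_common(1)[0][0]

# "Masking": hide the last word and check the guess against the word we hid.
masked = "mary goes down to the bank and caught a"
print(predict_next(masked.split()[-1]))   # prints "fish" -- the hidden word
```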
And so unsupervised learning came along.
With these two ideas — the fact that it's scalable, and that unsupervised learning came along — we were convinced that we had to put everything into this and help create this industry, because we were going to solve a whole bunch of interesting problems.
And that was in 2012.
By 2016, I had built this computer called the DGX-1.
The one that you saw me give to Elon is called DGX-Spark.
The DGX-1 was $300,000.
It cost NVIDIA a few billion dollars to make, the first one.
And instead of two chips SLI, we connected eight chips with a technology called
NVLink.
But it's basically SLI supercharged.
Okay?
Okay.
And so we connected eight of these chips together instead of just two.
And all of them worked together, just like your Quake rig did, to solve this
deep learning problem, to train this model.
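A hedged sketch of why wiring chips together helps training: split each batch of examples across the "devices," compute gradients on each shard, then average them — conceptually what linking GPUs together enables, though real systems are far more sophisticated. Plain NumPy stands in for the GPUs here:

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(4)                                    # model weights
X = rng.normal(size=(800, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])            # data generated by a known linear rule

def shard_gradient(Xs, ys, w):
    # Gradient of mean squared error on one device's shard of the batch.
    err = Xs @ w - ys
    return 2 * Xs.T @ err / len(ys)

n_devices = 8                                      # eight chips working together
for step in range(200):
    grads = [
        shard_gradient(Xs, ys, w)
        for Xs, ys in zip(np.array_split(X, n_devices), np.array_split(y, n_devices))
    ]
    w -= 0.05 * np.mean(grads, axis=0)             # averaged gradient, like an all-reduce

print(np.round(w, 2))                              # recovers roughly [1, -2, 0.5, 3]
```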
And so we created this thing.
I announced it at GTC at one of our annual events.
And I described this deep learning thing, computer vision thing, and this
computer called DGX-1.
The audience was, like, completely silent.
They had no idea what I was talking about.
And I was lucky because I had known Elon, and I helped him build the first computer for the Model S.
And when he wanted to start working on autonomous vehicles, I helped him build the computer that went into the Model S AV system, his full self-driving system.
We were basically the FSD computer, version one.
And so we were already working together.
And when I announced this thing, nobody in the world wanted it.
I had no purchase orders, not one.
Nobody wanted to buy it.
Nobody wanted to be part of it, except for Elon.
He goes, he was at the event, and we were doing a fireside chat about the
future of self-driving cars.
I think it was, like, 2016.
At that time, it was 2015.
And he goes, you know what?
I have a company that could really use this.
And I said, wow, my first customer.
And so I was pretty excited about it.
And he goes, yeah, we have this company.
It's a nonprofit company.
And all the blood drained out of my face.
Yeah.
I just spent a few billion dollars building this thing.
It cost $300,000.
And, you know, the chances of a nonprofit being able to pay for this thing is
approximately zero.
And he goes, you know, this is an AI company.
And it's a nonprofit, and we could really use one of these supercomputers.
And so I picked it up.
I built the first one for ourselves.
We're using it inside the company.
I boxed one up.
I drove it up to San Francisco, and I delivered it to Elon in 2016.
A bunch of researchers were there.
Pieter Abbeel was there.
Ilya was there.
There was a bunch of people there.
And I walked up to the second floor where they were all kind of in a room that's
smaller than your place here.
And that place turned out to have been OpenAI.
2016.
Wow.
Just a bunch of people sitting in a room.
It's not really nonprofit anymore, though, is it?
They're not nonprofit anymore.
Weird how that works.
Yeah, yeah.
But anyhow, Elon was there.
Yeah.
It was really a great, great moment.
Oh, yeah.
There you go.
Yeah, that's it.
Look at you, bro.
Same jacket.
Look at that.
I haven't aged.
Not a lick of black hair, though.
The size of it is significantly smaller.
That was the other day.
Okay, so.
Oh, yeah.
There you go.
Yeah.
Look at the difference.
That's crazy.
Exactly the same industrial design.
He's holding it in his hand.
Here's the amazing thing.
DGX-1 was one petaflop, okay?
That's a lot of flops.
And DGX Spark is one petaflop.
Nine years later.
Wow.
The same amount of computing horsepower.
In a much smaller.
Shrunken down.
Yeah.
And instead of $300,000, it's now $4,000.
And it's the size of a small book.
Incredible.
Crazy.
That's how technology moves.
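Taking the two quoted data points at face value — one petaflop for $300,000 in 2016 versus one petaflop for $4,000 nine years later — the arithmetic works out as follows (a back-of-the-envelope calculation, nothing more):

```python
# Same compute, two price points, nine years apart (figures as quoted above).
old_price, new_price, years = 300_000, 4_000, 9

total_drop = old_price / new_price        # ~75x cheaper per petaflop overall
yearly = total_drop ** (1 / years)        # ~1.6x cheaper every single year

print(f"{total_drop:.0f}x cheaper overall, about {yearly:.2f}x per year")
```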
Anyways, that's the reason why I wanted to give him the first one.
It's so-
Because I gave him the first one in 2016.
It's so fascinating.
I mean, if you wanted to make a story for a film, I mean, that would be the
story that,
like, what better scenario, if it really does become a digital life form, how
funny would
it be that it is birthed out of the desire for computer graphics for video
games?
Exactly.
Isn't it kind of crazy?
It's kind of crazy.
Yeah.
Kind of crazy when you think about it that way, because it's a perfect origin
story.
Computer graphics was one of the hardest supercomputer problems, generating
reality.
And also one of the most profitable to solve, because computer games are so
popular.
When NVIDIA started in 1993, we were trying to create this new computing
approach.
The question is, what's the killer app?
And the company wanted to create a new type of computing architecture, a new
type of computer
that can solve problems that normal computers can't solve.
Well, the applications that existed in the industry in 1993 were applications that normal computers could solve.
Because if normal computers couldn't solve them, why would the application exist?
And so we had a mission statement for a company that has no chance of success.
But I didn't know that in 1993.
It just sounded like a good idea.
Right.
And so if we created this thing that can solve problems normal computers can't, you actually have to go create the problem.
And so that's what we did. In 1993, there was no Quake.
John Carmack hadn't even released Doom yet.
You probably remember that.
Sure.
Yeah.
And there were no applications for it.
And so I went to Japan, because the arcade industry had this — at the time it was Sega, if you remember?
Sure.
The arcade machines, they came out with 3D arcade systems.
Virtua Fighter, Daytona, Virtua Cop — all of those arcade games were in 3D for the very first time.
And the technology they were using was from Martin Marietta — flight simulators. They took the guts out of a flight simulator and put it into an arcade machine.
The system that you have over here has got to be a million times more powerful than that arcade machine.
And that was a flight simulator for NASA.
Whoa.
And so they took the guts out of that.
They were using it for flight simulation for jets and, you know, space shuttle.
And they took the guts out of that.
And Sega had this brilliant computer developer.
His name was Yu Suzuki.
Yu Suzuki and Miyamoto, Sega and Nintendo, these were the, you know, the
incredible pioneers, the visionaries, the incredible artists.
And they're both very, very technical.
They were the origins, really, of the gaming industry.
And Yu Suzuki pioneered 3D graphics gaming.
And so we created this company, and there were no apps.
And we were spending all of our afternoons — you know, we told our families we were going to work, but it was just the three of us, so who's going to know?
And so we went to Curtis's — one of the founders — to Curtis's townhouse.
Chris and I were both married with kids.
I already had Spencer and Madison.
They were probably two years old.
And Chris's kids are about the same age as ours.
And we would go to work in this townhouse.
But, you know, when you're a startup and the mission statement is the way we
described, you're not going to have too many customers calling you.
And so we had really nothing to do.
And so after lunch, we would always have a great lunch.
After lunch, we would go to the arcades and play the Sega, you know, the Sega
Virtua Fighter and Daytona and all those games.
And analyze how they're doing it, trying to figure out how they were doing that.
And so we decided, let's just go to Japan and let's convince Sega to move those
applications into the PC.
And we would start the PC gaming, the 3D gaming industry, partnering with Sega.
That's how NVIDIA started.
Wow.
And so in exchange for them developing their games for our computers in the PC,
we would build a chip for their game console.
That was the partnership.
I build a chip for your game console.
You port the Sega games to us.
And then they paid us, you know, at the time, quite a significant amount of
money to build that game console.
And that was kind of the beginning of NVIDIA getting started.
And we thought we were on our way.
And so I started with a business plan, a mission statement that wasn't possible.
We lucked into the Sega partnership.
We started taking off, started building our game console.
And about a couple years into it, we discovered our first technology didn't
work.
It was a flaw.
And all of the technology ideas that we had, the architecture concepts were
sound.
But the way we were doing computer graphics was exactly backwards.
You know, instead of, I won't bore you with the technology, but instead of
inverse texture mapping, we were doing forward texture mapping.
Instead of triangles, we did curved surfaces.
So other people did it flat.
We did it round.
Other technology, the technology that ultimately won, the technology we use
today, has Z-buffers.
It automatically sorted.
We had an architecture with no Z-buffers.
The application had to sort it.
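For readers unfamiliar with the Z-buffer Jensen just mentioned, here is a minimal, purely illustrative Python sketch of the idea (not NVIDIA's hardware): the renderer keeps, for every pixel, the depth of the nearest surface drawn so far, so surfaces come out correctly layered no matter what order the application submits them in:

```python
import numpy as np

H, W = 4, 4
depth = np.full((H, W), np.inf)        # nearest depth seen so far, per pixel
color = np.zeros((H, W), dtype=int)    # color of that nearest surface

def draw(pixels, z, c):
    # "Rasterize" a flat surface covering the given pixels at depth z with color c.
    for yy, xx in pixels:
        if z < depth[yy, xx]:          # only keep it if it's closer than what's there
            depth[yy, xx] = z
            color[yy, xx] = c

draw([(yy, xx) for yy in range(H) for xx in range(W)], z=10.0, c=1)  # far background
draw([(1, 1), (1, 2), (2, 1), (2, 2)], z=2.0, c=2)                   # nearer square, drawn second
print(color)   # the near square wins only where it overlaps -- no pre-sorting required
```

Without a Z-buffer, as in that first architecture, the application itself had to submit surfaces in back-to-front order to get the same result.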
And so we made three major technology choices, and all three choices were wrong.
Okay.
So this is how incredibly smart we were.
And so in 1995, mid-95, we realized we were going down the wrong path.
Meanwhile, Silicon Valley was packed with 3D graphics startups, because it was the most exciting technology of that time.
And so 3dfx and Rendition were out there, and Silicon Graphics was coming in.
Intel was already in there.
And, you know, gosh, what added up eventually to a hundred different startups
we had to compete against.
Everybody had chosen the right technology approach, and we chose the wrong one.
And so we were the first company to start.
We found ourselves essentially dead last with the wrong answer.
And so the company was in trouble.
And ultimately, we had to make several decisions.
The first decision was: well, if we change now, we will be the last company.
Even if we changed to the technology that we believed to be right, we'd still be dead.
And so that argument — do we change and therefore be dead?
Or do we not change and somehow make this technology work, or go do something completely different?
That question stirred the company strategically, and it was a hard question.
I eventually, you know, advocated for, we don't know what the right strategy is,
but we know what the wrong technology is.
So let's stop doing it the wrong way and let's give ourselves a chance to go
figure out what the strategy is.
The second problem we had was that our company was running out of money.
And I was in a contract with Sega, and I owed them this game console.
And if that contract had been canceled, we'd be dead.
We would have vaporized instantly.
And so I went to Japan and I explained things to the CEO of Sega, Shoichiro Irimajiri — a really great man.
He was the former head of Honda's American operations who went back to Japan to run Sega.
And I explained to him — I guess I was, what, 33 years old.
You know, when I was 33 years old, I still had acne, and I was this super-skinny Chinese kid.
And he was already kind of an elder.
And I went to him and I said, listen, I've got some bad news for you.
First, the technology that we promised you doesn't work.
And second, we shouldn't finish your contract, because we'd waste all your money and you would have something that doesn't work.
And I recommend you find another partner to build your game console.
Whoa.
And I'm terribly sorry that we've set you back on your product roadmap.
And third, even though I'm asking you to let me out of the contract, I still need the money.
Because if you didn't give me the money, we'd vaporize overnight.
And so I explained it to him humbly and honestly. I gave him the background, explained to him why the technology doesn't work —
why we thought it was going to work, and why it doesn't.
And I asked him to convert the last $5 million that they were going to pay to complete the contract —
to give us that money as an investment instead.
And he said, but it's very likely your company will go out of business, even
with my investment.
And it was completely true.
Back then, 1995, $5 million was a lot of money.
It's a lot of money today.
$5 million was a lot of money.
And here's a pile of competitors doing it right.
What are the chances that giving NVIDIA $5 million, that we would develop the
right strategy, that he would get a return on that $5 million or even get it
back?
Zero percent.
You do the math, it's zero percent.
If I were sitting right there, I wouldn't have done it.
$5 million was a mountain of money to Sega at the time.
And so I told him that if you invest that $5 million in us, it is most likely to be lost.
But if you don't invest that money, we'd be out of business and we would have no chance.
And — I don't even know exactly what I said in the end — but I told him that I would understand if he decided not to, but it would mean the world to me if he did.
He went off and thought about it for a couple of days and came back and said,
we'll do it.
Wow.
Did you have a strategy for how to correct what you were doing wrong?
Did you explain that to him?
Oh man, wait until I tell you the rest of it. It's scarier, even scarier.
Oh no.
And so what he decided was simply that Jensen was a young man he liked.
That's it.
Wow.
To this day.
That's nuts.
I was.
Boy, do you owe — the whole world owes that guy.
No doubt.
Right.
He's celebrated today in Japan.
And if he had kept that $5 million investment, I think it'd be worth probably about a trillion dollars today.
I know.
But the moment we went public, they sold it.
They go, wow, that's a miracle.
So they sold it — yeah, they sold it at an NVIDIA valuation of about $300 million.
That was our IPO valuation, $300 million.
Wow.
And so, anyhow, I was incredibly grateful.
And now we had to figure out what to do, because we were still on the wrong strategy, the wrong technology.
So, unfortunately, we had to lay off most of the company.
We shrank the company way back.
All the people working on the game console — you know, we had to shrink it all back.
And then somebody told me, but Jensen, we've never built it this way before.
We've never built it the right way before.
We've only known how to build it the wrong way.
And so nobody in the company knew how to build this supercomputing image generator, this 3D graphics thing that Silicon Graphics did.
And so I said, okay, how hard can it be?
You've got all these 30, 50 companies doing it.
How hard can it be?
And so, luckily, there was a textbook written by that company, Silicon Graphics.
And so I went down to the store — I had 200 bucks in my pocket — and I bought three textbooks, the only three they had, at $60 apiece.
I bought the three textbooks.
I brought them back and gave one to each of the architects, and I said, read that and let's go save the company.
And so they read this textbook and learned from the giant at the time, Silicon Graphics, about how to do 3D graphics.
But the thing that was amazing, and what makes NVIDIA special today, is that the people who are there are able to start from first principles.
Learn the best known art, but re-implement it in a way that's never been done before.
And so when we re-imagined the technology of 3D graphics, we re-imagined it in the way that manifests today as modern 3D graphics.
We really invented modern 3D graphics.
But we learned from the prior art, and we implemented it fundamentally differently.
What did you do that changed it?
Well, ultimately, the simple answer is that the way Silicon Graphics worked, the geometry engine was a bunch of software running on processors.
We took that and eliminated all the generality, the general-purposeness of it, and we reduced it down to the most essential part of 3D graphics.
And we hard-coded it into the chip.
And so, instead of something general purpose, we hard-coded it very specifically into just the limited functionality necessary for video games.
And because we reinvented a whole bunch of stuff, that supercharged the capability of that one little chip.
And our one little chip was generating images as fast as a $1 million image generator.
That was the big breakthrough.
We took a million-dollar thing, and we put it into the graphics card that you
now put into your gaming PC.
And that was our big invention.
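A hedged sketch of the "strip out the generality and hard-wire the essential part" idea: the heart of a fixed-function geometry stage boils down to one matrix multiply per vertex plus a perspective divide. The matrix and triangle below are invented for illustration; this is not NVIDIA's actual pipeline:

```python
import numpy as np

def project(vertices, mvp):
    # vertices: (N, 3) positions; mvp: a 4x4 combined model-view-projection matrix.
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])   # to homogeneous coordinates
    clip = homo @ mvp.T                                         # the one hard-wired transform
    return clip[:, :2] / clip[:, 3:4]                           # perspective divide -> screen x, y

# A toy perspective matrix (w = z, so the divide gives x/z, y/z) and one triangle.
mvp = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
triangle = np.array([[0.0, 0.0, 2.0],
                     [1.0, 0.0, 2.0],
                     [0.0, 1.0, 4.0]])
print(project(triangle, mvp))   # farther vertices land closer to the center of the screen
```

Doing only this narrow job, but doing it in dedicated silicon, is what let one small chip keep pace with a general-purpose, million-dollar image generator.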
And then, of course, the question is, how do you compete against these 30 other companies doing what they were doing?
And there we did several things.
One, instead of building a 3D graphics chip for every 3D graphics application,
we decided to build a 3D graphics chip for one application.
We bet the farm on video games.
The needs of video games are very different than the needs for CAD, needs for
flight simulators.
They're related, but not the same.
And so, we narrowly focused our problem statement so I could reject all of the
other complexities.
And we shrunk it down into this one little focus, and then we supercharged it
for gamers.
And the second thing that we did was we created a whole ecosystem of working with game developers and getting their games ported and adapted to our silicon, so that we could turn what is essentially a technology business into a platform business — a game platform business.
So, you know, GeForce is really the game console inside your PC.
It's, you know, it runs Windows, it runs Excel, it runs PowerPoint, of course,
those are easy things.
But its fundamental purpose was simply to turn your PC into a game console.
So, we were the first technology company to build all of this incredible
technology in service of one audience, gamers.
Now, of course, in 1993, the gaming industry didn't exist.
But by the time that John Carmack came along, and the Doom phenomenon happened,
and then Quake came out, as you know, that entire community, boom, took off.
Do you know where the name Doom came from?
It came from this, there's a scene in the movie, The Color of Money, where Tom
Cruise, who's this elite pool player, shows up at this pool hall, and this
local hustler says, what do you got in the case?
And he opens up this case — he has a special pool cue in there — and he opens it up and goes, Doom.
And that's where it came from.
Is that right?
Yeah, because Carmack said that's what they wanted to do to the gaming industry.
Doom.
That when Doom came out, it would just be, everybody would be like, oh, we're
fucked.
Oh, wow.
This is Doom.
That's awesome.
Isn't that amazing?
That's amazing, yeah.
Because it's the perfect name for the game.
Yeah.
And the name came out of that scene in that movie.
That's right.
Well, and then, of course, Tim Sweeney and Epic Games and the 3D gaming genre
took off.
Yes.
And so, in the beginning there was no gaming industry; we had no choice but to focus the company on one thing, that one thing.
It's a really incredible origin story.
Oh, it's amazing.
It must be, like — when you look back.
Started with a disaster.
That $5 million, that pivot with that conversation with that gentleman, if he
did not agree to that, if he did not like you, what would the world look like
today?
That's crazy.
Wait — then our entire life hung on yet another gentleman.
And so now, here we are. Before GeForce, we built the RIVA 128.
The RIVA 128 saved the company.
It revolutionized computer graphics.
The cost-performance ratio of 3D graphics for gaming was off the charts amazing.
And we were getting ready to ship it.
Get what?
Well, we were building it.
But, as you know, $5 million doesn't last long.
And so every single month, we were drawing down.
You have to design it, build it, prototype it.
Get the silicon back, which costs a lot of money.
Test it with software.
Because without the software testing the chip, you don't know the chip works.
And then you're going to find a bug, probably.
Because every time you test something, you find bugs.
Which means you have to tape it out again.
Which is more time, more money.
And so, we did the math.
There was no chance we were going to survive it.
We didn't have enough time to tape out a chip, send it to a foundry — TSMC — get the silicon back, test it, and send it back out again.
There was no shot, no hope.
And so the math, the spreadsheet, didn't allow us to do that.
And so, I heard about this company.
And this company built this machine.
And this machine is an emulator.
You could take your design, all of the software that describes the chip.
And you could put it into this machine.
And this machine will pretend it's our chip.
So, I don't have to send it to the fab, wait until the fab sends it back, test.
I could have this machine pretend it's our chip.
And I could put all of the software on top of this machine, called an emulator,
and test all of the software on this pretend chip.
And I could fix it all before I send it to the fab.
Whoa.
And if I could do that, when I send it to the fab, it should work.
Nobody knows, but it should work.
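A purely conceptual Python sketch of the emulation idea (the class name, register names, and test are all invented for illustration): a software model stands in for the silicon, the driver-level tests run against that model, and only when everything passes does the design go to the fab:

```python
class ChipModel:
    """Software stand-in for the chip design: same commands, no silicon."""
    def __init__(self):
        self.registers = {}

    def write(self, reg, value):
        self.registers[reg] = value

    def read(self, reg):
        return self.registers.get(reg, 0)

def run_driver_tests(chip) -> bool:
    # The same software that would drive real hardware exercises the model instead.
    chip.write("SCANLINE_START", 0)
    chip.write("SCANLINE_END", 479)
    return chip.read("SCANLINE_END") - chip.read("SCANLINE_START") + 1 == 480

if run_driver_tests(ChipModel()):
    print("software passes against the emulated chip -- safer to commit straight to production")
```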
And so, we came to the conclusion that let's take half of the money we had left
in the bank.
At the time, it was about a million dollars.
Take half of that money and go buy this machine.
So, instead of keeping the money to stay alive, I took half of the money to go
buy this machine.
Well, I called this guy up.
The company was called IKOS.
I called this company up and I said, hey, listen, I heard about this machine.
I'd like to buy one.
And they go, oh, that's terrific, but we're out of business.
I said, what?
You're out of business?
He goes, yeah, we had no customers.
I said, wait, hang on a second.
So, you never made the machine?
They said, no, no, no, we made the machine.
We have one in inventory if you want it, but we're out of business.
So, I bought one out of inventory.
Okay.
After I bought it, they went out of business.
Wow.
I bought it out of inventory.
And on this machine, we put NVIDIA's chip into it and we tested all of the
software on top.
And at this point, we were on fumes.
But we convinced ourselves that chip is going to be great.
And so, I had to call some other gentleman.
So, I called TSMC.
And I told TSMC — well, listen, TSMC is the world's largest foundry today.
At the time, they were just a few hundred million dollars large.
Tiny little company.
Tiny little company.
And I explained to them what we were doing.
I told them I had a lot of customers.
I had one.
You know, Diamond Multimedia.
Probably one of the companies you bought the graphics card from back in the old
days.
And I said, you know, we have a lot of customers and the demand's really great.
And we're going to tape out a chip to you.
And I'd like to go directly to production.
Because I know it works.
And they said, nobody has ever done that before.
Nobody has ever taped out a chip that worked the first time.
And nobody starts out production without looking at it.
But I knew that if I didn't start the production, I'd be out of business
anyways.
And if I could start the production, I might have a chance.
And so, TSMC decided to support me.
And this gentleman is named Morris Chang.
Morris Chang is the father of the foundry industry.
The founder of TSMC.
Really great man.
He decided to support our company.
I explained to them everything.
He decided to support us.
Frankly, probably because they didn't have that many other customers anyhow.
But they were grateful.
And I was immensely grateful.
And as we were starting the production,
Morris flew to the United States.
And he didn't ask me in so many words.
But he asked me a whole lot of questions that were trying to tease out:
do I have any money?
But he didn't directly ask me that, you know.
And so, the truth is that we didn't have all the money.
But we had a strong P.O. from the customer.
And if it didn't work, some wafers would have been lost.
And, you know, I'm not exactly sure what would have happened, but we would have
come short.
It would have been rough.
But they supported us with all of that risk involved.
We launched this chip.
Turns out to have been completely revolutionary.
Knocked the ball out of the park.
We became the fastest growing technology company in history to go from zero to
$1 billion.
It's so wild that you didn't test the chip.
I know.
We tested afterwards, yeah.
We tested afterwards.
Afterwards — but it went into production already.
But by the way, by the way, that methodology that we developed to save the
company is used throughout the world today.
That's amazing.
Yeah.
We changed the whole world's methodology of designing chips, the whole world's
rhythm of designing chips.
We changed everything.
How well did you sleep those days?
It must have been so much stress.
You know, what is that feeling where the world just kind of feels like it's
flying?
You have this, what do you call that feeling?
You can't stop the feeling that everything's moving super fast.
And, you know, you're lying in bed and the world just feels like... and you feel deeply anxious, completely out of control.
I've felt that probably a couple of times in my life.
It was during that time.
Wow.
Yeah.
It was incredible.
What an incredible success.
But I learned a lot.
I learned simple things.
I learned how to develop strategies — you know, our company learned how to develop strategies.
What are winning strategies?
We learned how to create a market.
We created the modern 3D gaming market.
And that exact same skill is how we created the modern AI market.
It's exactly the same skill, exactly the same blueprint.
And we learned how to deal with crisis, how to stay calm, how to think through
things systematically.
We learned how to remove all waste in the company and work from first
principles and doing only the things that are essential.
Everything else is waste because we have no money for it.
To live on fumes at all times.
And the feeling is no different than the feeling I had this morning when I woke up — that you're going to be out of business soon.
You know, the phrase "thirty days from going out of business" I've used for 33 years.
You still feel that?
Oh, yeah.
Oh, yeah.
Really?
Every morning.
Every morning.
But you guys are one of the biggest companies on planet Earth.
But the feeling doesn't change.
Wow.
The sense of vulnerability, the sense of uncertainty, the sense of insecurity,
it doesn't leave you.
That's crazy.
We were, you know, we had nothing.
We had nothing.
We were dealing with giants.
And you still feel that?
Oh, yeah.
Oh, yeah.
Every day.
Every moment.
Do you think that fuels you?
Is that part of the reason why the company is so successful, that you have that
hungry mentality?
That you never rest, you're never sitting on your laurels, you're always on the
edge?
I have a greater drive from not wanting to fail than the drive of wanting to
succeed.
Isn't that like success coaches would tell you that's completely the wrong
psychology?
The world has just heard me say that out loud for the first time.
But it's true.
Well, that's so fascinating.
The fear of failure drives me more than the greed or whatever it is.
Well, ultimately, that's probably a more healthy approach now that I'm thinking
about it.
I'm not ambitious, for example.
I just want to stay alive, Joe.
I want the company to thrive, you know?
I want us to make an impact.
That's interesting.
Yeah.
Well, maybe that's why you're so humble.
Maybe that's what keeps you grounded, you know?
Because with the kind of spectacular success the company's achieved, it would
be easy to get a big head.
No.
Right?
But isn't that interesting?
It's like if you were the guy that your main focus is just success, you
probably would go, well, made it, nailed it, I'm the man.
Drop the mic.
Instead, you wake up, you're like, God, we can't fuck this up.
No, exactly.
Every morning.
Every morning.
No, every moment.
That's crazy.
Before I go to bed.
Well, listen, if I was a major investor in your company, that's who I'd want
running it.
I'd want a guy who's terrified of-
Yeah.
That's why I work seven days a week every moment I'm awake.
You work every moment you're awake?
Every moment I'm awake.
Wow.
I'm thinking about solving a problem.
I'm thinking about-
How long can you keep this up?
I don't know, but it could be next week.
Sounds exhausting.
It is exhausting.
It sounds completely exhausting.
Always in a state of anxiety.
Wow.
Always in a state of anxiety.
Well, kudos to you for admitting that.
I think that's important for a lot of people to hear because there's probably
some young
people out there that are in a similar position to where you were when you were
starting out
that just feel like, oh, those people that have made it, they're just smarter
than me
and they had more opportunities than me and it's just like it was handed to
them
or they're just in the right place at the right time.
Joe, I just described to you somebody who didn't know what was going on,
actually did it wrong.
Yeah.
Yeah.
And the ultimate diving catch like two or three times.
Crazy.
Yeah.
The ultimate diving catch is the perfect way to put it.
Yeah.
It's just like the edge of your glove.
It probably bounced off of somebody's helmet and landed at the edge.
God, that's incredible.
It's incredible, but it's also, it's really cool that you have this perspective,
that you
look at it that way.
Because, you know, a lot of people who have delusions of grandeur, they're, you know, inflated.
And their rewriting of history oftentimes has them being somehow extraordinarily smart — they were geniuses, they knew all along, and they were spot on.
The business plan was exactly what they thought.
Yeah.
They destroyed the competition and, you know, and they emerged victorious.
Meanwhile, you're like, I'm scared every day.
Exactly.
Exactly.
It's so funny.
Oh my God.
That's amazing.
It's so true though.
It's amazing.
It's so true.
It's amazing.
Well, but I think there's nothing inconsistent with being a leader and being
vulnerable.
You know, the company doesn't need me to be a genius who's right all along, right all the time, absolutely certain about what I'm trying to do and what I'm doing.
The company doesn't need that.
The company wants me to succeed.
You know, we started out today talking about President Trump, and I was about to say something.
And listen, he is my president.
He is our president.
We shouldn't, just because it's President Trump, all want him to be wrong.
I think in the United States, we all have to realize he is our president.
We want him to succeed because-
No matter who's president, we should have that attitude.
That's right.
Yeah.
We want him to succeed.
We need to help him succeed because it helps everybody, all of us succeed.
And I'm lucky that I work in a company where I have 40,000 people who want me to succeed.
to succeed.
They want me to succeed, and I can tell.
And every single day, they're all doing their best to help me overcome these challenges, trying to realize what I describe as our strategy.
And if it's somehow wrong or not perfectly right, they tell me so that we can pivot.
And the more vulnerable we are as leaders, the more able other people are to tell you, you know, Jensen, that's not exactly right, or, have you considered this information?
And the more vulnerable we are, the more we're actually able to pivot.
If we cast ourselves as having this superhuman capability, then it's hard for us to pivot strategy, because we were supposed to be right all along.
And so if you're always right, how can you possibly pivot?
Because pivoting requires you to be wrong.
And so I've got no trouble with being wrong.
I just have to make sure that I stay alert, that I reason about things from
first principles
all the time, always break things down to first principles, understand why it's
happening.
Reassess continuously.
The reassessing continuously is kind of partly what causes continuous anxiety,
you know, because
you're asking yourself, were you wrong yesterday?
Are you still right?
Is this the same?
Has that changed?
Has that condition changed?
Is that worse than you thought?
But God, that mindset is perfect for your business, though, because this
business is ever
changing.
All the time.
I've got competition coming from every direction.
So much of it is kind of up in the air.
And you have to invent a future where a hundred variables are included, and
there's no way
you could be right on all of them.
And so you have to surf.
Wow.
You have to surf.
That's a good way to put it.
You have to surf.
Yeah.
You're surfing waves of technology and innovation.
That's right.
You can't predict the waves.
You got to deal with the ones you have.
Wow.
But skill matters.
And I've been doing this for 30-plus years — I'm the longest-running tech CEO in the world.
Is that true?
Congratulations.
That's amazing.
And, you know, people ask me how. Just: one, don't get fired.
That'll end it in a heartbeat.
And then two, don't get bored.
Yeah.
Well, how do you maintain your enthusiasm?
The honest truth is, it's not always enthusiasm.
It's, you know, sometimes it's enthusiasm.
Sometimes it's just good old fashioned fear.
And then sometimes, you know, a healthy dose of frustration.
You know, it's.
Whatever keeps you moving.
Yeah.
Just all the emotions.
I think, you know, CEOs, we have all the emotions, right?
You know?
And probably jacked up to the maximum, because you're kind of feeling it on behalf of the whole company.
I'm feeling it on behalf of everybody at the same time.
And it kind of, you know, gets encapsulated in one person.
And so I have to be mindful of the past.
I have to be mindful of the present.
I've got to be mindful of the future.
And, you know, it's not without emotion.
It's not just a job.
Let's just put it that way.
It doesn't seem like it at all.
I would imagine one of the more difficult aspects of your job currently, now
that the company is massively successful, is anticipating where technology is
headed and where the applications are going to be.
So how do you try to map that out?
Yeah, there's a whole bunch of ways.
And it takes a whole bunch of things.
But let me just start.
You have to be surrounded by amazing people.
And NVIDIA is now — you know, if you look at the large tech companies in the world today, most of them have a business in advertising or social media or, you know, content distribution.
And at the core of it is really fundamental computer science.
And so the company's business is not computers.
The company's business is not technology.
Technology drives the company.
NVIDIA is the only company in the world that's large whose only business is
technology.
We only build technology.
We don't advertise.
The only way that we make money is to create amazing technology and sell it.
And so to be that, to be NVIDIA today, the number one thing is that you're surrounded by the finest computer scientists in the world.
And that's my gift.
My gift is that we've created a company culture, a condition by which the world's greatest computer scientists want to be part of it, because they get to do their life's work and create the next thing — because that's what they want to do.
Maybe they don't want to be in service of another business.
They want to be in service of the technology itself.
And we're the largest firm of its kind in the history of the world.
I know.
It's pretty amazing.
Wow.
And so, one, you know, we have got a great condition.
We have a great culture.
We have great people.
And now the question is, how do you systematically be able to see the future,
stay alert of it, and reduce the likelihood of missing something or being wrong?
And so, there's a lot of different ways you could do that.
For example, we have great partnerships.
We have fundamental research.
We have a great research lab, one of the largest industrial research labs in
the world today.
And we partner with a whole bunch of universities and other scientists.
We do a lot of open collaboration.
And so, I'm constantly working with researchers outside the company.
We have the benefit of having amazing customers.
And so, I have the benefit of working with Elon and, you know, and others in
the industry.
And we have the benefit of being the only pure play technology company that can
serve consumer internet, industrial manufacturing, scientific computing,
healthcare, financial services.
All the industries that we're in, they're all signals to me.
And so, they all have mathematicians and scientists.
And so I have the benefit now of a radar system that is the broadest of any company in the world, working across every single industry, from agriculture to energy to video games.
And so, the ability for us to have this vantage point, one, doing fundamental
research ourselves, and then, two, working with all the great researchers,
working with all the great industries, the feedback system is incredible.
And then, finally, you just have to have a culture of staying super alert.
There's no easy way of being alert, except for paying attention.
I haven't found a single way of being able to stay alert without paying
attention.
And so, you know, I probably read several thousand emails a day.
How?
How do you have the time for that?
I wake up early.
This morning, I was up at four o'clock.
How much do you sleep?
Six, seven hours.
Yeah.
And then, you're up at four, read emails for a few hours before you get going.
That's right, yeah.
Wow.
Every day?
Every single day.
Not one day missed.
Including Thanksgiving, Christmas.
Do you ever take a vacation?
Yeah, but they're – my definition of a vacation is when I'm with my family.
And so, if I'm with my family, I'm very happy.
I don't care where we are.
And you don't work then, or do you work a little?
No, no, I work a lot.
Even, like, if you go on a trip somewhere, you're still working.
Oh, sure.
Oh, sure.
Wow.
Every day?
Every day.
But my kids work every day.
You make me tired just saying this.
My kids work every day.
Both of my kids work at NVIDIA.
They work every day.
Wow.
Yeah, I'm very lucky.
Wow.
Yeah.
It's brutal now because, you know, it used to be just me working every day.
Now, we have three people working every day.
And they want to work with me every day.
And so, it's a lot of work.
Well, you've obviously imparted that ethic into them.
They work incredibly hard.
I mean, it's not –
But my parents work incredibly hard.
Yeah.
I was born with the work gene, the suffering gene.
Well, listen, man.
It has paid off.
What a crazy story.
I mean, it's just – it's really an amazing origin story.
It really – I mean, it has to be kind of surreal to be in the position that
you're in now when you look back at how many times that it could have fallen
apart and humble beginnings.
But, you know, this is – it's a great country.
You know, I'm an immigrant.
My parents sent my older brother and me here first.
We were in Thailand.
I was born in Taiwan.
But my dad had a job in Thailand.
He was a chemical and instrumentation engineer, incredible engineer.
And his job was to go start an oil refinery.
And so, we moved to Thailand, lived in Bangkok.
And in 19 – I guess 1973, 1974 timeframe, you know how Thailand, every so
often, they would just have a coup.
You know, the military would have an uprising.
And all of a sudden, one day, there were tanks and soldiers in the streets.
And my parents thought, you know, it probably isn't safe for the kids to be
here.
And so, they contacted my uncle.
My uncle lives in Tacoma, Washington.
And we had never met him.
And my parents sent us to him.
How old were you?
I was about to turn nine.
And my older brother almost turned 11.
And so, the two of us came to the United States.
And we stayed with our uncle for a little bit while he looked for a school for
us.
And my parents didn't have very much money.
And they'd never been to the United States.
My father was – I'll tell you that story in a second.
And so, my uncle found a school that would accept foreign students and
affordable enough for my parents.
And that school turns out to have been in Oneida, Kentucky — Clay County, Kentucky — the epicenter of the opioid crisis today.
Coal country.
Clay County, Kentucky was the poorest county in America when I showed up.
It is the poorest county in America today.
And so, we went to the school.
It's a great school.
Oneida Baptist Institute.
In a town of a few hundred.
I think it was 600 at the time that we showed up.
No traffic light.
And I think it was 600 today.
It's kind of an amazing feat, actually.
The ability to hold your population at 600 people — it's quite a magical thing, however they did it.
And so, the school had a mission of being an open school for any children who
would like to come.
And what that basically means is that if you're a troubled student, if you have
a troubled family,
if you're, you know, whatever your background, you're welcome to come to Oneida
Baptist Institute,
including kids from international who would like to stay there.
Did you speak English at the time?
Okay.
Yeah.
Okay.
Yeah.
And so, we showed up and my first thought was, gosh, there are a lot of
cigarette butts on the ground.
100% of the kids smoked.
So, right away, you know, this is not a normal school.
Nine-year-olds?
No.
I was the youngest kid.
Okay.
11-year-olds.
My roommate was 17 years old.
Wow.
Yeah.
He just turned 17.
And he was jacked.
And I don't know where he is now.
I know his name, but I don't know where he is now.
But anyways, that night, we got – and the second thing I noticed when you
walk into your dorm room
is there are no drawers and no closet doors, just like a prison.
And there are no locks so that people could check up on you.
And so, I go into my room, and he's 17, and, you know, get ready for bed.
And he had all this tape all over his body, and it turned out he'd been in a knife fight and had been stabbed all over his body.
And these were fresh wounds.
Whoa.
And the other kids were hurt much worse.
And so, he was my roommate, the toughest kid in school, and I was the youngest
kid in school.
So it was a junior high, but they took me anyway, because if I walked about a mile across the Kentucky River on the swinging bridge, on the other side was a middle school that I could go to. I'd go to that school, come back, and then stay in the dorm.
And so, basically, Oneida Baptist Institute was my dorm when I went to this
other school.
My older brother went to the junior high.
And so, we were there for a couple of years.
Every kid had chores.
My older brother's chore was to work on the tobacco farm — you know, they raised tobacco so that they could raise some extra money for the school, kind of like a penitentiary.
Wow.
And my job was just to clean the dorm.
And so, I was nine years old.
I was cleaning toilets for a dorm of 100 boys.
I cleaned more bathrooms than anybody, and I just wished that everybody was a
little bit more careful, you know?
But anyways, I was the youngest kid in school.
My memories of it was really good, but it was a tough town.
Sounds like it.
Yeah, town kids, they all carried, everybody had knives.
Everybody had knives.
Everybody smoked.
Everybody had a Zippo lighter.
I smoked for a week.
Did you?
Oh, yeah, sure.
How old were you?
I was nine, yeah.
When you were nine? You were nine and you tried smoking?
Yeah, I got myself a pack of cigarettes.
Everybody else did.
Did you get sick?
No, I got used to it, you know?
And I learned how to blow smoke rings and, you know, breathe out of my nose,
you know, take it in and out of my nose.
I mean, there were all the different things that you learned.
At nine?
Yeah.
Wow.
You just did it to fit in or it looked cool?
Yeah, because everybody else did it.
Right.
Yeah.
And then I did it for a couple of weeks, I guess.
And I'd just rather have — I had a quarter, you know, a quarter a month or something like that.
I'd rather buy popsicles and Fudgsicles with it.
I was nine, you know.
Right.
I chose, I chose the better path.
Wow.
That was our school.
And then my parents came to the United States two years later and we met them
in Tacoma, Washington.
That's wild.
It was a really crazy experience.
What a strange, formative experience.
Yeah.
Tough kids.
Thailand to one of the poorest places in America, or if not the poorest, as a
nine-year-old.
Yeah, it was my first experience.
By yourself.
Yeah.
With your brother.
Wow.
Yeah.
Yeah.
No, I remember — and what breaks my heart, probably the only thing that really breaks my heart about that experience, was that we didn't have enough money to make, you know, international phone calls every week.
And so my parents gave us this tape deck, this Aiwa tape deck, and a tape.
And so every month we would sit in front of that tape deck, and my older
brother Jeff and I,
the two of us would just tell them what we did the whole month.
Wow.
And we would send that tape by mail.
And my parents would take that tape and record back on top of it and send it
back to us.
Wow.
Could you imagine if that tape still existed — two years of these two kids just describing their first experience with the United States?
Like, I remember telling my parents that I joined the swim team.
My roommate was really buff, and so every day we spent a lot of time in the gym.
And so every night, 100 push-ups, 100 sit-ups, every day in the gym.
So I was nine years old.
I was pretty buff.
And I'm pretty fit.
And so I joined the soccer team.
I joined the swim team.
Because if you join the team, they take you to meets, and then afterwards, you get to go to a nice restaurant.
And that nice restaurant was McDonald's.
Wow.
And I recorded this thing.
I said, Mom and Dad, we went to the most amazing restaurant today.
This whole place is lit up.
It's like the future.
And the food comes in a box.
And the food is incredible.
The hamburger is incredible.
It was McDonald's.
But anyhow, wouldn't it be amazing?
Oh, my God.
Two years.
You've been recording?
Yeah, two years.
Yeah.
What a crazy connection to your parents, too.
Just sending a tape and them sending you one back.
And it's the only way you're communicating for two years?
Yeah.
Wow.
Yeah.
No, my parents are incredible, actually.
They grew up really poor.
And when they came to the United States, they had almost no money.
Probably one of the most impactful memories I have is they came and we were
staying in an apartment complex.
And they had just rented — I guess people still do — rented a bunch of furniture.
And we were messing around.
And we bumped into the coffee table and crushed it.
It was made out of particle board and we crushed it.
And I just still remember the look on my mom's face, you know, because they
didn't have any money and she didn't know how she was going to pay it back.
But anyhow, that kind of tells you how hard it was for them to come here.
But they left everything behind and all they had was their suitcase and the
money they had in their pocket.
And they came to the United States.
How old were they at the time?
Pursued the American dream.
They were in their 40s.
Wow.
Yeah, late 30s.
Pursued the American dream.
This is the American dream.
I'm the first generation of the American dream.
Wow.
Yeah.
It's hard not to love this country.
It's hard not to be romantic about this country.
That is a romantic story.
That's an amazing story.
Yeah.
And my dad found his job literally in the newspaper, you know, the ads.
And he calls people, got a job.
What did he do?
He was a consulting engineer in a consulting firm.
And they helped people build oil refineries, paper mills, and fabs.
And that's what he did.
He's really good at factory design, instrumentation engineer.
And so he's brilliant at that.
And so he did that.
And my mom worked as a maid.
And they found a way to raise us.
Wow.
That's an incredible story, Jensen.
It really is.
All of it.
From your childhood to the perils of NVIDIA almost falling.
It's really incredible, man.
It's a great story.
Yeah.
I've lived a great life.
You really have.
And it's a great story for other people to hear, too.
It really is.
You don't have to go to Ivy League schools to succeed.
This country creates opportunities, has opportunities for all of us.
You do have to strive.
You have to claw your way here.
Yeah.
But if you put in the work, you can succeed.
It's not just hard work, though.
There's a lot of luck and a lot of good decision making.
And the good graces of others.
Yes.
That's really important.
You and I spoke about two people who are very dear to me.
But the list goes on.
The people at NVIDIA who have helped me, many friends that are on the board,
the decisions,
them giving me the opportunity.
Like when we were inventing this new computing approach, I tanked our stock price because we added this thing called CUDA to the chip.
We had this big idea.
But nobody paid for it.
But our cost doubled.
And so we had this graphics chip company.
And we invented GPUs.
We invented programmable shaders.
We invented everything modern computer graphics.
We invented real-time ray tracing.
That's why it went from GTX to RTX.
We invented all this stuff.
But every time we invented something, the market didn't know how to
appreciate it.
But the cost went way up.
And in the case of CUDA that enabled AI, the cost increased a lot.
But we really believed it.
And so if you believe in that future and you don't do anything about it,
you're going to regret it for the rest of your life.
And so I always tell the team, do we believe this or not?
And if you believe it, grounded on first principles, not random hearsay, then
we owe it to ourselves to go pursue it.
If we're the right people to go do it, and it's really, really hard to do,
it's worth doing, and we believe it.
Let's go pursue it.
Well, we pursued it.
We launched the product.
It was exactly like when I launched DGX-1, and the entire audience was like
complete silence.
When I launched CUDA, the audience was complete silence.
No customer wanted it.
Nobody asked for it.
Nobody understood it.
NVIDIA was a public company.
What year was this?
This is, let's see, 2006, 20 years ago.
2005.
Wow.
Our stock price went poof.
I think our valuation went down to like $2 or $3 billion.
From?
From about 12 or something like that.
I crushed it in a very bad way.
What is it now, though?
Yeah, it's higher.
Very humble of you.
It's higher, but it changed the world.
Yeah.
That invention changed the world.
It's an incredible story, Jensen.
It really is.
Thank you.
I like your story.
It's incredible.
My story's not as incredible.
My story's more weird.
You know?
It's much more fortuitous and weird.
Okay.
What are the three most important milestones that led to here?
That's a good question.
What was step one?
I think step one was seeing other people do it.
Step one was in the initial days of podcasting, like in 2009 when I started
podcasting, and it had only been around for a couple of years.
The first was Adam Curry, my good friend, who was the podfather.
He invented podcasting.
And then, you know, I remember Adam Carolla had a show because he had a radio
show.
His radio show got canceled.
And so he decided to just do the same show, but do it on the internet.
And that was pretty revolutionary.
Nobody was doing that.
And then there was the experience that I had had doing different morning radio
shows, like Opie and Anthony in particular, because it was fun.
And we would just get together with a bunch of comedians.
You know, I'd be on the show with like three or four other guys that I knew.
And it was always just, I looked forward to it.
It was just such a good time.
And I said, God, I miss doing that.
It's so fun to do that.
I wish I could do something like that.
And then I saw Tom Green's setup.
Tom Green had a setup in his house.
And he essentially turned his entire house into a television studio.
And he did an internet show from his living room.
He had servers in his house and cables everywhere.
He had to step over cables.
This was like 2007.
I'm like, Tom, this is nuts.
Like this is, and I'm like, you got to figure out a way to make money from this.
I wish everybody on the internet could see your setup.
It's nuts.
I just want to let you guys know that.
It's not just this.
So that was the beginning of it.
It was just seeing other people do it and then saying, all right, let's just
try it.
And then in the beginning days, we just did it on a laptop.
Had a laptop with a webcam and just messed around.
Had a bunch of comedians come in and we would just talk.
Joke around.
And then I did it like once a week.
And then I started doing it twice a week.
And then all of a sudden I was doing it for a year.
And then I was doing it for two years.
Then it was like, oh, it's starting to get a lot of viewers, a lot of listeners.
You know?
And then I just kept doing it.
That's all it is.
I just kept doing it because I enjoyed doing it.
Was there any setback?
No.
No, there was never really a setback.
Really?
No.
It must have been.
It's not the same kind of story.
You're just resilient.
Or you're just tough.
No.
No, no, no.
It wasn't tough or hard.
It was just interesting.
So I just.
You were never once punched in the face.
No, not in the show.
No, not really.
Not doing the show.
You never did something that got big blowback?
Nope.
Not really.
No.
It all just kept growing.
It kept growing.
And the thing stayed the same from the beginning to now.
And the thing is, I enjoy talking to people.
I've always enjoyed talking to interesting people.
I could even tell just when we walked in.
The way you interacted with everybody, not just me.
Yeah.
That's cool.
People are cool.
Yeah, that's cool.
You know, it's an amazing gift to be able to have so many conversations with so
many interesting
people because it changes the way you see the world because you see the world
through
so many different people's eyes.
And so many different people have different perspectives and different
opinions and different philosophies and different life stories.
It's an incredibly enriching and educating experience having so many
conversations with so many
amazing people.
And that's all I started doing and that's all I do now.
Even now, when I book the show, I do it on my phone and I basically go through
this giant
list of emails of all the people that want to be on the show or that request to
be on the
show and then I factor in another list that I have of people that I would like
to get on
the show that I'm interested in.
And I just map it out.
And that's it.
And I go, ooh, I'd like to talk to him.
If it wasn't for President Trump, I wouldn't have been bumped up on that
list.
No, I wanted to talk to you already.
I just think, you know, what you're doing is very fascinating.
I mean, how would I not want to talk to you?
And then today, it proved to be absolutely the right decision.
Well, you know, listen, it's strange to be an immigrant one day going to Oneida
Baptist
Institute with the students that were there.
And then here, NVIDIA is one of the most consequential companies in the history
of companies.
It is a crazy story.
It has to be strange for you.
The journey is strange, and it's very humbling, and I'm very grateful.
It's pretty amazing, man.
Surrounded by amazing people.
You're very fortunate, and you've also, you seem very happy.
And you seem like you're 100% on the right path in this life, you know?
You know, everybody says, you must love your job.
Not every day.
That's not, that's part of the beauty of everything.
Yeah.
Is that there's ups and downs.
That's right.
It's never just like this giant dopamine high.
We leave this impression.
Here's an impression I don't think is healthy.
People who are successful leave the impression often that our job gives us
great joy.
I think largely it does.
That our jobs, we're passionate about our work.
And that passion relates to, it's just so much fun.
I think it largely is.
But it distracts from the fact that a lot of success comes from really,
really hard work.
Yes.
There's long periods of suffering and loneliness and uncertainty and fear and
embarrassment and humiliation.
All of the feelings that we do not love.
That creating something from the ground up, and Elon will tell you something
similar.
It's very difficult to invent something new.
And people don't believe you all the time.
You're humiliated often, disbelieved most of the time.
And so people forget that part of success.
And I don't think it's healthy.
I think it's good that we pass that forward and let people know that it's just
part of the journey.
Yes.
And suffering is part of the journey.
You will appreciate it so much, these horrible feelings that you have when
things are not going so well.
You will appreciate it so much more when they do go well.
Deeply grateful.
Yeah.
Yeah.
Deep, deep pride.
Incredible pride.
Incredible, incredible gratefulness.
And surely incredible memories.
Absolutely.
Jensen, thank you so much for being here.
This was really fun.
I really enjoyed it.
And your story is just absolutely incredible and very inspirational.
And I think it really is the American dream.
It is the American dream.
It really is.
Thank you so much.
Thank you, Joe.
All right.
Bye, everybody.
Bye, everybody.