NVIDIA Corporation (NASDAQ:NVDA) Bank of America 2022 Global Technology Conference Transcript June 7, 2022 11:00 AM ET
Executives
Jen-Hsun Huang – Co-Founder and CEO
Analysts
Vivek Arya – Bank of America Securities
Vivek Arya
So while everyone is settling in — good morning, everyone. I'm Vivek Arya. I lead the Semiconductor Equipment Research team here at Bank of America Securities. And before we get started with the session, NVIDIA just asked me to say that today's discussion contains some forward-looking statements, and investors are advised to read NVIDIA's filed SEC reports for risks and uncertainties facing the business.
So with that, really delighted and honored to have Jen-Hsun Huang, the CEO and Co-Founder of NVIDIA, with us. Jen-Hsun needs very little introduction, but suffice to say that NVIDIA under Jen-Hsun's leadership has been an industry pioneer in pushing the boundaries of artificial intelligence, gaming, cloud computing, robotics; I can go on and on. Really delighted to have Jen-Hsun with us sharing his perspective. So, warm welcome.
Jen-Hsun Huang
Thank you, Vivek. It’s great to be here. Great to see all of you.
Vivek Arya
Yeah. So this is actually our first in-person conference after three years. So it’s…
Jen-Hsun Huang
My first conference in three years. That’s a lot of you.
Question-and-Answer Session
Q – Vivek Arya
Yes. So maybe Jen-Hsun, let’s start with the high level. So in the field of AI, why is it an important problem to solve? Why is it a hard problem to solve? And why do you think the GPU is the right way to solve it? Because we see a number of other approaches, whether it is custom silicon, right, whether it’s FPGAs, right? There are other companies making GPUs also. So why is it an important problem? Why is it a hard problem? And why do you think your approach is the best one?
Jen-Hsun Huang
Is that the one and only question? Because that's going to take 40 minutes. That's an excellent question. It's a very important question. First of all, machine learning: artificial intelligence is a computer that writes software itself. It writes software that no humans can write. It looks for patterns and relationships in data, and it creates a model that can then infer and make predictions from data that it sees in the future, that it hasn't seen before.
And so if you think about what I just said, the characteristics of machine learning, that it's a computer that writes software by itself, writes software no humans can, and it can make predictions about the future, you've got to ask yourself: how important is that, how would you apply it, and what kinds of problems can you now finally solve for the very first time? Because we can't write these types of software, there are no first-principles mathematics, no first-principles equations to make that prediction. For example, there are no Maxwell's laws, no Newton's laws, no Schrödinger equations that can solve problems at the scale that we can with machine learning. Maybe it's because it's multi-physics; maybe thermodynamics and fluid dynamics have to come together to solve a problem, and it is impossible for us to have a simple numerical answer for it. And so for the very first time, machine learning gives us the opportunity to solve those problems.
The implications for three large domains, I'd characterize it as three large domains, are utterly profound, and you're seeing the benefits of it. One is, of course, information automation, which we call IT. For the vast majority of the time that we've all been in the industry, it has been about retrieval of information.
If you look at your data center, the vast majority of your data centers are a bunch of computers connected to storage. It is about retrieval. It's about storing files, retrieving files, sharing files, modifying files.
And for the very first time, we can now take that data and infer a model from it, create a predictive model from it. And this AI has the ability to go into that data and figure out what the predictive features are. It's no different than an equation of force, where you know that mass and acceleration are the predictive features; it's no different than Maxwell's equations having predictive features.
Inside your company there's a whole bunch of data, and there's no way for you to figure out what the predictive features are. Finally, machine learning comes along and it figures out what the predictive features are all by itself, and it can create a model that can then make predictions. So that's information automation.
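The "predictive features" idea can be sketched in a few lines. This is a purely hypothetical illustration, not anything from NVIDIA's stack: a least-squares fit recovering the single predictive parameter (mass) in F = m·a from observed data.

```python
# Hypothetical sketch: recovering the predictive parameter in F = m * a
# from (acceleration, force) observations with a least-squares fit.

def fit_mass(samples):
    """Least-squares slope through the origin: m = sum(a*F) / sum(a*a)."""
    num = sum(a * f for a, f in samples)
    den = sum(a * a for a, _ in samples)
    return num / den

# Observations generated by F = 2.0 * a (i.e., the object's mass is 2.0 kg).
observations = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
print(fit_mass(observations))  # 2.0
```

A learned model does the same thing at far larger scale, except no one hands it the equation; it has to discover which features are predictive at all.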
The second field, which is very important, is science automation. Some people have now made the claim, which goes too far, that because of AI, because of machine learning, it's the end of the scientific method. I think the scientific method is sound and will continue to exist, but it will be augmented by machine learning.
And the reason for that is, you're going to observe the world and use AI to figure out how to predict the future. In the past, scientists used thought experiments. Einstein, for example, didn't observe anything; it was all based completely on mathematics and thought experiments. Sometimes it's through observation.
And so you could say that when a scientist makes observations, intuitive observations, and then figures out what the numerical first-principles methods are, building upon previous science, that method is now going to be augmented at scale. So I don't think it's the end of the scientific method; the scientific method is going to get boosted. It's called physics-ML, physics machine learning.
The third area of great impact, and you can almost look at the world and break it down into these three areas and see some really profound work being done, is this: for the very first time, we have machines that can write software that can control really, really complicated things. For example, through sensors, infer or perceive the environment, reason about the context, reason about the environment and its mission and its goals, and then come up with a set of plans.
What I just described, perception, reasoning and planning, is the foundation of intelligence; it is the foundation of robotics. And so for the very first time, because we have this new form of software, we are going to be able to automate industries, not just information, and we can talk about these three areas.
So, first of all, the profound impact of machine learning is fundamentally that. Second, why is it hard? Well, first of all, with the computational method of machine learning, there are two computers that have to be built, two basic processes. One is a computer that takes a whole bunch of data and finds the predictive features, the predictive patterns and relationships.
The relationships can be over space, for example, one pixel to another pixel in computer vision. They can be over time, one sound versus another sound in speech recognition. Or they can be over time and space, video.
And the relationships could be in the frequency domain, for example, FFTs. They could be in physical domains. They could be in information spatial domains. And so there are a lot of different dimensions that you could learn from. So I think the first part is you have to create a computer that can sufficiently learn, from all these different types of data it's presented, a predictive model.
That's hard, because the number of predictive features could be in the hundreds of millions, whereas F = ma has two variables. In the case of much artificial intelligence work, for example, if you just think about how you predict certain things, the number of modalities of information that you have to bring in, and the dimensionality of the information that you bring in, is utterly gigantic.
And you have to figure out how to take all of that information in and create an architectural model that can learn and predict from that data set without being overfit. Meaning, you gave it one example of a fruit, that fruit was an orange, and now it thinks the orange is the only fruit.
So it can't be overfit on the one hand, and on the other, the size of the data implies something about the size of the model, which implies something about the computer. ImageNet went from a few million images to companies now training on hundreds of terabytes, moving to tens of petabytes of data. And so the size of the data, the modality of the data, the dimensionality of the data, and the model that you want to create to predict from that data are proportional. That's why it's hard. It's the ultimate high performance computing problem.
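The "one orange" overfitting failure can be made concrete with a toy sketch (purely illustrative, not real training code): a model that memorizes its single training example generalizes to nothing.

```python
# Toy illustration of overfitting: a classifier that memorizes one example
# and therefore answers "orange" no matter what fruit it is shown.

class MemorizingClassifier:
    def __init__(self):
        self.seen = {}

    def train(self, features, label):
        self.seen[features] = label

    def predict(self, features):
        # If it has never seen these features, fall back to the only
        # label it knows: the hallmark of an overfit model.
        return self.seen.get(features, next(iter(self.seen.values())))

model = MemorizingClassifier()
model.train(("round", "orange-colored"), "orange")
print(model.predict(("round", "yellow")))  # orange (wrong: it's a lemon)
```

Avoiding this at the scale of petabyte data sets and hundred-billion-parameter models is what makes the training computer a high performance computing problem.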
Then the second problem is, how do you now deploy that model into the world to make predictions? In the case of mobile devices, that model is running in the cloud. Almost everything that all of you do every day, whether it's search or shopping or video, playing TikTok or whatever it is, short videos, long videos, everything has a recommender system behind it. It is the single most important economic engine in the world today; it is unquestionably the most valuable piece of software the world's ever discovered. It's worth trillions of dollars. And that recommender system is running in the cloud.
On the other hand, there's another AI model that is developed in the cloud in the same way I just described. However, it has to be deployed at the edge. A perfect example of that would be a self-driving car that has an artificial intelligence model that takes all of the sensor input, makes sense of what it sees through LiDARs, radars or surround cameras, creates a world model from it, localizes everything around it, localizes itself within it, and then reasons about what it should do based on the mission that it has.
And so now that computer is at the edge. Notice the difference: the cloud computer doing inference, or prediction, makes a recommendation for you every time you click, and you might be clicking once every second, which in computer time is a very long time. On the other hand, you have an edge device, like a self-driving car, that has to make predictions completely in real time. If it doesn't make a prediction in real time, all the time, something terrible could happen. And so I've just described the two ends, and how the computing model is radically different.
And then lastly, why GPUs? There are several reasons why NVIDIA GPUs. First of all, NVIDIA's GPU is not a graphics chip. About 20 years ago, we started the journey of making it a general-purpose parallel accelerator, and for the parallel accelerator we created a new programming model called CUDA, which is the most successful programming language the world's ever seen. And you could argue it is the only parallel programming model the world's ever seen.
It took us some 20 years to make it happen, and today it is used in just about every field of science. It's offered by every computer maker, every cloud maker. It's used everywhere from computer graphics and image processing, to quantum physics, to quantum chemistry, to machine learning, to robotics. It's the most popular robotics platform.
And so, anyway, why us? The first part is the technology. The technology is obviously very hard: the ability to not just run one or two threads of execution on a CPU, but to run, orchestrate and manage tens of thousands of threads in one GPU. And in the case of a data center, where one application is running across an entire data center, there are hundreds of millions of threads being orchestrated by this one scheduler. You just have to imagine what kind of a scheduler, what kind of a programming model, can take a problem, break it down into hundreds of millions of little tasks and then orchestrate all of that. So the technology is hard.
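The decompose-and-orchestrate pattern he describes can be sketched at toy scale. This is a hypothetical Python illustration (real GPU scheduling happens in hardware and CUDA, at vastly larger scale): split one job into many small tasks, run them in parallel, and combine the partial results.

```python
# Toy analogue of the scheduling problem: one job broken into many small
# tasks, executed in parallel, with the partial results combined at the end.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(x * x for x in chunk)

data = list(range(10_000))
# Break the problem into 10 independent tasks of 1,000 elements each.
chunks = [data[i:i + 1_000] for i in range(0, len(data), 1_000)]

with ThreadPoolExecutor(max_workers=8) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(x * x for x in data))  # True
```

A GPU scheduler is doing the same decomposition with tens of thousands of hardware threads per chip, and a data-center-scale job with hundreds of millions of tasks.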
The second is that machine learning is a complicated computing problem. It's complicated at the algorithm level. It's complicated at the compiler level. Remember, if you look at a neural network, it looks like a compute graph. Well, it is a compute graph. It's a giant compute graph; we're coming up on 530 billion nodes. There are only 7 billion people.
So our compute graph called Megatron has 530 billion nodes, and those nodes have parameters in them, and that size of software doesn't fit in any normal computer. It needs a DGX computer to fit.
And so, number one, how do you write that software, and number two, how do you run that software? The entire computing stack is hard. The computing architecture is hard. Just imagine: we're trying to write a piece of software that has all the characteristics I just described, has data from all these different modalities; you have to ingest the data, bring it all into system memory, and operate on it to find relationships across everything.
And so the computation model of the system, if you look at our systems, is a complicated CPU problem, GPU problem, memory problem, networking problem, system bus problem, distributed computing problem, storage problem. It's a problem everywhere. And so we reinvented the entire stack, from the chip to the system, to the system software, to compilers, graph analyzers and graph optimizers, all the way up to the algorithms themselves.
And so I think the answer to your incredibly hard question is that machine learning is the most impactful computing science problem we've ever taken on. It has tremendous and profound impact on all the industries that we've mentioned. But you can't do it by just designing a chip; you can only do this well by reinventing the whole computing stack, and that's what we've been working on for the last 10 years.
Vivek Arya
Jen-Hsun, where do you think we are in that adoption curve? Because are there parts of the market that you think are starting to get mature, where you might face more competitors? So if you could help us think through where we are in the adoption curve. For example, what we do is look at supercomputers, right, the Top500 list every six months, and the number of accelerated machines there is now close to a third of all the machines. Is that the right way to think about where we are in the adoption curve, looking at what supercomputers are doing? And then where are we on that same curve for, let's say, hyperscalers or enterprises?
Jen-Hsun Huang
Yeah. That's an indicator. But here's the way I would look at it. There are four major data center classes that all of us know, and there are two new ones that are coming out. Can you guys hear me? The gentleman back there told me to do this. Am I okay?
Vivek Arya
Yes.
Jen-Hsun Huang
All right. Thank you. I follow instructions really well. The four data center classes emerged and came into the world in this way. The first data center was the supercomputing center, right, supercomputing centers. The second is the enterprise computing data center, IBM. The third, hyperscalers, the invention of Hadoop in storage and computing, Yahoo, okay.
And then the next one is cloud computing, which is Amazon. Does that make sense? So I just described the early days of each one of the four data centers. They exist today, they're all quite large, and each one of them added to the previous data center, because each has a different need; it served a different purpose.
There are two new data centers that are coming, and you can tell they are different from the other four. The first new one is what I call an AI Factory. An AI Factory does one thing, just like a factory does one thing: it could be manufacturing cars, or refining oil, or making chemicals, or making plastic, whatever it is.
And so that factory does one thing. Data, in this case, comes in, it gets refined, and what comes out is a model. Data is coming in continuously. It's being refined continuously, 24x7. And models are being updated continuously. It does one thing.
In fact, look at one of the most popular applications in the world, and potentially the most disruptive new internet application in the world, TikTok. There's a factory that is refining its AI model continuously. It's gigantic, utterly gigantic, potentially one of the largest data centers in the world. We're building many, many of those all over the world. In my opinion, just as there were 115,000 large factories in the times of the industrial revolution, now you're going to see 150,000 giant factories whose job is just to refine data and create models: AI Factories. We're in the beginning part of that.
If you look across all the companies that are doing things, ask yourself: is this a service, is this an application, that has a continuous ingestion of data and a continuous output of models? Well, we have one. NVIDIA runs some of the largest industrial supercomputers in the world, and they're AI Factories, completely revolutionizing AV.
We ingest data from a fleet of cars. We're continuously processing petabytes and petabytes of data, and what comes out is an AI model for self-driving cars. We're doing that in a whole lot of different fields, and so that's AI Factories.
And then the last data center, the other new data center, is what I described earlier at the edge. Every single factory, every single warehouse, retail stores, cities, public places, cars, robots, shuttles, they're all going to have little data centers inside. They're all going to be orchestrated by Kubernetes, all orchestrated from afar. They'll all run containers; you're going to OTA new containers to them, and all of them are going to work together as a fleet. The fleet is going to generate the fleet's memory. That memory is going to be constructed into some virtual world model, that virtual world model will be updated continuously, and that loop will just sit there and run continuously. That's a factory. Okay, so you've got the AI Factory, and then you have the edge data center. These two data centers are brand new. We are in the early phases of their growth.
In the case of the hyperscalers, let me come back to your supercomputing question. 30% of the world’s supercomputers are now accelerated…
Vivek Arya
Top 500.
Jen-Hsun Huang
…in the Top500 are now accelerated. That's the installed base. 80% of the new systems are NVIDIA accelerated. And so if you look at the installed base number, it is 30%. If you look at the brand-new systems created this year, it's about 80%, so over time that 30% will get larger and larger.
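The installed-base math is simple to sketch. Only the 30% and 80% figures come from the conversation; the annual replacement rate below is an invented assumption for illustration.

```python
# How an 80% share of new systems pulls a 30% installed-base share upward,
# assuming (hypothetically) that 10% of the installed base turns over per year.
share = 0.30        # accelerated share of the installed base today
new_share = 0.80    # accelerated share of newly built systems
turnover = 0.10     # assumed annual replacement rate (illustrative)

for year in range(10):
    share = share * (1 - turnover) + new_share * turnover

print(round(share, 2))  # 0.63 after 10 years, converging toward 0.80
```

Whatever the true turnover rate, the installed-base share converges toward the new-system share, which is the "larger and larger" dynamic he is pointing at.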
In the case of supercomputing, it's actually fairly hard: it's easy to get into the early part of it, but it's hard to move the rest of it. And the reason for that is, if you look at the world's supercomputing centers and the applications they're running, it goes from quantum physics, to quantum chemistry, to astrophysics, to molecular dynamics, to healthcare and life sciences, physical sciences, weather simulation.
And so the tail of algorithms is really, really long, thousands of applications. That's why, for certain supercomputing centers, you can move fast, because they're the pioneering ones; for the vast majority of the Top500, it takes a long time, and we've been at it for 15 years, right? And so now, after 15 years, we are at 80%.
Now, in the case of hyperscalers, that's a little faster. I think that every single hyperscaler will be GPU accelerated, or will have some kind of accelerator in their computers. And the reason for that is that the vast majority of hyperscalers write their own software. We contribute a lot of software to open source, we contribute a lot of software to them, and they have large IT teams, computer science teams, that can develop the software.
So when something new comes along, they can adopt it. For example, the two major drivers: NVIDIA's data center business is largely focused on acceleration, largely focused on machine learning. Our data center business is strong; it was strong last quarter, and it is going to be strong next quarter. And the reason for that is that we're in an early adoption phase of machine learning across all of these data centers.
They can absolutely measure their earnings growth from investing in NVIDIA's GPUs. And the reason for that is that all of them have the ability to do this thing called A/B testing. They have a digital twin; there's a digital twin of all of us in every single cloud service.
And whenever they train a new model, they can measure whether you clicked on something more with the new model than you did with the last model. And because they know the click-through rate, and can predict from the click-through rate the purchasing rate or engagement rate or ad payments, because that algorithm is so clear-cut and so well understood, they literally have the ability to model the impact of certain new enhancements to the AI on future growth, and so they are very focused on it.
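The A/B measurement loop described here amounts to a click-through-rate comparison; the numbers below are invented purely for illustration.

```python
# Hypothetical A/B test: serve two recommender variants, compare
# click-through rates, and compute the lift of the new model.
def click_through_rate(clicks, impressions):
    return clicks / impressions

ctr_old = click_through_rate(clicks=120, impressions=10_000)  # current model
ctr_new = click_through_rate(clicks=150, impressions=10_000)  # candidate model

lift = (ctr_new - ctr_old) / ctr_old
print(f"{lift:.0%}")  # 25%: the candidate wins the A/B test
```

Because click-through correlates with purchases, engagement and ad revenue, that single measured lift translates directly into a revenue forecast, which is why the investment case is so clear-cut for hyperscalers.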
They want to enhance the quality of the service. They want to enhance the engagement rate. And at the core of this is a model called the Deep Learning Recommender System, DLRM. It is the single most important AI model framework in the world. It takes a whole bunch of deep learning models to create the predictive features, and then it's just a giant, giant factory. This is the most valuable computer science project in the world today.
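At its simplest core, a recommender scores items for a user with learned embeddings. Real DLRM-class systems are vastly larger and add deep networks over dense and categorical features, so treat this as a toy sketch with made-up vectors:

```python
# Toy recommender core: rank items by the dot product of a user embedding
# with each item embedding (all embeddings here are invented).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

user = [0.9, 0.1]                # hypothetical learned user embedding
items = {
    "video_a": [1.0, 0.0],       # hypothetical item embeddings
    "video_b": [0.0, 1.0],
}

best = max(items, key=lambda name: dot(user, items[name]))
print(best)  # video_a
```

Scale that scoring step to billions of users and items, refreshed continuously, and you get the "giant factory" he describes.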
The second one that's most important is natural language understanding, and what is known as a large language model. If you ever get a chance, look up LLM on the internet. The LLM is a very, very important thing, probably one of the greatest breakthroughs in computer science in the last three years, and it has very, very important implications for the future of machine learning and AI, and how machines learn. Imagine a machine that doesn't have to learn a new task at all but has common sense; it's called zero-shot learning. There's a whole bunch of new AI technologies.
Those two areas are driving just enormous investments by cloud service providers. Each one of the data centers has its own adoption rate; it's hard to generalize across all of them. But here's my prediction. I predict that you'll still have data retrieval systems inside your company, and there will continue to be enterprise data centers like today, but every single company, ours has and every other company will, too, will want to invest in an intelligence-generation data center, and that intelligence data center will be 100% GPU accelerated.
Vivek Arya
Got it. Jen-Hsun, what do you see as the next phase for NVIDIA? Because at the last GTC, right, and other events, you have described the move into software and subscriptions. How do you see that evolving for NVIDIA over the next few years? Is it additive to your business? Is it something you already do but would just be putting under a formal umbrella, or do you think it's a brand new growth opportunity for NVIDIA?
Jen-Hsun Huang
You can characterize everything that we're doing, you can characterize our strategy, in this way. The first thing that we're doing is, we have to reimagine the computing system from top to bottom. We call that full stack innovation: chips, systems, system software, algorithms and libraries. So that's full stack.
The second thing is, of any accelerators at all, we're the only platform in the world, aside from CPUs, and CPUs just do it very, very slowly. In the case of acceleration, we're the only platform in the world that does end-to-end machine learning: from ingestion of the data, from the actual query of the data from a database to the processing of it, called ETL. If any of you are in the field of data science, ETL is fully accelerated with NVIDIA, whether it's RAPIDS or Spark. And then we go from the training part of it to inference.
We are end-to-end. We're the only platform that can train and do inference on any model that's created. There's a very important contest called MLPerf. We're the only company that finishes every test, every time, on training, and the number of tests is quite enormous; it's harder than the SAT. We're the only one that submits a result for every one of the tests: for data center, for edge, for training and for inference, every single model, every single time. We're the only one that does it; no one else even comes close. So we're end-to-end, we're full stack, we run any model, and that's our first mission: to reinvent computer science.
The second mission is to put our computing architecture anywhere that people want to do computing. I just mentioned there are six different classes of data centers, and they all have different stacks, different needs, different networking, different storage, different provisioning. We're the only company that has the architecture across all six, okay. So that's the second mission.
The third mission is to invest in the reinvention of today's information technology automation, but also to build a foundation for the largest of all the opportunities, which I think is going to be industrial automation, putting AI literally everywhere. Everything that moves will be automated; there's no question about it. It'll be safer, it'll be easier to manage. And the thing that we're working on now that relates to that is Omniverse.
We also want to make it possible for companies, whether or not they have internal computer science and IT organizations, or large engineering organizations like the clouds or big enterprises, to adopt AI. Inside the company, the way we describe it is democratizing AI. We want to put AI in the hands of enterprises: healthcare companies, financial services companies, retail companies, large logistics companies, automotive companies, transportation companies, you name it. We want to put AI in everybody's hands.
The only way to do that is to take software that we otherwise open source, put on GitHub or provide as source to CSPs, and package it up into a licensable product. Because most companies don't have the ability to cobble all of that complexity together, from the algorithm level to the system software level, to the networking and storage level; it's just too much. It needs to be multi-tenant, it needs to scale out, and it needs to be secure. It's just too much software to do. And so we package all that up into NVIDIA AI, we package all that up into NVIDIA Omniverse, and we have an Enterprise License.
The Enterprise License is $1,000 per node. And for us, there are 25,000 enterprises around the world already using NVIDIA's technology for AI. We can now offer them an enterprise software license so that they can get direct support from us, access to our expertise to learn how to use it, and maintenance and support.
And so that licensable product is a new product of ours. It's off to a great start. I think it's going to be, in my estimation, probably one of the largest enterprise software products in the world. It has the opportunity to ride NVIDIA's go-to-market, so we don't have to build up a large enterprise sales force, because it will go to market on the computing platforms that we already sell. And so I'm very excited about NVIDIA AI.
Vivek Arya
Got it. And since we have a lot of investors in the audience, I can't resist asking a slightly shorter-term question.
Jen-Hsun Huang
Yeah.
Vivek Arya
Which is that, there seems to be this conflict where the semiconductor industry, right, sounds very strong…
Jen-Hsun Huang
Yeah.
Vivek Arya
… that demand is not the problem, supply is the only challenge, right, because of the lockdowns or other issues. Whereas the broader market, right, seems to be implying we are headed into a big slowdown. So what's on your dashboard? How do you perceive the demand environment, and what kind of risks do you foresee over the next four to six quarters?
Jen-Hsun Huang
If the slowdown results in a loosening of the supply chain, that's good. Our strategy is to grow faster than the economy could be impacted. Of course, China and Russia have an impact on our consumer products, the Consumer Gaming business. It's impacted our Q2 by about $400 million. China is a significant market, and Russia is a meaningful market, for our Gaming business. However, Gaming remains solid, even in the face of China and Russia. Q1 sell-through grew year-over-year over last year, which was a really fantastic year, and so Gaming sell-through remains solid.
We are working on the single greatest opportunity in the history of computing. Artificial intelligence has been the holy grail of computer science for a very long time. For the very first time, we know how to, in specific areas, specific skills and tasks to automate, achieve superhuman levels, not to mention achieve global scale, because of cloud computing. The combination of machine learning and cloud computing is really quite tremendous.
We've now succeeded in creating AIs for information automation, recommender systems and language understanding, as I mentioned. In the physical sciences, we invented a new physics model called FNO that can be used for weather prediction and another that can be used for quantum chemistry, so we're making tremendous breakthroughs in physics-ML and revolutionizing science. And then, of course, there's all of the work that we're doing with Omniverse and robotics for industrial automation.
And so with that work, I believe, and the foundation of our end-to-end, full stack capability, we can bring AI to, and democratize it for, all of the industries I mentioned. And with a company of our scale and our footprint, and the reputation for being very good at this, I think we have an opportunity for years of growth ahead.
Vivek Arya
Okay. I know I won't be able to go through the other 20 questions, and the next follow…
Jen-Hsun Huang
Well, that'll teach you to ask the ultimate first question, whose first question is why, why and why? What is the meaning of life?
Vivek Arya
So maybe just in the last few minutes, Jen-Hsun: you recently launched the Grace CPU. If I'm an x86 server CPU vendor, should I be scared, or should I think, well, they're only going after a niche of the market? What are your ambitions and plans over the longer term? Because I remember meeting the first Arm server company, and they're no longer around. So many people have tried; why do you think you will be successful this time, and what does the ambition look like?
Jen-Hsun Huang
First, you can only make a computing architecture succeed if you have a software ecosystem, period. It's all about the software. It's not about the chip. There is a really important question about why we do what we do, so let me explain.
If we were a component maker of CPUs, memories, networking chips, storage chips, WiFi chips, USB chips, if we were a component maker or a chip maker, it wouldn't matter at all that we have software. It wouldn't matter at all that we are full stack. And the reason for that is that it's industry standard. WiFi is WiFi, 802.11, right? A video chip, AV1. You name it, USB 3. Industry standard is industry standard; x86 is an industry standard.
If you build an Arm SoC for mobile devices and embedded systems, you're good to go; the ecosystem is there. However, if you want to build a new chip for a new market that the architecture has never been in, you have no hope without an ecosystem. So that's number one. If you build an Arm chip for data centers, you have no hope unless you have the infrastructure and the ecosystem.
Number two, why are we building a CPU? We buy a lot of x86s. We have great partnerships with Intel and AMD. For the Hopper generation, I selected Sapphire Rapids to be the CPU for NVIDIA Hopper; Sapphire Rapids has excellent single-threaded performance, and we're qualifying it for hyperscalers all over the world, qualifying it for data centers all over the world, qualifying it for our own servers, our own DGX, qualifying it for our own supercomputers.
And so we partner with the ecosystem, and I buy everything that I can. As a practice, I buy everything I can. And the reason for that is that I've got smart engineers who want to invent the future, and the last thing I'm going to do is squander their time and squander their life's work on repeating somebody else's work. That is just one of the core values of our company.
The reason the world's great computer scientists want to come work at NVIDIA is that we choose work for them that is singular. We choose work for them that has never been done before. So Grace is going to be an amazing CPU, and it's not like anything that's ever been built before.
It has the benefit of two things. One, it's designed for a new class of applications that emerged in the last two or three years and has proven to be really, really impactful. I mentioned two of them: recommender systems and large language models. These two applications have such giant data spaces that want to be accelerated that, unless you do something new, you're just going to have lots and lots of bottlenecks, or it will just cost too much. And so Grace is going to help solve that.
Grace has the advantage that in every single application domain we go into, we have the full stack; we have the whole ecosystem lined up, whether it's data analytics or machine learning or cloud gaming or Omniverse digital twin simulations. In all of the spaces we're going to take Grace into, we own the whole stack, so we have the opportunity to create the market for it.
And so, as you know, there are chips that are industry standard, and I think those are all terrific. What we try to do is build platforms that open new markets, whether it's the market we recently opened up in operational research, or the work we're doing in quantum chemistry and quantum physics, and of course robotics, industrial automation and AI, and things like that. In each one of these, we create the whole stack so that we can open markets, and those markets are important. We're really passionate about it, we're very good at it, and that's what really makes our company special.
Vivek Arya
Great. Terrific. Thank you, Jen-Hsun. Really appreciate your time. Really appreciate your insights. Thanks everyone for joining.
Source: seekingalpha.com