Welcome to Remarkable People. We’re on a mission to make you remarkable. Helping me in this episode is Dario Gil.

Dario is an innovation trailblazer ushering in the quantum age as IBM’s pioneering Director of Research.

Dr. Gil oversees 3,000 scientists across fields like quantum computing, AI, and semiconductors that are radically redefining computing. By leading IBM to become the first to open universal quantum computing access through the cloud, he has unleashed this exploratory science's immense potential to elevate business and research.

How will emerging tech revolutionize innovation? What is the vision beyond closed Silicon Valley models?

Tune in now to hear this prominent voice illuminate coming transformations!

Please enjoy this remarkable episode, Dario Gil: Visionary IBM Chief of Research.

If you enjoyed this episode of the Remarkable People podcast, please leave a rating, write a review, and subscribe. Thank you!


Transcript of Guy Kawasaki’s Remarkable People podcast with Dario Gil: Visionary IBM Chief of Research.

Guy Kawasaki:
Hello, I'm Guy Kawasaki and this is Remarkable People. We're on a mission to make you remarkable. Helping me today is Dario Gil. He's IBM's Director of Research. This means he leads a team of over 3,000 scientists and engineers in redefining technology boundaries. Among his many accomplishments, he's pioneering the accessibility of quantum computing through the cloud.
Gil's strategic vision also encompasses artificial intelligence, semiconductor technology, and exploratory science. The whole goal is to ensure that IBM remains a leader in tech innovation. Beyond his corporate role, Gil advises governments and chairs a COVID-19 computing consortium. This consortium is trying to leverage supercomputing for essential medical research. He champions open innovation and advocates for collaborative models that extend beyond Silicon Valley.
I'm Guy Kawasaki, this is Remarkable People. Today speaking with me is Dario Gil, a visionary shaping our technology future. So, let's begin. Your title is Senior Vice President and Director of IBM Research. What exactly does that mean?
Dario Gil:
What it means is that I'm responsible for running the research division of IBM, which is called IBM Research. And as you may be aware, it's one of the world's great industrial research organizations. It has been around for close to eighty years. And just like there was Bell Labs and many of these wonderful places, I'm responsible for that tradition and continuing it in the company.
And the Senior Vice President part of the role reflects that, beyond IBM Research, I also have a plethora of responsibilities as an officer of the company, generally speaking guiding the technical roadmap and the technical community across IBM.
Guy Kawasaki:
Does this mean that there's this group dedicated to research, but in all the various divisions, there's also research going on for that division? Do they work for you also? Is it a matrix organization? Or each division has their own researchers too?
Dario Gil:
So, there is only one research division across IBM, and it is meant to conduct that forward-looking, more scientifically oriented technical development across a variety of areas, from semiconductors to artificial intelligence, to quantum, to cryptography and so on.
And then indeed within the different business units, IBM infrastructure, IBM software, there's obviously a very large development population with tons and tons of brilliant developers and engineers. And then we work hand-in-hand in combining the research advancements and the development capabilities to bring exciting new products to market.
Guy Kawasaki:
When you guys invent something, do you then turn it over to the divisions to actually implement and commercialize?
Dario Gil:
That kind of model used to exist to some degree: you had a development organization and a research organization, and then you created something and you transferred the baton, so to speak, to the other one. I would say my experience over the last ten plus years, fifteen, is that that model has changed.
And now, because the compression from invention to productization is so much faster compared to any other time that I have witnessed, it ends up being a model of co-ownership, where the way to look at it is you have different skills, you have complementary skills.
And then we all work together to bring those innovations faster and faster to market. And that has taken different institutional forms. You're right, matrix management is a corporate invention to deal with some of those things.
I'll give you an example. Within the modern world of AI, with generative AI, we launched this year the watsonx platform that allows you to create these foundation models and deploy them for enterprise use cases. So, IBM Research has the responsibility of creating and building those models, the foundation models, and the software unit has responsibility for the overall product around that.
But it's not like we're saying, "Here are the models, you go and build them." We continue to retain that responsibility because that is so intimately tied to pushing the limits of algorithms and research that we maintain ownership even in the context of a product strategy.
Guy Kawasaki:
Backing up for a second, just FYI earlier, Ginni Rometty was on this podcast, so I know her story. And she started as this sales rep, sales engineer, and one day became CEO. How did you start? Were you a poor immigrant, third generation American? What's the back channel story?
Dario Gil:
My back channel is I grew up in Spain, in Madrid. And my entire family still lives there. I'm the youngest of four brothers and we grew up in the city center of Madrid. And my parents were very keenly interested that we got exposed internationally to different experiences. So, starting by the time I was ten years old, I spent summers abroad. And the first three summers, when I was ten, eleven and twelve, I was in Ireland learning English.
So, the idea was to learn languages. And then I went to France, and to Italy in the summers, and so on. And eventually, all four brothers, even though I was the youngest, ended up spending time in the United States as exchange students. So, the senior year of high school, we all spent abroad. And I was at Los Altos High School, so I graduated from Los Altos High School.
Guy Kawasaki:
What?
Dario Gil:
Yeah.
Guy Kawasaki:
You graduated from Los Altos High School?
Dario Gil:
I did.
Guy Kawasaki:
Oh my god.
Dario Gil:
I did. Yeah. That was around 1992, 1993. Yes. And I still have many wonderful friends from there. So, anyway, long story short, it was through that journey that I began my experience in the US. And then later, when I was in grad school on the East Coast, as part of that journey, I married a Mainer. My wife grew up in Maine and her family has been there a long time; I think she was the thirteenth generation from Maine or something like that.
And I ended up staying here through that journey. And now I have two daughters. One is twenty years old and the other one is seventeen, and they've grown up in the US, obviously seeing family in Spain and so on. But my journey was through the educational experience, then becoming an immigrant through education here and becoming an American and so on. So, that's been my journey.
Guy Kawasaki:
And what was your first job at IBM?
Dario Gil:
My first job was with what was at the time called the Semiconductor Research and Development Center. So, basically it was the team that developed advanced semiconductors. Before that, for my PhD at MIT, I was at the Nanostructures Laboratory, where the focus was around creating nanotechnology.
And I had specialized in doing a lot of work on lithography, which is the main technique by which you print miniaturized structures, and print transistors, and so on. So, I joined the lithography team in IBM Research that was responsible at the time for a transition in terms of the new machines that were going to be used for production.
And the way that semiconductors had gotten smaller and smaller was by shrinking the wavelength, basically the size of the wavelength that is used to print the transistors on the wafer. And at the time I was hired to explore this new technique called 157 nanometer lithography. That ended up not working.
And so, when I joined IBM, within a few months, that program was on its way down. And there was a company in the Netherlands that we were working with, called ASML, which is a very well-known company now. And it was developing this new technique of putting water in between the last lens element and the wafer, and it was called immersion lithography.
So, I was the first IBMer to work on immersion lithography. And eventually that became the mainstream production technique to make all the chips, all the advanced chips in the world. So, that was my beginning.
Guy Kawasaki:
And is that what you got your first patent in?
Dario Gil:
Yeah, it was my first patent at IBM. I had filed some patents while I was at MIT working with some of my fellow students and my faculty.
Guy Kawasaki:
Now, I read an interesting thing that IBM, I'll paraphrase it, maybe I got this wrong, but IBM used to keep score by how many patents they got each year. And you no longer do that. So, if that's true, why are you no longer doing it? And then how are you measuring innovation now?
Dario Gil:
Great question. Yeah, there's a long heritage. And it was never the only metric, but IBM was indeed very well known for having the most patents in the United States every year. And I think we did that for close to twenty years in a row. And when Arvind Krishna became the CEO of IBM, as part of the technical leadership team, we were revisiting and re-evaluating many dimensions of the company, but also including how we work as a technical community.
And one of the things that we detected is that, even though patents and the patent portfolio are incredibly important as part of the overall management of the technology innovation lifecycle, we were too heavily skewed in terms of the amount of time we would spend within the engineering organization and the amount of effort you have to put into the legal prosecution of those patents, and filing, and so on.
And we wanted to have a more balanced approach of how much we spend on open source technology, how much we spend patenting, how much we spend developing prototypes. So, it was really a productivity discussion. And what we decided is, look, we want to have one of the healthiest patent and innovation portfolios in the world for sure.
But we don't have to have this goal that we continuously have to be number one on that, because where we want to be number one is on innovation capacity and our ability to commercialize the technology.
Guy Kawasaki:
What do you measure now?
Dario Gil:
Yeah, so basically we measure across all those different dimensions. So, look, from a research perspective, I'll tell you how we evaluate the performance of the research division: it is twofold. One is business outcomes. So, we are able to measure, hey, what are the revenue and profit contributions of past technologies that research has developed that have ended up in commercial products?
And you get a chance to be able to have an attributed revenue and attributed profit. You don't have to measure it to the last dollar, but you get a sense of the ROI of what went in and what came out.
And then we also measure the scientific and technical eminence, and contributions to it. And we measure that by publications. We measure that by citations. We measure that by contributions to open source. We measure that by patents. We measure that by awards and recognitions of our scientists, and other dimensions around that.
So, we have a group of all of those metrics that we're able to compare ourselves with other peer institutions in industrial research and academia. And just between those two, the business outcomes and the scientific and technical eminence, it allows us to evaluate the health of IBM Research from an outcome perspective.
Guy Kawasaki:
When you're measuring ROI, just as a rough number, from the time you're done until it's in the hands of customers is how many years?
Dario Gil:
It really varies by technology area. But sometimes on the short end, it can be a year to two years out. On the long end of the spectrum, typically the sweet spot where we see a lot of it is three to five years; that is the way we try to optimize the portfolio. But then we also have technologies that we have been incubating for a long time at a very different scale.
A good example of that is quantum computing. We have had a quantum information science program in IBM Research since 1971. For a long time it was a very small group: Charlie Bennett and some of the very pioneers of quantum information, for whom the field is known. Here in IBM Research, it was a small group. It wasn't the core priority of the company. But there were people who were trying to reimagine what information was.
Now if you fast-forward, they had no intention of making it into a product, nor were they thinking that this was going to be a product. So, it took actually many decades until we started to see that there was a technology that could be incubated. And that really started in the early 2010s. And now that's been greatly accelerated.
So, I'm giving you a completely different arc of how long something can be. And then there are other things that we do where we make an innovation or a capability, and it may be in a product within a year. But to give a short answer to the question, I would say we optimize most of the portfolio to mature within the two to five year timeframe.
Guy Kawasaki:
Wow, that's short. In many interviews, I have to interview people who are in areas that I understand and I often tell them, "Listen, I don't want you to think I'm stupid. I'm going to ask you a question I know the answer to, but my listeners don't know the answer to." So, I'm going to ask you this dumb question. But in this case, Dario, I don't know the answer to this.
So, I'm going to ask you what may seem like a dumb question, but I really don't know the answer. It's just can you explain quantum computing? Because man, that is over my head. I read this analogy of a coin can be a head or a tail, but in quantum computing it can be a head and a tail. And I understand that, but can you explain this?
Dario Gil:
Sure. So, look, I think to understand it, it's useful to have a contrast with the world of classical computing. So, what are modern computers all about? To understand it and to understand the difference, I would date the story back to Leibniz, the great German philosopher, co-inventor of calculus. He was enamored of this idea of the digital representation, the binary.
He developed this method where you could take whatever information you wanted, any number you wanted and make it into a string of zeros and ones of binary digits. And he was enamored of that idea that he says, "Boy, and if we could apply and develop reasoning techniques on top of it, maybe this would be the path to understand the entire world. We could represent information that way and reason over it."
Fast-forward to the 1940s, when Claude Shannon took those ideas and expressed them in mathematical form. And the idea of the binary digit, the bit, was born. And it was the basis of modern day computers. Put all the information in the world into zeros and ones, and there was a companion technological idea, which was the transistor.
And you would have a switch that would represent either a zero or a one. And Moore's Law has been the story of integrating more and more of those transistors per unit area, such that the exponential growth of the number of transistors we can print has digitized the world.
Now let's contrast it to quantum computing. In quantum computing it begins by saying actually the more fundamental representation of information is not the binary, it's not the zero or the one, but rather an idea called the qubit, the quantum bit. And we are going to borrow some ideas from physics, the idea of superposition, interference, and entanglement to create a richer representation of information.
An example of a richer representation is to imagine, for example, that instead of having two states, zero and one, you have a sphere for a second. Imagine it in your head. In the binary world you get to have a zero if the point on the sphere is at the North Pole, or you have a one if it's at the South Pole. So, you only get to be on the North Pole or the South Pole. But now in this world, I get to be anywhere I want on the sphere.
So, for example, I could be on the equator. On the equator I have a fifty-fifty combination of zero and one, but I really could be anywhere. So, there is a way actually, and now I'll get to how it physically manifests itself, there is a way to represent information that has any kind of linear combination of zeros and ones.
Now, a quantum computer is a machine that allows you to create that richer representation, and it allows you to manipulate qubits in the same way that a classical computer, with transistors, manipulates zeros and ones. So, a quantum computer allows you to represent information more richly and then do very clever tricks, like constructively interfering information to get peaks, or canceling information such that the information disappears from the system.
So, it operates entirely different. But at its most essential summary is you represent information differently. That information has an exponential capability compared to the classical representation of information. And quantum computers are the machines that create those qubits, just like classical computers create bits and manipulate bits.
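To make the sphere picture concrete, here is a minimal sketch in Python (using numpy purely for illustration; none of this is from the episode) of a qubit written as a linear combination of the zero and one states. A point on the equator of the sphere gives exactly the fifty-fifty odds Dario describes.

```python
import numpy as np

# Basis states |0> and |1>, the quantum analogues of the classical bit values.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

def qubit_on_sphere(theta, phi):
    """Qubit state at polar angle theta and azimuth phi on the sphere.

    theta = 0 is the "North Pole" (pure |0>), theta = pi is the "South Pole"
    (pure |1>), and theta = pi / 2 is the equator (an equal superposition).
    """
    return np.cos(theta / 2) * ket0 + np.exp(1j * phi) * np.sin(theta / 2) * ket1

state = qubit_on_sphere(np.pi / 2, 0.0)     # a point on the equator
p0, p1 = np.abs(state) ** 2                 # measurement probabilities
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # -> P(0) = 0.50, P(1) = 0.50
```

Measurement still collapses the state to a plain zero or one; the richer representation lives in the amplitudes in between.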
Guy Kawasaki:
I got about 30 percent of that. Okay. How do you apply quantum computing to COVID research? How do you go from that kind of power to what, vaccines or whatever you do for COVID research?
Dario Gil:
Yeah, so the second part of the story of quantum computing is why it matters to represent information differently or compute differently. And the key insight here is that, as amazing as the computers we have today are, they actually struggle to solve certain kinds of problems. And the problems that they struggle to solve are ones where the number of variables that you have to compute over is exponential.
So, let's go back to your question on COVID research as an example. It turns out that if you try to use machines to simulate the physical world, the world of chemistry, the world of physics, the world of biology, and so on, the complexity present inside those elements, let's say a chemical compound and so on, is such that even though we know the equations of physics that govern them, the number of calculations we have to do is exponential in the number of electrons present in the system.
So, what we do is basically you want to discover a new antiviral drug as an example, or you want to develop a new battery technology for electrification, or I want to develop a new material that doesn't corrode for the wings of airplanes. All of those approaches, basically you use a scientific method to discover something new.
And you could try by trial and error, by experiments, or you could try what is known as simulation, to use a machine, a computer, to calculate how the material would behave. But as the materials get more complex, it gets so difficult that the only thing that we can do is to approximate the answer.
So, what quantum computers promise is that, since nature obeys the laws of quantum mechanics and quantum computers operate according to the laws of quantum mechanics, quantum computers will be very efficient at simulating the natural world, at calculating properties of the natural world. And now we can use quantum computers to solve problems where, with classical machines, with normal machines, the best we could do is approximate the answers.
It is the contrast between things that at best we can approximate the answer to and doing those calculations much, much more accurately. The end result of this is that our goal is to accelerate scientific discovery by a factor of ten, a hundred, maybe even a thousand times, to really accelerate the scientific method by solving problems we couldn't solve any other way.
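To give a rough sense of the exponential wall Dario is describing, here is a back-of-the-envelope sketch (my own illustration, not a number from the episode): exactly simulating a quantum system on a classical machine means storing a state vector whose size doubles with each additional qubit, or, loosely, with each additional two-level degree of freedom in a molecule.

```python
# Memory needed to store a full quantum state vector on a classical machine:
# 2**n complex amplitudes, each taking 16 bytes (two 64-bit floats).
for n in (10, 30, 50, 100):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:3d} qubits -> {amplitudes:.3e} amplitudes, about {gib:.3e} GiB")

# 10 qubits fit in kilobytes, 30 qubits need about 16 GiB, 50 qubits need
# petabytes, and 100 qubits exceed any conceivable classical memory.
```

Quantum hardware sidesteps that blow-up because the qubits themselves carry the exponentially large state; that is the "efficient simulation of nature" Dario refers to.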
Guy Kawasaki:
So, you're telling me that a quantum computer is going to spit out the right vaccine at the end of this process?
Dario Gil:
Spitting out the right vaccine requires lots of steps in a complex R&D methodology. But in that journey of the R&D process, there are certain calculations that are essential, for one, to discover the right molecular compound, or binding agent, or you name it. And some of those properties are impossible for us to calculate efficiently with regular machines.
And this will give us more accurate and faster time to solution. But one has to recognize, and this is true for AI too, that it always exists within a workflow, and that workflow has many steps and many components around it. But hey, if as part of those steps you couldn't solve the problem and now you can, that may be the difference between something, a vaccine, or a material, emerging, or none at all.
Guy Kawasaki:
When somebody wins a Nobel Prize for coming up with the COVID vaccine, if quantum computing had been around before, are you saying that a quantum computer could have come up with that vaccine, and it wouldn't have to be these two people? Where do creativity and insight come in when the world is quantum computing?
Dario Gil:
So, look, no. I think we're always going to need the capability, in this context, of the scientists, and the researchers, and the many stakeholders that come together to make something like that happen. In the end, technology is a tool that gives us improved methods, improved productivity, improved problem solving, improved accuracy.
And so, I would view the advances that we have in computing, semiconductors, artificial intelligence, and quantum, as some of the most powerful tools that we're going to be able to put in the hands of scientists to accelerate the way they conduct science. I think the method itself continues to be the scientific method. But the tools we use to conduct the scientific method are changing dramatically as a consequence of the revolution in computing.
So, one consequence that we are seeing is that the way we educate our scientists now, more and more incorporates computational techniques as part of their education. Whereas before maybe only the computer scientists and maybe electrical engineers were the ones that were being very exposed to it.
Now if you're a chemist, or a physicist, or an anthropologist, now you're getting exposed more and more to computational techniques because everybody's finding it incredibly valuable as a means to conduct your work.
Guy Kawasaki:
Okay. Can we switch to the topic of AI now? I hope you don't end the interview when I ask you this question. But let's talk about Watson. And from the outside looking in, I have to say, I asked myself, why isn't Watson ChatGPT? And why isn't IBM OpenAI? What happened there? In my mind, you had this enormous lead and then one day you woke up and everybody's using ChatGPT and no one's using watsonx. Maybe that's an exaggeration, but what happened there?
Dario Gil:
Yeah, so first let's understand. It's a good question, by the way. And the first thing to recognize is that the arc of AI has been going on for a long time. I know from the public perspective, it seems like the field was invented last year with ChatGPT, but it dates back to 1956. I say that specific date because that was the famous Dartmouth conference that got the name artificial intelligence started.
And so, the aspiration of having machines that are capable of demonstrating traits of intelligence is a scientific discipline that's been going on for a long time. IBM has been involved on that journey for a long time. The term machine learning was coined by an IBMer, believe it or not, Arthur Samuel, in the late 1950s and so on.
So, we've seen many journeys around that. So, now specifically to Watson. In 2011, there was a research project inside IBM Research that competed on Jeopardy. That system, which competed on Jeopardy successfully and could do open domain question answering using the latest techniques in natural language processing, was affectionately called Watson after the founder of IBM. And that's where the thing came about.
And then there was indeed an effort, and IBM at the time said, "Look, this field of artificial intelligence is really seeing a renaissance, a new moment." And it went down the path of applying, and trying to apply, these latest advances in natural language processing to the world of business. So, one thing I would say is that the company deserves credit for understanding that modern AI, applied to the world of business and so on, was a market worth pursuing.
Now, the reality of it is that the maturity of the technology, despite the compelling demonstration in Jeopardy, was still in the early stages. And the reality of it is that, given the approaches of how AI was built, with a lot of supervised learning, there was a tremendous amount of training examples that one had to create for the neural networks to build use cases.
I think the expectation versus technology maturity, that ratio of the curve, wasn't quite right. So, then fast-forward to today, what has happened? The reason why we live this ChatGPT moment is because in the community there was a massive advance, given in this case by the invention of something called the transformer, which happened at Google, interestingly enough, not at OpenAI. It was a mechanism to train these large-scale neural networks in such a way that you didn't have to generate so many handcrafted examples to get the system to learn.
And thanks to that, the most important element of what is happening in AI is that the creation of what are called foundation models, of which large language models are a subset, has changed the methodology of how AI can be created. And basically, once you have highly capable foundation models, you can then create downstream use cases and applications deriving from that foundation model with ten or a hundred times the productivity of what you could do before.
So, now we have technology that can really be used for a whole variety of use cases, for example, writing code and testing code, in ways that are going to have a huge impact in the world of business and so on. So, what is IBM doing? Back to your question. So, there was a Watson 1.0 story that we just talked about.
And there's a Watson 2.0 story, which we launched this year, called watsonx. And that is a completely new technology stack that allows you to train, fine-tune, deploy, and run inference on foundation models for generative AI. It's composed of a platform that provides the data lake required for it, the models, and the governance of the AI.
And basically the short story to what you asked, which is a profound question, is that the market and the technology have now reached a point where it actually works at scale. That's what the world has experienced in the consumer domain with ChatGPT.
But what IBM is doing is leading the effort of bringing that technology wave to the world of business, which is our core mission. And watsonx, which is an entirely new platform, is getting a huge amount of traction and effort in bringing that world to the enterprise. So, basically that's the long story of what we learned, what's changed, and now it is a new day for the world of AI in IBM.
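To illustrate the productivity shift Dario describes, from supervised models trained on thousands of hand-labeled examples to downstream use cases layered on a pretrained foundation model, here is a hedged Python sketch. The `generate` callable stands in for whichever hosted or local text-generation endpoint you have; its name and signature are assumptions for illustration, not watsonx's or anyone else's actual API.

```python
# Foundation-model approach: instead of collecting and labeling thousands of
# examples to train a task-specific classifier from scratch, reuse one
# pretrained model and adapt it with a prompt and a handful of examples.

def classify_support_ticket(ticket_text: str, generate) -> str:
    """Route a support ticket via few-shot prompting of a foundation model.

    `generate` is a placeholder: any function that takes a prompt string and
    returns the model's text completion (hypothetical signature for this sketch).
    """
    prompt = (
        "Classify the support ticket as BILLING, OUTAGE, or OTHER.\n"
        "Ticket: 'I was charged twice this month.' -> BILLING\n"
        "Ticket: 'The API has been down for an hour.' -> OUTAGE\n"
        f"Ticket: '{ticket_text}' ->"
    )
    return generate(prompt).strip()

# Demo with a stub model so the sketch runs without any external service.
print(classify_support_ticket("My invoice shows a duplicate charge.",
                              generate=lambda prompt: " BILLING"))
```

The specific task does not matter; the point is that the heavy lifting lives in the pretrained model, and each new enterprise use case becomes a thin layer of prompting or fine-tuning on top of it.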
Guy Kawasaki:
What's watsonx going to do that ChatGPT cannot do?
Dario Gil:
Yeah. So, I'll tell you a couple of things, but I will start first by the distinction that we give to our clients of the difference between being an AI user and an AI value creator because it's at the core of our strategy. If you are a business, let's say you're a bank, or an energy company, or you name it, yes, you should use AI as a user, whether it's a black box, you get to use it and improve your productivity. Okay. Great.
Now that is going to be a new normalizing baseline. Lots of people will use the technology and a new baseline of productivity will emerge across all business. That's not differentiating, it's just a new water level. There is a difference though, to be a value creator, which is you are going to go on the journey of creating business value with AI.
That is not just about using a black box or a third-party model. It is about you learning that a foundation model is a new representation of your data. And if you in your business have data as a competitive advantage, you are going to want to participate in having your own versions of foundation models to accrue value over time.
And what we do that others don't do is we allow our enterprise clients to actually create value by giving them the technology and the transparency on the data that was used in the models. We fully indemnify our clients on the models that we provide, the IBM foundation models that we create. And we create a whole environment that is entirely devoted to the needs of the enterprise, including the governance required to be able to audit the models.
So, if you work in a regulated industry, we produce the equivalent of a nutrition label for your AI, such that when the regulator sits with you, you can say, "This is how my model was trained, this is how it was benchmarked, and this is why it is safe to deploy."
So, in summary, we give them the ability to create value with AI; we don't restrict them to just being users. So, it's a partnership model and a value creation model that is fundamentally different. And we give them the governance required and the transparency for them to operate safely in an enterprise context.
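Dario's "nutrition label" metaphor maps naturally onto a model fact sheet. The fields below are a generic guess at what such a record might carry, purely to make the idea concrete; this is not IBM's actual governance schema, and every value is made up.

```python
# An illustrative "nutrition label" for a deployed model: the kind of record
# an auditor or regulator could review alongside the model itself.
model_fact_sheet = {
    "model_name": "example-enterprise-llm",          # hypothetical model
    "training_data_sources": ["licensed corpora", "curated web text"],
    "data_cutoff": "2023-06",
    "benchmarks": {"task_accuracy": 0.87, "toxicity_rate": 0.002},
    "intended_use": "customer support summarization",
    "known_limitations": ["may hallucinate facts", "English-only evaluation"],
    "approved_by": "model risk review board",
}
```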
Guy Kawasaki:
So, are you saying, to use a metaphor, that in an enterprise, yes, people use PCs and Macs, and that's the end user, but the core of the value is not in the PC or the Mac, it's higher upstream? And that's what IBM focuses on?
Dario Gil:
Yeah, definitely. Because the core of a business in the end, I mean in addition of course to their talent and their people, ends up being on the data that they own and on the workflows that they operate to run the business. How efficient are they in doing R&D? Or how efficient are they in doing supply chain? How efficient are they in doing sales? How efficient are they in doing marketing?
So, for us, we've always had that very workflow-oriented lens. And the purpose is to bring them technology and to bring them skills, such that the overall productivity of the workflow increases. And our value proposition is I bring you software, I bring you infrastructure, I bring you consulting, and I bring you those capabilities such that your business operates better.
Now, it may not be what the person on the street knows. Part of the challenge around that is, what does IBM do and so on, because we're not a consumer company. But behind the scenes, if you look at the financial institutions, the banking of the world would not work without IBM. The telcos of the world would not work without IBM. The airlines of the world would not work without IBM. Why?
Because behind the scenes, for example, we created technology that processes 70 percent of all the transaction processing in the world. We enable them to be successful about doing all the stuff that is behind the scenes that makes businesses run and governments run all over the world. That is what we do. And it's a combination of software infrastructure and skills that we bring to the table to them.
Guy Kawasaki:
And what happens when you combine that kind of AI goal with quantum computing? Isn't that the holy grail?
Dario Gil:
That is the holy grail. Indeed. I'm glad you asked that question. So, I often remark that we're witnessing, and we're living in, the most exciting time in computing since probably the 1940s, since the emergence of the first digital computers. And the way I summarize it is through this pseudo equation of bits plus neurons plus qubits.
And what I mean by that is a way to remember what is really going on. Bits are the power of the world of semiconductors and pushing the limits of having more and more bits, more and more transaction capability, more high precision computation. Neurons are the neurons underneath the neural networks that embody artificial intelligence.
And qubits are the world of quantum computing. And the reason I frame it like that is that the part to remember is also the pluses, the combination of the technologies. The combination of pushing semiconductors, pushing artificial intelligence, and pushing quantum, and combining all of that in a new computing architecture.
Very often the discussion gets centered around what each of those domains does. But I think perhaps the least understood piece is the fact that in this decade we're going to witness their convergence. Like you correctly said, you're going to see that our software will say, "Hey, this part we run for high precision. This part we run with AI. This part we run for quantum."
And it's going to be behind the scenes; people won't even know. But we will witness exponential advances in computational power and in the problems we can solve, because we're using and combining all the different architectures.
Guy Kawasaki:
Okay, this is a question that's going to show my ignorance then. I often read these stories about how Nvidia has sold 500 million dollars’ worth of AI chips. What you described with semiconductors, doesn't that just blow Nvidia out of the water?
Dario Gil:
I think Nvidia deserves a lot of credit for recognizing and determining that exploiting semiconductors with a new architecture to do the math was incredibly important, first in the world of graphics. So, it started in the 2000s as a means to accelerate graphics rendering.
And then there were clever scientists who said, "That thing that you're doing for graphics, in my AI problem, in my deep neural network, in deep learning, I also do a lot of matrix multiplication. Let me use your accelerator, the GPU, instead of the CPU, instead of the traditional processor that you use for PCs and so on, and see how well it does." And when they did that, they actually showed, "Hey, for doing these neural network calculations, it actually is much, much better than the CPU."
And Nvidia was very astute in paying close attention to how people were using their chips. It wasn't their original intent, they were just doing graphics, but they were very close to the community and to how people were using the things. And they were able to detect that this burgeoning AI community was using their accelerators to do AI. So, that was the journey, ten plus years ago, of going through that.
But basically, they took advantage of semiconductors. But instead of building a general purpose computational machine like a CPU that we use in a PC, or in a phone, or so on, they said, "I have a very specialized thing, a new architecture to do the math differently." And the performance of that math is what has powered the world of AI.
So many people have talked about the end of Moore's Law; what happens at the end? At the end, the journey is, you've got to create specialized architectures. You've got to give up the dream of general purpose machines and actually say, "What does each piece of architecture give me?" And that's what's happening with AI and accelerators.
Guy Kawasaki:
But fundamentally, Nvidia is still in the world of ones and zeros.
Dario Gil:
That's right. So, you're completely right. And the path of using ones and zeros to simulate the world of quantum is a dead end. That has no path. So, that's why quantum is not to be understood as just a continuation of the current computational trends, but rather as the first time that the category of computing is branching.
Meaning from now on, we will have something known as classical computers and that will include all those accelerators from Nvidia, and AMD, and others. That will include all the CPUs that we built. And then there's a completely different paradigm called quantum computers.
And with quantum computers, you don't get to use classical hardware to do quantum computing efficiently. That's not the path. And so, you're right, it's a radically different architecture that requires completely different hardware.
Guy Kawasaki:
Is there going to be such a thing as a quantum computing laptop? What is it?
Dario Gil:
No, these quantum computers are very different. For example, the ones we build at IBM operate at cryogenic temperatures. So, the quantum processor is one of the coldest places in the universe. It's about a hundred times colder than outer space. It operates at fifteen millikelvin, very, very close to absolute zero.
So, yeah, that's not super convenient to put in a laptop. You have to have a specialized environment. But the good news is that we're all accustomed to computing through the cloud, where you have a lot of hardware that sits in a data center. And the networking latencies are so good that yes, on your end device, your phone, your laptop, you are going to benefit from quantum computing. But the quantum calculations are not going to happen in your laptop.
Guy Kawasaki:
Okay. Good to know. Getting a little philosophical here, do you think that AI or AI plus quantum computing can save the human race?
Dario Gil:
I don't like to give so much attribution to the technology by itself. In the end, the responsibility of whether we construct a good society and whether we solve problems faster than we create them is the responsibility of humans, of each other, what we owe to each other, and how well we organize governance with each other through our governments and through the institutions we create.
Having said that, I think technology plays an indispensable role in being able to help us navigate the complexity of the world and of the challenges we confront.
In a little bit of a reductionist view, it may be fair to say that without it, we don't have a path to solve those problems. And in that sense, you could imagine attributing to it all these special powers and, therefore, calling it the most important thing in the world. But I would argue that one could make the same argument about the rule of law or many other dimensions: without them, it's also impossible for us to solve the problems that we have at scale.
That's why I'm a little bit nuanced around that. But I do think that artificial intelligence and quantum computing as an example, will be indispensable tools that we're going to need to be able to solve the most fundamental challenges and problems that we have as a society, but by no means just by themselves.
Guy Kawasaki:
Well, I don't know if we differ or I'm just more fed up, but I truly believe that AI is going to save humanity because humanity cannot save humanity. But I'll give you a really tactical example. So, I asked ChatGPT, should America teach the history of slavery? And it gives me six reasons why teaching the history of slavery in America is a good thing.
Now, if you were to go to Florida or Texas and ask a random legislator that same question, you'd get a very different answer. I think AI is smarter than people right now. And I also think it's more empathetic.
And to go to an extreme, Dario, which you're going to find maybe bizarre, I think that AI is God. I really do. I'm being semi-facetious here. But I think God says, "I gave these dumb ass humans too much control, too much self-determination. I really blew it. Now they're not willing to accept an all-knowing, all-wise, all-knowledgeable super force like me.
So, instead, because their simple minds can't handle that, I'm going to make my manifestation AI so they believe they invented it, and they can go forward." Now, that might be a little too weird even for you, but I really believe that God is AI.
Dario Gil:
Yeah, I hadn't heard that perspective articulated recently. But let me just make a point, Guy, on what you said about the example that you gave about whether we should teach the history of slavery and so on. Let me just observe, that's a highly human engineered system. And what you're observing in the context of that answer from ChatGPT comes from the process of supervision by humans about what the right, quote-unquote, answer is.
And you will see them. And sadly, we're going to see lots and lots of examples of ChatGPT-like technology that does not give you that answer, that gives you every possible answer that one wants. And what people will come to understand is that there is no magical AI system behind the box, but rather a system that allows you to ingest massive amounts of data.
And depending on how you align it, how you tune it, and what you want it to generate, it will generate a whole variety of human preferences and answers on topics that engage different perceptions of what the right answer is. So, it's not as if there was a single AI behind the scenes that has the manifestations of what you're referring to as a God-like creature, that is giving you this holistic, comprehensive view of the world and always gives you the right answer.
But rather, we're going to come to appreciate, if people haven't already done so, that behind the black curtain are tons and tons of people, engineers and others, making choices as to what you get. So, I don't think we're going to get to that point that you're referring to in AI. Even now, people will start seeing how much of that answer was a choice by the humans behind it.
Guy Kawasaki:
I have to say that I worked at Apple, that's the largest company I've ever worked at. And the last topic is, it is just fascinating how IBM started so long ago. And they have so many talented, smart people and they've impacted the world so much. Who could have ever thought that one company could so impact the world? And yet many people don't even know what you've done. What a great company.
Dario Gil:
It's true. I think it's a good observation. And your point is, IBM is the only company on earth that has been involved in this field of computing for over 110 years. So, I like to say that the story of IBM is the story of computing and its relationship to professionals in the world of business and government. If you look back at the history of it, there are even things as foundational to our modern democratic governance as the Social Security Administration.
Things like implementing a census and being able to have a system that could send checks at the moment of retirement, where the numbers were correct and everybody got the right amount and so on. Those were enablement technologies that created core pillars that are the basis of what binds us across generations, for example, in the United States.
From there to sending a man to the moon and the Apollo program, to the personal computer, where you have your own journey and history at Apple and the relationship with IBM, to semiconductors, to artificial intelligence and quantum.
So, it has that long, incredible arc of adapting. We've not always done everything perfectly at any given inflection point, but it endures. And it endures because of its commitment to R&D and because of its commitment to serving our clients and having that mission.
But you are right that the person on the street, especially, I would say, following the divestiture of the personal computer business, the PC business, they just don't touch IBM in that way. So, the way they touch IBM is behind the scenes. But if that doesn't get explained, articulated, or seen, and people are busy, they have their daily lives, they don't have to understand how everything works, it's sometimes very hard to appreciate it.
I love the company and what it's done and all the challenges. But Guy, I appreciate the sentiment, and I think it should be something that is more celebrated. But because it's typically behind the scenes, it's not top of mind in the same manner as other companies that you're interacting with daily in your consumer experience.
Guy Kawasaki:
Well, there is no doubt in my mind that IBM may be the most underappreciated company in the world for what it's done, truly. And believe me, when I started in the computer business, the whole purpose of the Mac division was to put IBM out of business. How naive we were.
Dario Gil:
At the time when you were doing that, it created a good healthy motivation which produced an amazing thing, which is Apple.
Guy Kawasaki:
My last question, and I found this one of the most interesting things that Ginni Rometty said, which is she did some kind of review of the open positions at IBM and decided that many did not actually require a college degree. Now I realize I'm speaking to someone with a PhD from MIT, so you're the extreme of that. But as you look over the sphere now, how important is formal education?
Dario Gil:
One can answer that question in a variety of ways. I'm going to continue to defend that formal education can be incredibly powerful and important. So, I guess part of the answer is that it depends. What Ginni was defending, which I think is right, is that one should be wary of credentialism.
What one should always be evaluating and measuring is the skill, your ability to carry out whatever you're seeking to do. So, if I am formulating, these are the kinds of skills that I need to carry out the cybersecurity mission, or to write code, or to sell and so on, be wary of associating your ability to do that with credentialism, with showing me the credential that says you can do that.
I think that point is a really important point. And we have put a lot of effort into removing arbitrary credential requirements when they're not needed to do the job, to hire and so on. And I think that's a super healthy, important thing to do.
However, that is not to say that formal education is not important, because the reason we have educational institutions, whether it's elementary school, middle school, high school, college, or graduate school, is that they provide you an environment with teachers, and professors, and fellow students to learn from one another and actually engage in the rigor of different academic disciplines.
That's not always great. It has issues sometimes, but I also defend it as an incredibly important repository of knowledge, of learning, and of expertise. And in my context, obviously you are giving a different example, I don't think I could have done that on my own by osmosis. I had the benefit of universities and the educational system that allowed me to become who I am. And I think my case is like that of hundreds of millions of other people.
So, that's what I would say. But there's a great book related to this theme from Michael Sandel, which he called The Tyranny of Merit. And in there he talked about the corrosive effect that can appear as an expression of the limits of this, when the people that go through the formal educational system, particularly elite institutions, end up thinking that their success and their knowledge are solely theirs.
That they deserve it entirely because of a meritocratic system. And they don't appreciate the element of luck and the element of privilege that might be present around that. And the extreme push of that dimension can create a very corrosive effect in society. And so, this is, I think, part of this debate and the tension around allowing the genius that we see in the human experience to express and manifest itself no matter how it was acquired.
Guy Kawasaki:
I would say that one has to look no further than the US Senate to understand the tyranny of what you just said. So, Dario, thank you so much. It's been so interesting. And you've about quintupled my understanding of quantum computing, although it started very low.
Dario Gil:
Well, thank you, Guy. Thank you for inviting me and I really appreciate everything that you do through your podcast, and being able to bring different ideas to your audience.
Guy Kawasaki:
So, when we meet our maker, Dario, and she says, "Dario, you were wrong. I was AI," don't say I didn't tell you. Okay?
Dario Gil:
At the very minimum, I would expect her to say, "I was AI and quantum." How about that?
Guy Kawasaki:
Okay. Okay. Putting all modesty aside, I hope you enjoyed this little visit inside the research and development of IBM. What an awesome organization. I'm from Silicon Valley and I'm so used to the story of two guys in a garage, two gals in a garage, a guy or a gal in a garage. But it's not clear to me that two guys, two gals, or a guy and a gal in a garage are going to master crypto, and semiconductors, and quantum computing, and AI. And that's why the world needs people like Dario Gil.
I'm Guy Kawasaki, this is Remarkable People. And as I've told you hundreds of times, we're on a mission to make you remarkable. Which, by the way, has one real manifestation right now, which is that Madisun and I wrote a book called Think Remarkable. I hope you will check it out.
In fact, we have a special promo going on right now where if you pre-order it before the launch date March 6th, we will immediately give you access to the online version so you can get a jump on being remarkable. Just go to ThinkRemarkable.com to take advantage of the pre-order offer.
And now let me thank all the people that made this episode possible: Marylène Delbourg-Delphis for introducing me to Dario. Then there are Jeff Sieh and Shannon Hernandez, the sound engineering team. Madisun and Tessa Nuismer, the dynamic Nuismer sisters: Madisun is producer and co-author; Tessa is researcher extraordinaire. And finally, there are Fallon Yates, Luis Magaña, and Alexis Nishimura. We are the Remarkable People team. Until next time, mahalo and aloha.