Hello, this is Guy Kawasaki. Welcome to the Remarkable People podcast. I meet with Steve Wolfram every ten years or so. It takes that long to recover from interactions with him because his brain operates at approximately ten times the speed of mine. He attended Eton and left without completing a degree.
He attended Oxford and left without completing a degree. Academic misfit? I don't think so. After all, he then went to the California Institute of Technology and got a PhD in particle physics in a year. All this happened by the time he was twenty.
Wolfram, by the way, is the youngest MacArthur award winner, at the tender age of 21. But did you know that Wolfram tried to revolutionize the game of cricket? Keep listening and you'll hear how. If I explained all the things he's accomplished in math and physics, it would take longer than the interview part of this podcast.
If you're into math or physics, you may have used a software application called Mathematica. I met him because of this program. I was one of Apple's Macintosh software evangelists, and my job was to help Macintosh developers like Wolfram.
You are probably using his computational knowledge engine--AKA "search engine" for us mortals. It's called Wolfram|Alpha, and it's used by Bing, DuckDuckGo, Siri, and Alexa. If you've seen the science fiction movie Arrival, Wolfram and his son helped create the alien language for the movie.
We met at my house when he was on tour for his latest book, Adventures of a Computational Explorer. I didn't anticipate meeting him at my house, so I had to rush and wash the dishes so he didn't think we lived like slobs. This was going to be the first and probably last time a MacArthur award winner would be in our house.
I'm Guy Kawasaki. This is the Remarkable People podcast, and now here's Steve Wolfram.
Guy Kawasaki: Tell me about growing up in England in the 60’s. What was it like? Your memories...?
Steve Wolfram: Yeah, I mean, I don't know whether I was a "typical" kid growing up in the '60s in England. I mean, I got really interested in space because space was the sort of thing that was happening at that time, and that was a very American-oriented thing.
I mean, in England at that time, the U.S. seemed like a pretty far-off place. But I think--at any given time in history, there's sort of a most exciting thing that's going on, and in the 1960s that was space. So I was, like--fifty years ago now--following the Apollo eleven landing and all this kind of thing.
I got interested from that in sort of, "How does this all work?" and that got me interested in physics. So I kind of started reading books about physics and so on, and I discovered this amazing fact that you could just go to a library and find all these books and start learning stuff.
And there wasn't really any constraint. I mean, I went to good schools, kind of top schools in England.
Guy Kawasaki: Yeah, I'd say!
Steve Wolfram: Yeah, its fame has gone up and down over time. Now it's back to being famous again because all the Prime Ministers are coming from there and things.
Looking back, I did pretty well in school, so to speak, in the sense that I was probably, like, maybe even the "top kid" in terms of the scholarships and things I had got. I was sort of the "top kid," or among the top few kids in the country, in the end, but I didn't really recognize that at the time.
And I was mostly just interested in learning stuff on my own and kind of spent a lot of my time learning about physics and doing things related to physics.
Guy Kawasaki: Any sports? Any? I mean, were you just the nerd?
Steve Wolfram: Uh, I didn't do sports. I actually had elaborate schemes for avoiding doing sports. I mean, for example, cricket was a big thing, and I, a few times, kind of ended up playing cricket. And I discovered that cricket has this thing called overs, when the whole sort of field is reflected and people sort of change their positions, perhaps because they're getting so damn bored standing around in the field.
I discovered that there were these positions that were sort of invariants with respect to those over changes--you could just stay in one place and just hang out there. I remember the one time a friend of mine got me to be in some sort of, I don't know, some cricket team-type thing.
I discovered that, in cricket, you're supposed to bowl the ball in this crazy overarm way where it's very hard to get it to aim correctly. But if you just roll the ball along the ground and line it up, you can actually get it to go more or less in the right direction.
And so I did that once. And the person who was batting, the ball just slid underneath their bat and got them out. And they were like, “That…you can't do that.” It's like, “Is it against the rules to throw the ball on the ground? No, no it's not.”
But this person said, with great, sort of, emotion, "But it's not cricket." That's a phrase that's commonly used: it's not cricket. I actually got to hear it in real life, in a relevant setting. Just once.
Guy Kawasaki: So you were applying physics from the very beginning, even in cricket.
Steve Wolfram: Yes. Yes.
Guy Kawasaki: So I read a little tidbit that you were having difficulty learning arithmetic. Can that possibly be true?
Steve Wolfram: Oh, yeah. I'm terrible at arithmetic. I just found it kind of boring. And in one of these kinds of lessons about education, so to speak--when I was, like, seven or something, there was always this kind of game of who can do arithmetic facts.
And I discovered that there's only one fact you needed to know to win that game most of the time, which is that seven times eight is fifty-six, because that was the one fact nobody else would know. But I never learnt the other ones. And I actually kind of--I have a decent memory, so I remember roughly when I learnt, like, six times nine. I think I learned it in my forties.
Guy Kawasaki: Come on…
Steve Wolfram: Yeah, I didn’t think I needed to know that!
Guy Kawasaki: Come on! Steve Wolfram couldn't memorize math? Well, not math, but arithmetic?
Steve Wolfram: Arithmetic. Yeah, never found it interesting. Yes, arithmetic.
Guy Kawasaki: Yes, arithmetic.
Steve Wolfram: Yeah, I never found it terribly interesting. And it's just like, occasionally you'd need to work out: what's six times nine? Okay, I can figure it out. It takes a few seconds to figure it out. I don't really have a great excuse. It's just something that I never found that interesting.
Guy Kawasaki: This is obviously before Wolfram|Alpha.
Steve Wolfram: Well, yeah, but you see, there's a causal relationship, because I was not keen on doing math, but yet I was really interested in physics, and to do physics you have to do a bunch of math. And so I was like, "How can I do the physics I want to do without having to do all this boring math calculation that I don't want to do?" And so that got me interested in, "How do I use computers to do this?"
And the kind of funny thing now is that, from me as a kid having sort of the thought, "I don't want to do this," I ended up spending decades building tools that have kind of got to the point where, for the whole world, it's like, "You don't really have to do this anymore." So it's kind of nice.
Guy Kawasaki: I love it. So do you think that maybe we should teach physics first, then math? Is that possible?
Steve Wolfram: I don't think physics is the thing. I think computation is the thing that is really, kind of--it's the paradigm of today's world. I mean, just as a few hundred years ago it was a big deal when people realized that you could use math to figure out stuff about the world.
And that's what led to modern physics, modern engineering, things like this. The big thing in today's world is that we can use computation to figure stuff out, and it turns out you can teach the ideas of computation, which is different from programming--we can talk about that later.
But these sort of ideas of computation, you can teach, and it's very nice because it's very, kind of, self-learnable, and it doesn't have the feature that math has where somebody says, “Oh, what is seven times eight?” And you say, “Oh, fifty-five. Oh no, that's not right. Okay, it's fifty-six.”
Okay, you kind of just have to be told what's right, whereas if you're doing things computationally and you're using a computer, you tell the computer, "Do this." The computer does something totally crazy. You can see for yourself that something happened that was wrong, and it isn't some teacher telling you--it's something that you can see for yourself, and you kind of have the interaction yourself.
I think that teaching computation is a great way into teaching, kind of, sort of, systematic thinking and so on. And, in fact, if you learn a certain amount about computation, a bunch of the things that people say, "Oh, that's so abstract. That's so hard to understand," about math become really quite easy to understand, because you have a sort of concrete foundation for actually thinking about the things and exploring the things and so on.
Guy Kawasaki: I bet people listening to this, and I am too, are wondering: How do you define computation then? Just to make sure we're on the same page.
Steve Wolfram: So I mean, it's: What is computational thinking? As far as I'm concerned, it's organizing your thoughts clearly enough that you can explain them to a sufficiently smart computer.
Guy Kawasaki: Okay.
Steve Wolfram: So that means--well, like, here's a type of problem that's sort of a computational thinking problem. Okay. So let's say you're given a point on the earth, given its latitude and longitude, and you're asked: you're going to make a map, and you're going to pick a default zoom level. You know that term, okay?
So if the thing lands in the middle of the Pacific Ocean, a default zoom level of a mile is pretty stupid--it's just a bunch of blue ocean there. If it lands in the middle of Manhattan, a mile, or maybe less than a mile, might be a pretty good default zoom. So the question is, how do you figure that out?
You might say, "Well, let's look at the density of people around that place," or "Let's look at how many features there are on the map around that place." These are things that you can kind of think about computationally, and then you're kind of defining, "Well, what do I actually want to know?"
Do I want to know the density of features? Okay. How do I define the density of features? Well, I can say that's, I don't know, something like the number of geometric primitives that occur in that region of the map, or the compression of the map that you can get, or something like this.
And that's computational thinking; it's figuring that stuff out. It turns out--it's sort of a little hobby--I end up interacting a bunch with kids, talking about these kinds of things, and kids are pretty good at this stuff. You have to teach them the kind of language to communicate that to an actual computer to get it to do it.
But this kind of seems like common sense, trying to organize one's thoughts in a way that could be explained to a sufficiently smart computer. That's something people find--different people do it in different ways, but it's something people are sort of intrinsically able to do.
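The zoom-level heuristic in this exchange can be sketched in code. This is a hypothetical illustration--the feature representation, thresholds, and zoom values are invented for the sketch, not taken from any actual mapping product:

```python
# Hypothetical sketch of the default-zoom heuristic discussed above.
# "Features" are just (lat, lon) points; thresholds and zoom levels
# are invented for illustration.

def feature_density(features, lat, lon, radius_deg=0.05):
    """Count map features inside a small box around (lat, lon)."""
    return sum(
        abs(f_lat - lat) <= radius_deg and abs(f_lon - lon) <= radius_deg
        for f_lat, f_lon in features
    )

def default_zoom(features, lat, lon):
    """More features nearby means zoom in closer; open ocean means zoom out."""
    density = feature_density(features, lat, lon)
    if density == 0:      # middle of the Pacific: show a wide view
        return 4
    elif density < 10:    # sparse countryside
        return 9
    else:                 # dense area like Manhattan: roughly a mile across
        return 14

# A dense cluster of features versus empty ocean
manhattan = [(40.78 + i * 0.001, -73.97 + j * 0.001)
             for i in range(5) for j in range(5)]
print(default_zoom(manhattan, 40.78, -73.97))  # prints 14
print(default_zoom(manhattan, 0.0, -160.0))    # prints 4
```

A real version would, as Wolfram suggests, measure something like the number of geometric primitives in the rendered region or its compressibility, rather than raw point counts.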
Now, one of the problems with, sort of, traditional math in the abstract form is it's very kind of cold. It's very unclear what's going on. You're just told it works this way.
X plus one is equal to one plus X. Okay, maybe somebody can prove that that's true, but it doesn't feel connected to anything that one can normally think about. And I think that this whole area of computation is a way into that for math.
One of the things I find talking to kids is I'll say, “You learned a bunch of math, so where did you use that math in your, sort of, general life and times?” They’ll think: “Well, actually, I've only used it in the math classes.” That's kind of a bad thing, right?
Guy Kawasaki: Yeah.
Steve Wolfram: But then you start talking about, well, how can you use computation? And for every kind of area, there's kind of a computational X that you can talk about.
So it might be computational art history, or it might be, I don't know, computational magic or something, or it might be computational marketing, whatever. These are all things that one can use the kind of paradigm of computation on, and they're things that people engage with much more, because they connect with the things that people actually think about.
Guy Kawasaki: But what if you're totally dependent on computers? You can't do seven times eight and get fifty-six without a computer. That's what a skeptic would say, right?
Steve Wolfram: Yeah. And we're totally dependent on lots of stuff in the modern world. I mean, the one thing that's really advanced in human history is, kind of, technology and the amount of automation that we have.
And there are plenty of things that your typical kid these days couldn’t do. Couldn't drive a stick shift car, probably, in the U.S., right? I know my four kids, I think one of them can drive a stick shift car. I think these are things where, it's sort of an interesting thing in education because more and more is known in the world.
And so you might say, “How can we possibly educate people because there's so much more known? I mean, how can it only take twelve years, or something, to educate people?” But the reason is because we are abstracting more and more. You don't need to know all the details of every possible case of this-or-that because there's a, sort of, general principle that you can learn about that.
And so it's the same thing with math; math is an example of that. I mean, the concepts of math worth learning are a matter of how to think about the world. Math is also, as a field, probably the single most developed kind of intellectual area, the one which has had the most kind of layers of work done on it.
So modern math is built on hundreds of years of, kind of, intellectual development, in a way that's more of a kind of tower than pretty much any other field, and it's an impressive achievement of our civilization. It's not what people learn about in elementary math, but if you want to learn, sort of, intellectual history, it's kind of an important area to understand. That's just not what typically is taught.
Guy Kawasaki: This is completely going down a rabbit hole, but do you know the story of Joshua Bell playing the violin in the Metro in Washington D.C.?
Steve Wolfram: I do not.
Guy Kawasaki: So Joshua Bell, the violinist. They sort of dress him up as a homeless person, put him in the Washington Metro, and he's playing--obviously, Joshua Bell is Joshua Bell--and they watch what happens.
Did people stop? Did they listen? Did they give him money? Nobody. And the lesson is, if he were in a concert hall, everybody would be standing in line, paying hundreds or thousands of dollars to listen to him. But in the Metro, no one can put two and two together.
They can't judge his music without the context. The reason why I tell you is because what you should do is disguise yourself and go apply for a job at Google. And then when Google gives you one of these computational kind of questions, you can just smoke the answer because you're Steve Wolfram! I mean, what a great answer that would be!
Steve Wolfram: I'm not a big believer, I have to say, in the assessing of people, because I've now been running my company for thirty-three years--more than half my life. My wife often reminds me of that. After you've been a CEO for more than half your life, it has all kinds of terrible effects.
But I think one of the things I've managed to do is accumulate wonderfully talented people, and I have to say that sort of test--"Can you run this algorithm on a whiteboard or something?"--no. Bad idea.
Guy Kawasaki: What's a good idea?
Steve Wolfram: You just talk to people about what they know about, and my principle is: if I'm talking to somebody, if I spend time doing some interview and at the end of it I can't answer the question--what will this person do in the job that I'm imagining they would do?--if I'm still mystified, if I still don't really understand the person, then I won't hire that person.
It's like, if I think I understand them and I can see how they'll actually work in what I want them to do, then that's a good sign for me.
Guy Kawasaki: Fair enough. So a few more questions about your youth. So you're at Eton--you're at the top of everything. Who were your heroes at the time?
Steve Wolfram: That is an interesting question. I wasn't really a very hero-oriented character. No, I didn't really have heroes. I wanted to be a physicist at that time, so I knew about a bunch of the famous physicists of then and the famous physicists of "before then," so to speak.
I started meeting those people by the time I was fourteen, fifteen years old. And I think as soon as you start meeting people, the whole sort of concept of a distant hero kind of disappears.
It was just like, these are people--some of them seem to know what they're talking about; they seem smart. Some of them didn't seem so smart. I suppose it's one of those deficits. But no, I didn't really have a particular hero--I also didn't really have kind of a role model.
I just wanted to, sort of, generically be a physicist. And that was kind of a generic role model, so to speak.
Guy Kawasaki: So at twenty, you get a PhD in physics. At twenty-one, you win a MacArthur award. I can’t even wrap my mind around it. So what is that like? I mean, at twenty-one, you've accomplished what…?
Steve Wolfram: Yeah, I mean, it was kind of fun, getting my--in retrospect, I should have made the minor effort that it would have taken to get my PhD while I was still a teenager so I could keep saying for the rest of my life, “Yes, I had a PhD when I was a teenager,” but it was kind of, for me at that time, I was very focused on just: I want to do science; I want to figure stuff out.
So it was like, let me get to the point where I can just do that. I don't want to be messing around, taking classes, which I never really did, it was those kinds of things. I just want to go and do science because that's what I'm interested in.
I think some people on the outside were kind of like, “Oh gosh, what an awkward situation to be in. You're getting all these awards and things. You're young,” et cetera, et cetera, et cetera, but I wasn't taking these awards terribly seriously. I mean, it wasn't like I was thinking, “Oh gosh…”
So one of the traps, I think, for people is they do well early on and then they think, "Oh, there are these huge expectations." They've got to do this and that and the other. I think my own expectations--and perhaps even, I would say, my own opinion of myself--were always higher, I think, than those that would be attributed to me from the outside world.
Well, it was one of these things where, as I say--at that time I got my PhD and then I'm like, "Okay, I'd had this goal from when I was, like, ten years old or something, to be a physicist. Okay, I'm twenty years old now. I'm a physicist. Great."
So then it's a question of sort of the “what next?” And so I kind of started thinking, “Okay, I'm going to make this longer term plan. What do I want to be doing?” And so the first thing that I realized is I need, sort of, better tools, these computational computer tools to do the things I wanted.
So I'm like, "Okay, how am I going to get these tools?" Well, I was interacting with the various groups that had built, sort of, experimental versions of these things, but eventually I decided I just had to build the stuff myself.
So within a couple of weeks of doing my PhD, I kind of got into designing this big computer system that was a forerunner of things that I've done more recently. And then I was like, "Okay, I've got to, sort of, officially learn computer science," and so on, which was a lot easier in those days because there was a lot less known.
It's just, "keep going, doing things." I mean, I wasn't really thinking, "Oh, it's so cool that I'm in this place at this time." I mean…okay, I did things like getting "doctor" on my credit card, which I still have! Back in those days, it was much rarer.
And so people--you'd go and check in for a flight or something and the person would say, “I've got this ailment. Can you tell me, can you help me with this?” And it's like, “Sorry, wrong kind of doctor.”
Guy Kawasaki: Did you call up American Express or Visa and say, “put my title”? How does that work?
Steve Wolfram: I think you just fill it out on some form. I'm sure that's what I did. I haven't thought about it since, so to speak. At this point now, if I'm in some particular business setting, the real kiss of death is if somebody refers to me as "professor"--then I know that they're absolutely not taking me seriously.
Guy Kawasaki: Why is that?
Steve Wolfram: Because it's like, I've spent some large part of my life actually building stuff in the software industry, and so it's like…
Guy Kawasaki: As opposed to just studying it or teaching it.
Steve Wolfram: If I'm in some random business meeting with some CEO of some company and they refer to me as professor, it's like, “This is dead,” because it's a--they think I'm some crazy, intellectual, academic nerd, not somebody who actually builds software that people use, type-thing.
Guy Kawasaki: And makes money.
Steve Wolfram: Oh, yeah.
Guy Kawasaki: And makes money. Yes. So, can we fast forward a little to Mathematica? Would you say that it is a product? Is it a theory? Is it a philosophy? What is it?
Steve Wolfram: Well, Mathematica is very much a product. Okay. I think the thing that Mathematica is based on is this thing called Wolfram Language. That's more of a thing, so to speak, which gets deployed in different products.
So Wolfram Language is what I've basically been working on for, well, at least a third of a century. And kind of the goal is to sort of encapsulate as much kind of computational intelligence and as much kind of computational knowledge about the world as possible into this language that we humans can use to express ourselves, and that we can explain to computers what to do with, so to speak.
And so I view it, in the long view of history, as the kind of computational language that I've spent all this time developing. It's sort of an attempt to have a definite notation for talking about computation.
So back a long time ago--talking about math again--400 years ago or something, long before we were around, if you were doing math, you were writing it out in words. There wasn't a notation. There were no plus signs, no equals signs, things like this.
And it was not easy to communicate systematically about math in those days. And then mathematical notation got invented, and that led to algebra and calculus and all of these kinds of mathematical sciences that exist today.
And the analog of that today, I think, is the thing I've been trying to do with computational language. Can we have a kind of notation for computation that people can understand--and that, unlike what happened with math, machines can also understand--and that we can use to kind of crispen up our thinking and our communication about computational kinds of things?
So that, in a sense, is the kind of intellectual, abstract version of what I've been trying to do for a long time, and it has many, sort of, consequences and connections to things which, yes, are kind of philosophy. But one of the things that I always find fun is when one goes from these very fancy, abstract, intellectual ideas to actual code, and then to an actual product that people use all the time. I think that's really neat.
And what we see a lot is things which used to be sort of pure philosophy turn into code and products and so on. For example, one thing I've been interested in more recently is something which people last tried to look at about 400 years ago--350 years ago, maybe--which was these things called philosophical languages: how do you express things about the world in an abstract way without using a specific human language?
And a place where this comes up in modern times is things like legal contracts. When you write a legal contract, it's more or less in English, if you're in the U.S. or something, but it's sort of in legalese, because you don't want it to be in something quite as vague as English.
You want it to be something a little bit tighter. But the question is, can you make a kind of abstract, symbolic language that can express what you want to express in a contract--a computational contract? And, yeah, we're getting close to being able to do this.
And you can expect these things to get executed autonomously, with blockchains and all kinds of other things. But basically, it's part of going from the philosophical idea that there's a representation of meaning that isn't just words in languages--that there's something deeper--to this very practical thing about computational contracts and all that kind of thing.
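As a toy illustration of the idea (not Wolfram's actual system, and far simpler than anything blockchain-based), a "computational contract" can be thought of as terms stored as data plus a rule a machine can execute, so the outcome is computed rather than argued over. The contract type and all numbers here are made up:

```python
from dataclasses import dataclass

@dataclass
class DeliveryContract:
    price: float            # payment if delivered on time
    penalty_per_day: float  # deducted for each day late
    due_day: int            # agreed delivery day

    def payout(self, delivered_day: int) -> float:
        """The executable meaning of the contract: no vague legalese."""
        days_late = max(0, delivered_day - self.due_day)
        return max(0.0, self.price - days_late * self.penalty_per_day)

contract = DeliveryContract(price=1000.0, penalty_per_day=50.0, due_day=10)
print(contract.payout(10))  # on time: prints 1000.0
print(contract.payout(13))  # three days late: prints 850.0
```

Because the terms are symbolic data rather than prose, both parties (or an autonomous system) can evaluate the same rule and get the same answer.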
Guy Kawasaki: Okay. I have to ask you to tell us the story about Steve Jobs helping--or telling--you to name Mathematica "Mathematica."
Steve Wolfram: Well, it's funny because we now have this product called Wolfram|Alpha.
Guy Kawasaki: Yes.
Steve Wolfram: And the original name for Mathematica was Omega. So over the course of many years, we went from Omega to Alpha. But Steve--I started interacting with him pretty soon after we had very early versions of Mathematica, because he was going to bring out the NeXT computer, and it was sort of oriented towards education.
And so we kind of made this deal pretty early on to bundle what would be called Mathematica on the NeXT computer, so everybody who got a NeXT could use Mathematica. It turned out, I think, to be a pretty good deal on both sides. It was pretty smart of Steve to figure out that was a good idea.
Guy Kawasaki: Yes.
Steve Wolfram: A bunch of people bought NeXTs because of that. A bunch of people used our stuff because of that.
Guy Kawasaki: You were the killer app of NeXT.
Steve Wolfram: Yeah, thanks. I mean, there were some good footnotes to history. Like, there were these computers at CERN, the particle physics center in Geneva, Switzerland, that were bought by the theory group there because they thought buying the whole computer was a cheap way to get Mathematica, so to speak.
And then those computers, the person who was responsible for that system was a person called Tim Berners-Lee.
Guy Kawasaki: I've heard that name before.
Steve Wolfram: Right, who ended up using those computers to build his first web setup. So that was kind of an amusing footnote of history that came out of that. But in terms of the naming of products, I had thought of the name Mathematica, but I thought it was too long, too ponderous, et cetera.
And I had this whole list of other names. It's kind of funny--I put that list on the web some number of years ago. What's kind of funny is all these names, including really horrifyingly awful ones, have actually been used as products in the intervening years.
But Steve had a kind of theory of naming at that point, which was, "Take the generic word for something," and I think he said, "Romanticize it."
So the example he gave was Trinitron, which was a now long-lost brand, probably from Sony…
Guy Kawasaki: From Sony.
Steve Wolfram: Which was a television brand and represented the three cathode ray tube guns, or something, in a color television--which younger people who are listening to this have probably never heard of.
Guy Kawasaki: Right!
Steve Wolfram: It makes me feel old. But in any case, at that time the sort of killer app of the thing we were building was math--mathematical computation. I mean, in the end, the bigger picture is sort of all about computation in general, but at that time math was the first killer app for that type of approach.
So Steve was like, "You've got to call it Mathematica, because it's kind of like math," but sort of romanticizing that word. And so I was like…
Guy Kawasaki: Did he do this in a civil manner or did he just tell you that?
Steve Wolfram: No, it was actually perfectly civilized. It was just, "I think you should do this." It wasn't kind of a petulant, "You've got to do this"; it was, "I think you should do this."
No, I always had very, very, very civilized interactions with Steve, actually, and it was funny because I also liked the fact that he would sometimes--I would tell him something and he would say, “I don't care,” and then, sometime later, he would say, “Actually, I do care,” and that happened actually with Wolfram|Alpha.
I first showed it to him before we released it, and he said, "I don't know why I'd care about this." Okay, so then a little while goes by, and this little company called Siri had licensed our stuff and put a wrapper around it and made it, sort of, a thing that could be thought of as an intelligent assistant, rather than the use case that we were primarily dealing with--people asking questions on the web and so on.
And then he looks at this and he says, "Okay, now I get it," type thing. I remember there were a bunch of things--when we were working on early, first versions of Mathematica and I was interacting a lot with NeXT, there were all kinds of "just be more ambitious"-type pieces of input from Steve that were nice.
Guy Kawasaki: Did he at any point try to explain math or physics to you?
Steve Wolfram: No, I don't think so. You know, it turns out I know somebody who knew Steve in high school, okay, and the person I know who knew Steve in high school is now, well, a physicist, actually. So I was just recently at the University of Washington.
But he was like, "Yeah, Steve was always kind of a weird person in high school, but he was one of the kids who would go to some other nearby high school"--which my friend also did--"and go learn calculus and things." So I kind of knew that little bit about Steve: even though he was like, "I don't know anything about this math stuff," et cetera, he actually had learned calculus when he was in high school. So that was one of those kinds of out-of-band pieces of information that I happened to have.
Guy Kawasaki: I fully expected you to tell me a story of Steve trying to explain physics to you…
Steve Wolfram: No, no. Oh no, actually. I mean, I'm trying to remember. I mean, he was--a lot of times, in the tech industry, there are people who are, really quite, I would say, intellectually interested in a lot of these kinds of science things.
And sometimes--what's different about science and technology is that, in technology, you're just building stuff, and you can build whatever you want. In a sense, people may care about it, people may not care about it. In science, for better or worse, there's an actual world out there, and you can't just make stuff up. If you want the science to really mean something, it has to correspond to how our universe happens to work.
And so I think sometimes the mentality of the people who are used to the tech world is a little different. Now, having said that, I have to say that some of the science that I've done, and that I'm actually about to start doing again, is very much a tech-informed approach to science, one that is perhaps methodologically a bit different from the kind of, "Oh, we've got the big universe out there. Let's just pick away at it and try and find what's happening in pieces of it."
My approach has tended to be: There's a whole kind of giant, sort of, universe of universes, and can we explore this whole much broader space of things which include things that aren't our particular universe? And then, is our particular universe an example of these? And that's kind of a more, somewhat different approach.
Guy Kawasaki: Is this what you refer to as the “new kind” of science?
Steve Wolfram: It's related to that. It's kind of an outgrowth of that. I mean, the new kind of science is all about--it kind of started from this; we're back to talking about math again.
But the tradition for the last 300 years, basically, if you're doing exact science, is: write down a mathematical equation that represents something about the world. That was what Newton and Galileo--all these people--that's what made them famous, doing that stuff. It's been kind of a successful thing for 300 years.
What I got interested in, a long time ago now, is, "Okay, so there are things we can't explain using things like mathematical equations." There are things even in physics, but also in biology and other places. It's like, "How can we generalize what we do to something more general than just mathematical equations and still be making precise theories of things?"
And so what I realized is, well, you can use programs instead of equations to represent how things work in some particular systems. So you say, “This system, it's a program that says and operates according to these rules. This is how the system is going to work.” Those rules may not correspond to the kinds of things you write down with algebraic operations and standard mathematical kinds of things.
And so that term was kind of the starting point. And then the question was, “Okay, so if we're using programs to talk about the world, what kinds of programs might represent things in the world?”
And so that got me into the question of, "Okay, so let's just look at the universe of possible programs. What's it like?" Usually when we write a program, we go to lots of trouble. We're going to write a piece of a word processor. We're going to write some program where we know what it's for, and it's a big, complicated program.
The question is, if you just think about programs kind of in the wild, sort of, programs that are natural programs--if you just start enumerating programs at random, little tiny programs--it's just like, "We get to this program, and what does it do?"
So I had assumed that if you had a very simple program, it would just do very simple things. But it isn't true. And this was the big thing that I discovered in the early 1980s: if you just look at the, sort of, universe of possible simple programs, you very quickly find ones that are really complicated in their behavior. The program itself is tiny.
Guy Kawasaki: Can you give an example of this?
Steve Wolfram: My favorite example is this thing called "rule thirty," which is a thing that just operates on a row of black and white cells, and at every step it just says, "Make the color of a cell be given by some fixed rule based on the color of that cell on the step before and on the colors of its two neighbors."
So it's a very simple thing. You can write it down, and, yeah, it's my favorite science discovery, so it's on my business card. So it's just a little tiny, tiny…
Guy Kawasaki: That and doctor.
Steve Wolfram: And doctor, yes, right. No, actually, the doctor is not on my business card, only on my credit cards. You see, in business, doctor is a term of disrespect in some ways.
Guy Kawasaki: Okay.
Steve Wolfram: But in any case, rule 30 is a very simple rule. You can state it, kind of--you can say it in a sentence, although it's kind of a boring sentence that involves "ands" and "ors" and things like that. But then you start off from just one black cell and you see what it does, and it makes this incredibly complicated pattern.
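Rule 30 is well documented, so the update Wolfram describes can be sketched directly. This is a minimal Python illustration, not his original code; the helper names are my own:

```python
# Rule 30: each new cell depends on its old color and its two neighbors.
# For a neighborhood (left, center, right), one way to state rule 30's
# lookup table is: new color = left XOR (center OR right).

def rule30_step(cells):
    """Advance a row of 0/1 cells one step; cells outside the row count as 0."""
    padded = [0, 0] + cells + [0, 0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def show_rule30(steps):
    """Print the pattern grown from a single black cell."""
    row, width = [1], 2 * steps + 1
    for _ in range(steps + 1):
        pad = (width - len(row)) // 2  # center each row for display
        print(" " * pad + "".join("X" if c else "." for c in row))
        row = rule30_step(row)

show_rule30(15)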
And that, for me, is kind of the--was a sort of a major kind of “aha” moment, discovering that. I mean, I kind of, it's sort of--you look in this computational universe of programs, and you're finding a phenomenon that you absolutely didn't expect. I mean, for--people would think: simple programs, simple behavior.
We're used to--when we build things with engineering, if we want to make something complicated, it takes us a lot of effort. We have to have complicated plans. It's got a lot of global complicated components and so on, but in the computational universe of all possible programs, that's not the way it works.
There are lots of programs that are really simple to construct, but do really complicated things. And why does one care about that? Well, because that's basically what nature is doing, and that's how nature makes complicated things because it's not under the same constraint that we're under.
When we do engineering, we have traditionally been under the constraint that we kind of have to be able to foresee what our engineering system is going to do. We're not just saying, "put this together," and it'll do something. Nature ends up with things where it is just, "put this together," and it'll do something.
It's not under a constraint of being understandable from the outset. And so that's, in this computational universe, that’s the sort of big discovery is that there's a lot of complexity that's easy to get, and it seems to be the same, sort of, essential idea that nature uses to make complicated things.
It's also something we can use for technology. When you have this very simple program and it does this very complicated thing, sometimes that complicated thing will be something that we humans find useful. Like even my rule thirty cellular automaton thing--we used it as a random number generator for a long time because it's a very simple process, but it generates something which, for all practical purposes, seems random.
So you would never have, if you were saying, “I'm going to invent a random number generator,” you would never have come up with this. But once you find it, you can say, “let's mine it from the computational universe and use it for that.”
It's kind of like in ordinary physical technology: we find liquid crystals and we say, "gosh, this is a cool, scientific physics phenomenon or something. Oh, well, actually, we realize we can use that to make displays," for instance. And it's the same kind of thing in the computational universe. You find these programs and they do remarkable things, and then sometimes they do things which we humans find useful for things that we want to do.
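The random number use Wolfram mentions takes the center column of the rule 30 pattern as a bit stream. A hedged sketch of that idea, using the same update rule as before; the function name is illustrative, not anything from his software:

```python
def rule30_center_bits(n):
    """Return the first n center-column cells of rule 30 as pseudorandom bits."""
    row = [1]  # start from a single black cell
    bits = []
    for _ in range(n):
        bits.append(row[len(row) // 2])  # center cell of the current row
        # Advance one step: new cell = left XOR (center OR right)
        padded = [0, 0] + row + [0, 0]
        row = [padded[i - 1] ^ (padded[i] | padded[i + 1])
               for i in range(1, len(padded) - 1)]
    return bits

print(rule30_center_bits(16))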
Guy Kawasaki: What is the computational paradigm? Or is this the same thing?
Steve Wolfram: Well, it's kind of thinking about things in computational terms. So thinking about--given a question, sort of, trying to formulate it in, with the kinds of thinking that you could, kind of, talk to a smart computer about. So, I don't know. I'm looking at--you have this giant display on a wall of a rhinoceros, okay, so I'm thinking about, “how do I make that rhinoceros computational?”
Okay, so I'll give you an example. It's got a couple of horns at the front, right?
So one question would be, “what's the diversity of rhinoceros horns?” So how do I think about the space of shapes of rhinoceros horns? And that's something which we can then, sort of, start engaging with computationally.
If that's the signature of a rhinoceros--the shape of its horn or something--then we can say there's the space of shapes, and maybe there are actually sub-breeds of rhinoceros that are sort of separated from other ones because their horns are a different shape, and so on.
And this is kind of how you start, sort of, engaging with some question about the world kind of computationally.
Guy Kawasaki: Okay. Does all of this mean we're a simulation? I mean, are we kidding ourselves here?
Steve Wolfram: Well, I think that this whole simulation argument thing, it's kind of charming how some of these kind of sort of theological, religious ideas and so on, sort of get reiterated in these very different bizarre, different wrappings so to speak.
And it's kind of like, if we look at our universe--so one thing about our universe, this is a very, almost theological, fact about our universe, which is that it has definite laws. Might not be the case. Might be the case that there are ten to the ninetieth particles in the universe. They might all do their own thing.
There might be no order to the universe. And so the first thing that is surprising, in a sense, and that sort of early theologians made a lot of hay out of, was that the universe is an orderly place. It doesn't need to be an orderly place. We don't know why it should be an orderly place, but it is.
But when it comes to thinking about what would it mean if the universe was a simulation, it's a philosophically wrong idea, but it takes a few steps to explain why, why that's the case. I mean, basically, if the universe has definite rules, then the universe--it's just running. The universe is just doing what it does and running according to those rules.
Now, if you ask where those rules come from, well, the universe has the rules the universe has. Now, you can say, "why does it have those rules?" For example, why doesn't it have much more complicated rules?
Why does it have--we don't actually know how simple the rules for the universe are. We don't know. If we could write a program that would reproduce the universe, we don't know if that program is a million lines long, a quintillion, quintillion, quintillion lines long, or three lines long. I think there's a possibility that it's three lines long.
And so that's why, actually, I'm just about to launch a serious effort. I've been interested in this for decades, but I'm finally ready--plus, I'm getting so old that I have got to do this now or never get to it--to actually make a serious, sort of, assault on, "can we find the fundamental theory of physics? Can we find the fundamental theory of our universe?"
And it might be the wrong century to try this, but if it turns out the, sort of, rule for the universe is simple, it'd be pretty embarrassing if we had the technology now to find it, so to speak, and we just hadn't bothered to look for it.
Now, it might not be simple enough to find. It might be that there are things about validating, whether it's correct. That there are things we can't do yet with the current state of our science, so to speak.
But anyway, so, imagine that we have the rules for the universe. Imagine we succeeded. We've got the rules. We can write them down. I could tell you them in a few sentences.
For example, and you say, “okay, that's the universe.” Now you say, “well, what do we conclude from that? What would it mean for the universe then to be a simulation?” It's like you say, “well, how--what are those rules running on?”
Well, they're not running on anything. They're just rules that describe how the universe works. It's not like you have to take those rules and put them on a computer and run those rules. These are just rules that describe how the universe works.
I told you this is philosophically--a little bit complicated, but essentially, the other side of this is to say, when we look at these rules, “do these rules somehow feel like they're an artifact? Do they feel like they were sort of produced by some intelligence on purpose, making these rules? Or are they just rules our universe happens to have?” And so this then goes into the question of how can you tell if a thing was made for a purpose.
And that's another huge can of worms. So, even when we look at something like, I don't know, Stonehenge, for instance, we say, “what was Stonehenge for?” Okay, well, that's not culturally that far away from modern times, but it's still really hard for us to tell what it was for.
If you're presented with some extraterrestrial radio signal and we say, “is this for a purpose? Is it an intended thing or is it just some natural process that's producing this thing?” Really hard to tell in the abstract.
And that's kind of one reason there isn't even a meaning to saying: the rules for the universe--were they made on purpose? Were they made, sort of, as a thing by some other entity, such that we exist as a simulation with respect to that entity? It doesn't really quite make sense.
Guy Kawasaki: Okay…
Steve Wolfram: All right. Sorry. That was--it's a difficult--it's a complicated area. I mean, this is kind of…
Guy Kawasaki: Do I dare ask what's God's role in this story if there is a God?
Steve Wolfram: Well, that's a--It's a funny question because, as I was mentioning, in early theology, the very fact that the universe is orderly was seen as evidence for the existence of God.
Guy Kawasaki: Yes.
Steve Wolfram: Okay. Now, I don't think--but then in a sense, if the universe has just--you have these rules and you run them, that in a sense…It doesn't leave any place for a God, so to speak, because it's just like, it's running a program. There's no God needed to run this program. It's just a program and it's running.
Right? And there are no miracles that come from the outside, so to speak.
Guy Kawasaki: Maybe God was the programmer.
Steve Wolfram: Well, right. So then that comes back to the question of why this universe and not another one, right? And that is something that I can't—first, we have to find what the rules for our universe are, which we haven't succeeded in doing.
And as I say, “maybe wrong century to try and find them.” If we find them, then that really is a kind of in-your-face question: Why these rules? Why not other ones? What can we conclude from this? Is that evidence for something, sort of, beyond our universe? Is it just what we happen to get? We happened to get this universe, not another universe?
I don't know. And it's a curious question. I mean, when people--oh, I don't know, like Isaac Newton back in the 1680s--he figured out a bunch of stuff about the planets and the motions of the planets.
And he was, at that time, he was talking about, “Well, once the planets are set in motion, then it's kind of just a matter of mathematics to decide what will happen.” But how the planets started off, that he said, “that's the hand of God,” so to speak, “that determines that.”
Now, it always used to be the case that people would say the fact that we had nine planets--well, now it's eight, but it used to be nine--is one of these random facts about the world, not something you could ever derive from mathematics or something like this. It is interesting that now that we know about zillions of exoplanets, we actually know that it is a derivable fact that our solar system would have about that number of planets, and that the distribution of sizes would be about what it is, and so on.
So in Newton's time, he had no choice but to say, the way it started, it's just God, so to speak. And a few hundred years later, we can see it as a more general category of thing, namely all these different solar systems, and we can come back and look at ours and say, "actually, we kind of understand this one." I don't know--if we find the fundamental theory of physics, I'm sure there will be interpretations on many sides about what that actually means in these terms.
Guy Kawasaki: Okay, fair enough. What is your reaction to the refutation of science these days, particularly at the highest levels of our government? It seems like truth and science and facts--it seems to be whatever you agree with is true. So how do you…?
Steve Wolfram: I think one of the things--science is in some ways its own worst enemy in that regard because what's happened is there are things that science has done a really good job of establishing. There are things where there is science that can be said about them, but it's kind of overreached in some way or another.
And people then get suspicious because people will say, for example, "evolution is the whole story of biology." Well, it's not. There are other things going on, and some of them even I figured out, in the, sort of, science of how complicated forms arise in biology, which is something where people say, "it must just be evolution because evolution is all there is."
Well, actually, evolution on its own can't explain why there are complicated forms. That's a consequence of, actually, a sort of computational phenomenon that is similar to this rule thirty phenomenon, where the rules can be simple but the behaviors of the forms that are produced aren't simple. So it's a place where people just say, "it's evolution. That's all there is."
If you say there's anything else to biology other than evolution, you're wrong. That's an overreach, so to speak, in the sense that, actually, there is something else going on, which is nontrivial science. And it's not that the other thing that's going on is not science.
It's absolutely science. It's actually, probably, cleaner science than evolution in many ways, but it's something where people sort of say, "we've got one piece of science, so let's carry that all the way," so to speak. And I think the other thing that happens--there's an important phenomenon, not yet well understood, although I did happen to testify to a Senate subcommittee a couple of months ago, and this term is now in the congressional record.
So I don't know what that means about it, but the term is "computational irreducibility." And what does that mean?
So you say, “well, I know the rules for how some system behaves.” So then you'd say, “well, then I can figure out everything about what the system does.” Well, actually, that's not really true because you might run the system for a billion steps, and it takes--you have to go through all those billion steps.
The question is: Can you figure out what's going to happen in the system more efficiently than just running those billion steps? Or do you just have to run all those steps and see what happens? And that's important when it comes to, oh, I don't know, predicting the climate or something, right?
The question is: can you just say, "oh, this is the answer," or does this computational irreducibility phenomenon mean there's sort of an irreducible amount of computational work that you have to do to figure out what's going to happen, where it's very hard to make sort of simple, "oh, it's just going to do this," type claims? And I think one of the things that happens is people will say, "Oh, we have this science. We know something about this scientifically."
So then we will just sort of take that particular idea in science and take it to its end conclusion. And because it's science, that must be the whole story. But, actually, it's not. You just got one particular piece of the science, and you forgot about other parts, particularly this phenomenon of computational irreducibility, which means that even though we know the equations for things like fluids in the atmosphere and so on, it doesn't mean it's easy to tell what's going to happen.
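Computational irreducibility can be illustrated by contrast. For some rules there is a shortcut formula that jumps straight to step n; for rule-30-like rules, no shortcut is known, so you pay for every step. A minimal sketch, where both functions are my own illustrative examples rather than anything from Wolfram's work:

```python
def reducible_step_n(n):
    """A system that adds 3 each step: a closed formula skips the iteration."""
    return 3 * n  # shortcut -- constant work, no matter how large n is

def irreducible_step_n(n):
    """Rule 30's center cell after n steps: no known shortcut, so we
    actually run all n steps of the cellular automaton."""
    row = [1]
    for _ in range(n):
        # new cell = left XOR (center OR right), with 0s outside the row
        padded = [0, 0] + row + [0, 0]
        row = [padded[i - 1] ^ (padded[i] | padded[i + 1])
               for i in range(1, len(padded) - 1)]
    return row[len(row) // 2]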
And I think that what has tended to happen is that people have said science is, in many ways, the new religion, so to speak, in the sense that people say, "we believe in science." But in some ways, what they believe in is a version of science that, actually, is not well informed by things like computational irreducibility; they believe in a version of science that sort of has simple, cut-and-dried answers to things.
And other people say, “look, common sense tells me this can't be right.” And, they're right; It's not right. It's a piece of the story. And there are certain places where that story comes through for science spectacularly. But there are other ones, including some of these most controversial ones, whether it's in medical area, or in climate, or these kinds of things where it's not so obvious. It's not something where I think it's a cut and dried story.
Now, people can say awfully silly things--people can come up with crazy conclusions. I mean, I'm not arguing that all the things that are said that are sort of on the other side from the cut-and-dried science make sense.
But it isn't true that the cut-and-dried science is really the whole story. So I feel that, actually, science has done itself a considerable disservice by trying to make things seem cut and dried and simple when they're actually not. And some people are rightly suspicious and say it can't be the whole story.
And then there's a whole attack on, sort of, science as a whole, which is also unfair, but it's, I think, a more complicated picture. And there are some things where I kind of wince when I hear these sort of "science has proved that blank, blank, blank" claims, when it's like, "I know how the science works. You can't possibly have proved that from this!" It's a much more complicated story, and there are things you can say, but there are also a lot of kind of footnotes and caveats and so on.
Guy Kawasaki: But, how about the other extreme where, “I know the earth is flat”?
Steve Wolfram: Yeah. Right. So I'm saying--what you have to do is, you have to be a more sophisticated consumer of science.
Guy Kawasaki: Okay.
Steve Wolfram: It's the same thing that has happened in past eras, probably, with religion. You could be an unsophisticated consumer of religion and just say, "okay, the earth is 6,000 years old, and it says that in the Bible, and therefore it must be true," right? There's more to religion than that statement, so to speak. And you have to take from it what actually makes sense and not just take the most obvious things.
And I think this is a place where, actually, in this whole, sort of, computational thinking thing, I think people will--as that finally gets well absorbed, some of these ideas, some of these pieces of intuition like computational irreducibility will become quite commonplace.
I mean, it's just, like, we talk about--you're talking, I don't know, probably in marketing and things like that, about force and momentum and acceleration, things like this. These are all terms which were basically invented by people like Newton and Galileo to describe physics. They are terms that come from science, that have found their way into our, kind of, general way of thinking about the world.
There are similar kinds of ideas that come from the, sort of, computational way of thinking about the world, like computational irreducibility, and as we really start to absorb these, we get to have a more nuanced way of thinking about these kinds of scientific questions.
Guy Kawasaki: Okay. So the first time we met, I don't know, twenty, thirty years ago, I came out of that meeting and I said to my wife, “That is the smartest person I have ever met in my life.” I just could barely keep up with you.
And so my question, I have some, sort of, off the wall questions, is number one: You're kind of off the scale on intelligence, and do you ever have a moment you say, “God, I just--I wish I was just kind of a regular person. I'm not thinking about all this kind of stuff. I'm not thinking about irreducibility and thinking about whether it's a simulation. I'm just drinking beer, watching football.”
I mean, do you ever have moments like that?
Steve Wolfram: I don't like either beer or football.
Guy Kawasaki: Cricket?!
Steve Wolfram: No, I'm not into cricket either. But, look, I don't think of myself--I guess, for me, people tell me, "Oh, you're so smart about this or that thing," but that's not what it feels like to me.
I'm always trying to figure out things that are difficult for me to figure out. Now, maybe some of those things are really, really difficult for some other people to figure out. But I'm always struggling to figure stuff out, so the internal perception is not one of--I mean, the fact is that I'm always trying to figure stuff out.
Guy Kawasaki: You feel it's a burden?
Steve Wolfram: No, absolutely not. I mean, I like what I do. I like figuring stuff out. I think that the one thing I've been, actually, in recent years, I've been really interested in the question of sort of how does one find talents in the world?
And I've spent a lot of time building a great collection of people at the company and things, but I've also been kind of…I like interacting with talented people. And I suppose one thing that I've noticed--like, I interact with kids a bunch, and we have various programs for high school kids and things like this--is that it's surprisingly difficult to, sort of, break out of the elite bubble that one tends to live in. There are probably kids out there who would really benefit from exposure to all this stuff, and it's like, "I don't have a connection to any of them."
And it's also the case that I sometimes worry that I have a particular way of thinking about stuff and that it may not be the way that ‘kid in place X’ wants to think about things and am I serving as kind of a missionary for my ways of thinking about things? And is that the right thing to do? And so on.
But so, that's one place where, I suppose, it is the fact that I've sort of lived in this kind of elite bubble, I sometimes find a little bit frustrating because I'm curious. That means I'm sort of curious about what the rest of the world is like, so to speak.
Guy Kawasaki: Well, let's suppose that you go surfing, or let's suppose that you go to a hockey game. Are you sitting there thinking about the math and the physics of hockey and surfing and, “this is the…”
Steve Wolfram: See, these are things I don't do. Well, surfing I'd be useless at, but, for example, I'm not into sports at all. I'm not into playing them. I'm not into watching them. Why?
I don't know--perhaps because I don't know enough about it to know why I should care, so to speak. But for me, I like figuring stuff out. I like building stuff, and those activities don't satisfy, for whatever reason, the drive that I have to do those particular kinds of things.
Guy Kawasaki: Okay. Two last questions. Question number one: who's the smartest person you ever met?
Steve Wolfram: Oh gosh. I don't rank people by "smart," because--you've got to realize, the question is whether, for everything that came up, somebody else could figure it out. Everything you think about, somebody else is there ahead of you, so to speak, able to figure it out.
Then, in a sense, you'd say, "okay, that's a smarter person," than whatever. And I have to say, in my own life, ever since I was in kindergarten--I happened to go to a kindergarten in Oxford, England, where there were a bunch of smart kids.
And so I've always had this thought, eventually I'm going to find--go to this place where I'm going to just find all these people who are fundamentally smarter than me. And I went through different kinds of places: the fancy universities, Silicon Valley, this, whatever. And I never had that moment where I said, “gosh, I finally found the place where everybody is smarter than me.”
Guy Kawasaki: This is a little bit of a ‘high quality’ problem…
Steve Wolfram: Right? But, actually, it's a little bit disorienting to realize, "gosh, there's no place, there's no sort of…" You asked about sort of heroes--there's no place like that. It's kind of like, "so I have got to figure this stuff out, because it's not like there are other people out there who are going to figure all this stuff out."
If there are things that I think about that are kind of difficult for me, it's like, if our species is going to figure them out at this time, I'm kind of "it" in terms of doing that. But no--the thing I've noticed, and I've noticed it from leading people a lot over the course of years, is that this kind of "who's smarter than who" is really not the point, because there are so many different ways in which people can be thinking about things.
And somebody can be super good at analytically figuring out stuff and super useless at actually, kind of, conceptualizing what to do, for example. Give them a specific problem, they do a great job. If something goes wrong with that problem and they have to, like, take a turn somewhere, they completely can't do it. They just don't have the initiative and the creativity to do that.
Guy Kawasaki: And my last question, what do you want your legacy to be?
Steve Wolfram: Huh? Whoa. I don't know. I think it's an interesting question. Now that I'm getting old, I've got to…I'm supposed to think about questions like that.
I think there are things that I've done, particularly understanding computational universe, building this computational language. These are things that if nothing sort of dreadfully derails, I think I can confidently say that both of these things will end up being of long-term importance.
I think it's sort of a good question for me. There are things, for example, on the side of science and thinking about the computational universe, there are things that inexorably will happen and that I can jump up and down and tell people how important it is and so on. And maybe that will make it happen some number of years earlier, but these are things which inevitably, inexorably, this is the direction that science and so on will go in.
And I've already seen that over the last couple of decades, and so I suppose that those are, and it's hard to know what the--I have to sort of deconstruct what the concept of a legacy really is. So that's terrible.
That's not what one's supposed to say. It's kind of like: there's the kind of genetic legacy--I've got four kids; hopefully they'll do interesting things. And then there's the kind of "intellectual legacy" of things I figured out that might not have been figured out for a lot longer in our history, although they might eventually have been figured out. And then there are things I created that were sort of created the way they were created because I happened to do them, so to speak.
I mean, when you do science, in some sense, there's never a thing you can uniquely contribute. All you can do is perhaps accelerate the process, because the world is the way the world is, and eventually it's going to be found out. When you do things like, I don't know, writing, or creating a computational language, actually, those are more creative acts, where there's sort of an infinite number of possibilities and the one that you happen to choose.
If it ends up being something that survives, that's something that's more a kind of personal imprint on the world than something which, in a sense, inevitably gets discovered at some point.
Guy Kawasaki: Okay. Thank you. This is awesome!
Steve Wolfram: This was fun!
Guy Kawasaki: Oh, my God…
Steve Wolfram: That was a good set of questions. It got me thinking about all kinds of…
This is the longest episode of Remarkable People yet, but tell me, what would you have cut? In another ten or twenty years, maybe I'll interview Wolfram again. In the meantime, we learned how he tried to revolutionize cricket by rolling the ball.
I'm Guy Kawasaki, and this was the Remarkable People podcast. My thanks to Jeff Sieh and Peg Fitzpatrick.
In the next episode, I'm interviewing Margaret Atwood, author of The Handmaid's Tale and fifty-nine other books. Until next time, this is Guy Kawasaki.