Welcome to Remarkable People. We’re on a mission to make you remarkable. Helping me in this episode is Terry Sejnowski.

Terry isn’t just another voice in the AI conversation; he’s a pioneering force who holds the prestigious Francis Crick Chair at the Salk Institute and serves as a Distinguished Professor at UC San Diego. His recent book, ChatGPT and the Future of AI, has revolutionized how we think about artificial intelligence, cutting through the hype to reveal profound insights about machine learning and human cognition.

In this episode, we dive deep into the fascinating parallels between AI and human learning, exploring why the unpredictability of large language models might actually be a sign of sophistication rather than a flaw. Just as children surprise their parents with unexpected brilliance, AI’s ability to generate novel responses could be a marker of true intelligence rather than a limitation.

Terry’s unique insights challenge conventional wisdom about AI consciousness while offering practical perspectives on how these technologies can enhance rather than replace human capabilities. One of his most intriguing revelations? Being polite to AI – using please and thank you – actually generates better responses, mirroring human social interactions in unexpected ways.

From comparing Sam Altman to a fearless mouse to explaining why AI bias might be easier to fix than human prejudices, Terry brings both scientific rigor and witty observations to our discussion. His measured optimism about AI’s future provides a refreshing counterpoint to doomsday predictions, suggesting that we should focus less on whether machines truly “understand” and more on how they can augment human potential in practical, meaningful ways.

Join us for this enlightening conversation that will transform how you think about artificial intelligence and its role in shaping our future.

Please enjoy this remarkable episode, Terry Sejnowski: ChatGPT and the Future of AI.

If you enjoyed this episode of the Remarkable People podcast, please leave a rating, write a review, and subscribe. Thank you!

Transcript of Guy Kawasaki’s Remarkable People podcast with Terry Sejnowski: ChatGPT and the Future of AI.

Guy Kawasaki:
Hi everybody, it's Guy Kawasaki, this is the Remarkable People Podcast. And as you well know, I'm on a mission, with the rest of the team, to make you remarkable. And today, we have a really special guest, his name is Terry Sejnowski. And I got to tell you, our topic is AI, and nobody likes AI more than I do. And he has just written a book which I found very useful, it's called ChatGPT and the Future of AI.
Now, Terry has one of the longest titles I have ever encountered in this podcast, so I got to read it here. Terry is the Francis Crick Chair at the Salk Institute for Biological Studies, and Distinguished Professor at the University of California at San Diego. Man, your LinkedIn page must be really something. So thank you very much, Terry, welcome to the show.
Terry Sejnowski:
Oh, great to be here. Thanks for inviting me.
Guy Kawasaki:
There's nothing we like more than to help authors with their new books, so we're going to be making this mostly about your book. And I think that the purpose is people listen to this episode, and at the end, they feel compelled to buy your book. And I'm telling you right now, if you just stop listening right now, you should just trust me and buy this book, okay.
I have a question from left field. So I noticed something: at the end of chapter one, you ask a series of questions to help you understand chapter one. And the tenth question is, let me read, "Who is Alex the African grey parrot, and how does he relate to the discussion of LLMs?" And I read that, Terry, and I said, "Where did he ever mention Alex the African grey parrot?" So I went back, and I searched and searched, and I could not find the name Alex anywhere.
And then so I bought the Kindle version so I could search digitally, and I searched for parrot, and there's one sentence that says, "Critics often dismiss LLMs by saying they are parroting excerpts from the vast database used to train them." So that's the only reference that people were supposed to get. Alex the Parrot, was that a test to see how carefully people read?
Terry Sejnowski:
First of all, it's in the footnotes, in the end notes at the end of the book.
Guy Kawasaki:
Right.
Terry Sejnowski:
So it's in that chapter, if you look at it. And Alex the grey parrot was a really quite remarkable parrot that was taught to speak English. Irene Pepperberg, I don't know if you know her, taught it not just to speak English, but to tell you the color of, say, a block of wood, how many blocks there are, and what the shape of the block is. Is it a square, is it a circle? It's unbelievable, and it shows how remarkable some animals are. We can't speak parrot, but some of them can speak English, right?
Guy Kawasaki:
So in a sense, it's like when Jane Goodall discovered that chimpanzees had social lives and could use tools, right?
Terry Sejnowski:
It's exactly the same. I think humans are very biased against the intelligence of other animals because they can't talk to us. Now, the irony is that here comes ChatGPT, and all the large language models. It's as if an alien suddenly arrived here, and could speak to us in English. And the only thing we can be sure of is it's not human, so if it's not human, what is it?
And now, we have this huge argument going on with the people who say they're stochastic parrots, that they're just parroting back the data they were trained on, without understanding. But you can ask questions that were never asked before, that are nowhere in the training data. The only way it can answer is if it generalizes from what's out there, not just repeats exactly what's out there.
So that's one thing. But the other thing is that the critics say, "Okay, it seems to be responding, but it doesn't really understand what it's saying." And what it has shown us is that we don't understand what understanding is. We don't even understand how humans understand, so how are we going to say whether a machine does?
Guy Kawasaki:
So in other words, people should give parrots more credit than they might, right?
Terry Sejnowski:
That's for sure. I'm convinced of that. And I think it's not just that, I think it's a lot of animals out there. The orcas and chimps, and a lot of species really are very sophisticated. Look, they all had to have survived in their niche, and that takes intelligence.
Guy Kawasaki:
All right. You will be able to see this more and more as we progress, but I really enjoyed your book, and I did a lot of things that you said to try. So I'm going to give you an example, so I asked ChatGPT, "Should the Bible be used as a text in public elementary schools in the United States?" And ChatGPT says, "Using the Bible as a text in public elementary schools in the US is a contentious issue due to the following considerations."
And I won't read every word, but constitutional concerns, educational relevance, community values, legal precedent. So my question from all of this is: how can an LLM give such a cogent answer when people are telling me, "Oh, all an LLM is doing is statistics and math, and it's predicting what's the next syllable after the previous syllable"? It looks like magic to me, so can you just explain how an LLM could come up with something that cogent?
Terry Sejnowski:
You're right, that it was trained on an enormous amount of data, trillions of words from the internet, books, newspaper articles, computer programs. It's able to absorb a huge amount of data, and it was trained simply to predict the next word in the sentence. And it got better and better and better. And here's, I think, what's going on, we really don't know for sure what's going on inside this network, but we're making some progress. Words are ambiguous.
They often have multiple meanings, and the only way you're going to figure that out is from the context of the word. That means the previous words, the meaning of the sentence. So in order to get better, it's going to have to develop internal representations. By representation, I just mean a kind of model of what's happening in the sentence, but it's got to have semantic information, meaning it has to be based on meaning.
It also has to understand syntax, right? It has to understand the word order, and that's very important in linguistics. So all of that has to be used as hints, as clues, as to how to predict the next word. But now that you have it trained up and you give it a question, it's got to produce the next word, which is going to be the start of the answer to the question.
And it generates the next word, it's a feedforward network, by the way, but then the output loops back to the input. So it now knows what its last word was, and then it produces the second word, and it goes on, over and over, again and again, until it reaches some stopping point. I don't know how they program that, because sometimes it goes on for pages, depending on what you asked it to do.
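To make the loop Terry describes concrete, here is a minimal sketch of autoregressive generation. The `next_token_probs` table is a hypothetical stand-in for the trained network (a real LLM computes this distribution with a transformer), and `<eos>` stands in for the learned end-of-sequence token that acts as the stopping point; this is an illustration, not any production system's code.

```python
import random

# Hypothetical stand-in for the trained network: given the tokens so far,
# return a probability distribution over the next token. A real LLM
# computes this with one feedforward pass through a transformer.
def next_token_probs(tokens):
    table = {
        "the":  {"cat": 0.6, "dog": 0.4},
        "cat":  {"sat": 0.7, "ran": 0.3},
        "dog":  {"ran": 0.8, "sat": 0.2},
        "sat":  {"down": 0.6, "<eos>": 0.4},
        "ran":  {"away": 0.6, "<eos>": 0.4},
        "down": {"<eos>": 1.0},
        "away": {"<eos>": 1.0},
    }
    return table[tokens[-1]]  # toy model: condition only on the last token

def generate(prompt, max_tokens=20):
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        # Sample the next token, then loop it back in as new input.
        choices, weights = zip(*probs.items())
        token = random.choices(choices, weights=weights)[0]
        if token == "<eos>":  # the learned stopping signal
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate(["the"]))  # e.g. "the cat sat down"
```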
Guy Kawasaki:
I understand what you just said, but every day, it's just magic to me. In a sense, a lot of people are concerned that we don't know how exactly an LLM did that, but then my counter-argument to them would be, "How well do we understand the human brain? That doesn't upset you so much, why is it so upsetting that you don't know how an LLM thinks?"
Terry Sejnowski:
Also, they complain that ChatGPT is biased. It's the same argument that you just gave: humans are biased too. Then I ask, "Okay, do you think it's going to be easier to fix the LLM or the human?"
Guy Kawasaki:
I think we know the answer to that question. I often talk in front of large tech audiences, and AI is often the topic. And when these skeptics come up and say that LLMs are going to cause nuclear wars and all that, I ask them this question. I say to them, "Let's suppose that you have to have something control nuclear weapons, let's take it as a given we have nuclear weapons. Who would you rather have control of nuclear weapons: Putin, Kim Jong Un, Netanyahu, or ChatGPT?"
And nobody ever says, "Oh, yeah, I think Putin should do it." So last night, I asked ChatGPT this question, and it said, "I wouldn't choose to launch a nuclear weapon. The use of nuclear weapons carries severe humanitarian and environmental consequences, and the global consensus is to work towards disarmament and ensure such weapons are never used." That is a more intelligent answer than any of those people I listed would give.
Terry Sejnowski:
It is remarkable. And the range, it's not just giving sensible answers, it often says things that make me think twice. Also, I don't know if you've tried this, but it turns out that they are also very good at empathy, human empathy. In the book, I had this little excerpt from a doctor whose friend had cancer, and he didn't know quite what to say, so he got some advice from ChatGPT, and it was so much better than what he was going to say.
And then at the end, he went back to ChatGPT and said, "Oh, thank you so much for that advice. It really helped me." And it said, "You are a very good friend. You really helped her." It started consoling him. And where does that come from?
Guy Kawasaki:
It's magic.
Terry Sejnowski:
It is magic, but it turns out that human empathy is embedded, indirectly, in lots of places where humans write about their experiences: biographies, or just novels where doctors are empathizing. I don't know. No one really knows exactly, but it must be in there somewhere.
Guy Kawasaki:
It's kind of blown away the Turing Test, right? You mentioned in your book this concept of the reverse Turing Test, where instead of a human testing a computer, the computer tests the human. And Terry, I think that is a brilliant idea. Couldn't you have a chatbot interview a job applicant and decide if that applicant is right for the job better than a human could?
Terry Sejnowski:
I think it would need a little bit of fine-tuning, but I'm sure it could do a good job, and a lot of companies actually are using it. But here's the problem: the company wants the best employee based on the company's database of people who have done well and people who haven't. What if some minorities haven't done very well for various reasons? There's going to be a bias against those minorities.
Well, you can, in fact, put in guardrails and prevent that from happening. In fact, if diversity is a goal that you have, you should put that into the cost function. Actually, they call it a loss function, but it's really weighting the value of what it is you're trying to accomplish. It has to be told explicitly; you just can't assume.
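A sketch of what "putting diversity into the loss function" could look like: the training objective becomes the usual prediction loss plus an explicitly weighted fairness penalty. The names (`fairness_gap`, `lam`) and the penalty itself are illustrative assumptions, not the method of any real hiring system.

```python
import numpy as np

def fairness_gap(scores, groups):
    """Largest gap between any group's mean score and the overall mean."""
    overall = scores.mean()
    return max(abs(scores[groups == g].mean() - overall)
               for g in np.unique(groups))

def total_loss(scores, labels, groups, lam=0.5):
    prediction_loss = np.mean((scores - labels) ** 2)  # accuracy term
    # lam states explicitly how much the diversity goal is worth
    # relative to raw accuracy; it has to be set, not assumed.
    return prediction_loss + lam * fairness_gap(scores, groups)

# Toy usage: model scores for four candidates in two groups,
# with labels indicating who actually did well.
scores = np.array([0.9, 0.8, 0.4, 0.3])
labels = np.array([1.0, 1.0, 1.0, 0.0])
groups = np.array(["A", "A", "B", "B"])
print(total_loss(scores, labels, groups))
```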
Guy Kawasaki:
But if a chatbot was interviewing a job prospect, I would think that the chatbot doesn't care about the gender of the person, doesn't care about the skin color, the person may have an accent or not. There's a lot of things that humans react to that would not affect the chatbot, right?
Terry Sejnowski:
Okay, okay, okay. So actually, my scenario was slightly different. I was giving you the scenario where a company is trying to hire somebody, and they have specific questions they ask. But if you just have an informal chat, you're absolutely right that the large language model doesn't know ahead of time who it's talking to, and it doesn't even know what persona to take, because it can adopt any persona.
But with time, with answering questions, it will get a sense of what level of answer is expected, and what the intelligence of the interviewer is. And there are many examples of this in my book. Somebody could take a transcript and use that. In fact, I even tried it: "Look, here are four people who have had interviews, and I want you to rate the intelligence of the interview and how well it went." It was really quite striking how the more intelligent the questions, the more intelligent the answers.
Guy Kawasaki:
So in a sense, what you're saying is that if an LLM has hallucinations, it might not be the LLM's fault, as much as the person who created the prompts?
Terry Sejnowski:
I would say not quite. I think hallucinations are a little bit different, in the sense that it will hallucinate when there's no clear answer. It feels compelled to give you an answer, I don't know why, and it will make one up. But it doesn't make up a trivial answer; it's very detailed, it's very plausible. It'll give a reference to a paper that doesn't exist, right? It really makes a large effort to try to convince you that it's got the right answer.
So hallucinations are really, again, something that humans do; people hallucinate. It's not just because they're trying to lie; it's that our memory reconstructs the past. It doesn't record things verbatim, and it fills in a lot of blanks with things that are plausible. And I think that's exactly what's happening here. When it arrives at something where it doesn't know the answer, it hasn't been trained to tell you that, right? It could be, but in the absence of that, it does the best it can.
Guy Kawasaki:
You had a section in your book where you asked via these very simple prompts, "Who holds the record for walking from," I don't know, whatever you said, England to Australia, or something, and the first answer was, yeah, they gave a name and which Olympics, and all that. So I went back last night, and I asked a similar question, "Who first walked from San Francisco to Hawaii?"
And the answer was, "No one has walked from San Francisco to Hawaii, as it is not possible to walk across the Pacific Ocean. The distance between San Francisco and Hawaii is over 2,000 miles, primarily over open water. However, many people have traveled this route by airplane or boat." So are you saying that between the time you wrote your book and the time I did the tests, that LLMs have gotten that much better?
Terry Sejnowski:
First of all, that first question was asked by Doug Hofstadter, who's a very clever cognitive scientist, computer scientist, but he was trying to trip it up clearly. And I think that it probably decided it would play along with him, and just give a silly answer, right? A silly question gets a silly answer. And I think that with you, it probably sized you up, and said, "Well, this guy's a lot smarter. I think there's a smart answer."
Guy Kawasaki:
You're saying I'm smarter than Douglas Hofstadter?
Terry Sejnowski:
Your prompts were smarter.
Guy Kawasaki:
I can stop the interview right there. There were just these jewels in your book, and you drop this jewel that it takes ten hours to become a good prompt writer. I call it baller. So you can be a baller with prompts in ten hours, and that's a thousand times faster than Malcolm Gladwell's 10,000 hours. So can you just give us the gist? People are listening, saying, "Okay, so how do I become a great prompt writer?"
Terry Sejnowski:
It does take practice, and the practice is how you learn the ropes, just the way you learn to drive a car, or anything for which you need skills, like playing tennis, right? You have to know how to adjust your responses to get what you want. But there are some good rules of thumb, and in the book, I actually have a bunch that I was able to get from people who have had a lot of experience.
And here's one. This is from a tech writer who decided that she would use it for a whole month to write her papers and technical reports. And she said that instead of prompting it for just one answer, you give the question, but you should ask for ten different answers.
Otherwise, you're going to have to iterate to steer it in the direction you want to take it. But if you have ten, you can say, "Ah, the third one is much better than all the others, but I want you to do the following with it," and that will help it understand what you're looking for. A bunch of other things came out of it, too, which are quite remarkable. The first was that at the end of the day, she was really exhausted.
It was just exhausting because you're always interacting with the machine, and it's not always giving you what you want. So at the end of the day, it was a chore for her, but she said she was going to keep at it. Then at one point, she realized, "I don't have this problem when I'm talking to people," so she started being more polite.
She said, "Oh, please give me this. Oh, that's such a wonderful answer. I really thought that was great." And it perked up. Actually, she said it was just like talking to somebody. If you're polite, you get better answers. And at the end of the day, she wasn't exhausted; she felt like she'd just had a long discussion with a friend. Who would've guessed that? That's amazing.
Guy Kawasaki:
Wait, I just want to make this perfectly clear. You're saying if you have those kind of human interactions, human nuances, you get better answers from a machine?
Terry Sejnowski:
Yes. That's her discovery, and that's my experience too. Look, it learned from the whole range of human experience and humans interacting with each other, dialogues, and so forth, so it understands a lot about that, and it will adapt if you put it into that frame of mind, if I could use that term. It will continue to interact with you that way, and I think that's really quite remarkable.
Guy Kawasaki:
I have often wondered, because I write a lot of prompts every day, wouldn't it be better for the LLM if it recognized things like capitalization of proper nouns, or quotation marks, or question marks, or exclamation marks, that have these really basic functions in written communication? But it seems like whether you're asking a question or making a statement, the LLM doesn't care. Wouldn't it help the LLM to know that I'm asking a question, as opposed to making a statement, and that Apple is the company, not apple, the fruit?
Terry Sejnowski:
Oh, no. If you put a question mark there, it knows it's a question.
Guy Kawasaki:
It does?
Terry Sejnowski:
I can assure you, yes, absolutely. What happens is that all of the words and punctuation marks are given tokens. In fact, some words have more than one token, like if it's a portmanteau word. And it treats all of those as hints, as giving it some information about the meaning of the sentence, and if it's a question, it's a very different meaning.
So yeah, it will definitely take that into account. At one point, actually not for me, but for someone else, it started giving emojis as output, so it must know what an emoji is.
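You can check this yourself with a tokenizer. Here is a small sketch using OpenAI's open-source tiktoken library (assuming it is installed; the exact splits vary by model), showing that the question mark really does become its own token:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

for text in ["Is Apple a company", "Is Apple a company?"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {pieces}")

# The trailing '?' comes out as a separate token, so "this is a question"
# is part of what the model sees; longer or blended (portmanteau) words
# are often split into several tokens.
```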
Guy Kawasaki:
I learn something every day, thank you for clearing that up for me. With this, just the beauty and magic of LLMs, how would you define learning going forward? Because is it scoring high on an SAT, is it memorization of mathematical formulas, or is it the ability to get a good answer via a prompt? What is learning anymore?
Terry Sejnowski:
So it was taught, it was pre-trained, that's the P in GPT, and it was trained on an enormous number of facts and tests of various sorts, and so it internalized a lot of that. It knows what kind of question you're asking because it's seen millions of questions.
This is something still very mysterious. It turns out there's something called in-context learning. That is to say, if you have a long enough interview, because it keeps adding word after word, it will go off in a certain direction as if it has learned from what you just told it, as if it's building on what you just told it.
And that's, of course, what happens with humans. When you have a long conversation, you take into account your previous discussion and where it went. And it can do that, and that's another thing that is very strange: no one expected that.
The thing is that when they train these networks, they have no idea what they're capable of. Step back a few years, before GPT: in these deep learning models, the learning took place in, typically, feedforward networks. It had a data set, it was given an input, and it was trained to give an output, right?
And so that is supervised learning. And you could do speech recognition that way, object recognition, language translation, a lot of things, but each network is dedicated to one task. What is amazing here is you train it up self-supervised, just to predict the next word, and it can do hundreds and thousands of different tasks. You can ask it to write a poem.
By the way, that's where hallucination is very useful. A haiku, say: it's not a brilliant poet, but it does a pretty good job, and I have a couple of examples in my book. But it has a wide range of talents, language capabilities, that again, no one programmed, no one told it. Or ask it to summarize a long document in a paragraph; it does a really good job of that. It's astonishing.
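The contrast Terry draws between supervised and self-supervised training can be shown in a few lines: supervised learning needs a hand-labeled (input, output) pair for every example and yields one dedicated network per task, while self-supervised training manufactures its own targets from raw text by shifting the sequence one word. A minimal sketch (the file names and sentence are made up):

```python
# Supervised learning: every example needs a human-provided label,
# and the resulting network does one task (here, image labeling).
supervised_data = [
    ("photo_001.jpg", "cat"),  # (input, hand-labeled output)
    ("photo_002.jpg", "dog"),
]

# Self-supervised learning: the text is its own teacher. Targets are
# manufactured by shifting the sequence one word, so any raw text
# yields training pairs for free.
text = "the quick brown fox jumps over the lazy dog".split()
self_supervised_data = [
    (text[:i], text[i])  # (context so far, next word to predict)
    for i in range(1, len(text))
]

for context, target in self_supervised_data[:3]:
    print(context, "->", target)
# ['the'] -> quick
# ['the', 'quick'] -> brown
# ['the', 'quick', 'brown'] -> fox
```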
Guy Kawasaki:
But if you think about it, do you have children?
Terry Sejnowski:
I don't, no.
Guy Kawasaki:
Okay, I have four children, and many times, they come up with stuff that I have no idea how they came up with. So in a sense, you think you know exactly what your child is learning, and you think you're controlling all the data going into your child, so you can predict what they're going to come up with, and then they absolutely knock you off your feet with something: "How the hell did you come up with that idea?"
What's the difference between not knowing how your child works and not knowing how an LLM works? Same thing, right?
Terry Sejnowski:
That's actually a very deep insight, because human beings are picking up things in a very similar way, in terms of the way we take experience in and then codify it somehow in our cortex, in such a way that we can use it in a variety of other ways later on. And they could have picked up things they heard, for example, from you and your wife talking, or it could have been from playing with kids outside. And it's the same thing with ChatGPT: who knows where it's getting all of that ability?
Guy Kawasaki:
Okay. I am going to read you, and this isn't a question, this is just a statement here. I have two absolute favorite quotes from your book, this is one of them, quote, "Usefulness does not depend on academic discussions of intelligence." Oh, my god, I use LLMs every day, and they're so useful for me. I don't give a shit what you say about the academic learning model. What do I care, it's helping me, right?
Terry Sejnowski:
It's a tool, and it's a very valuable tool. And all these academic discussions are really a reflection of the fact that we don't really understand. If experts argue about whether they are intelligent, whether they are understanding, it means that we really don't know the meaning of those words. We just don't understand them at any real, scientific level.
Guy Kawasaki:
Let me ask you something, writer to writer. So if you provided the PDF of your book, and you gave it to OpenAI, and they put it into ChatGPT, would you consider that ripping off your IP, or would you want it inside ChatGPT?
Terry Sejnowski:
I would be honored.
Guy Kawasaki:
Me too.
Terry Sejnowski:
I would brag about it.
Guy Kawasaki:
Me too.
Terry Sejnowski:
I think that there is some concern about the data. Where are these companies getting the data from? Is there proprietary information that they used, and so forth. That's all going to get sorted out. But my favorite example is artists. They say, "Oh, you've used my paintings to train up your DALL-E, or your diffusion model, and I deserve something for that." Then my question is, "When you were learning to be an artist, what did you do?"
Guy Kawasaki:
You copied other artists.
Terry Sejnowski:
You looked at a lot of other artists, and your brain took that in. It didn't memorize them, but it formed features, and later, you depend on all that experience you've had to create something new. But this is the same thing: it's creating something new from everything that it's seen. So it's going to have to be settled in court; I don't know what the right answer is.
But something interesting has happened recently. And by the way, I have a Substack, because the book went to the printer in the summer, so there are all kinds of new things happening. In the Substack, what I do is fill in the new stuff that's happened and put it in the context of the book. It's called Brains and AI.
What's happened is that Mistral and several other companies have discovered what you get if you use quality data, in other words, data that's been curated or comes from a very good source, and that you may have to pay for. Math data, for example: Wolfram Research, Stephen Wolfram, who founded Mathematica, has actually sold a lot of the math that they have.
With quality data, it turns out that you get a much better language model, much better in the sense that you can train with fewer words and a smaller network and get performance that's equal or better. The same thing is true with humans, right? I think what's going to happen is that the models will get smaller and they'll get better.
Guy Kawasaki:
Another author to author question, I'll give you a negative example. So I believe back in the seventies, Kodak defined themselves as a chemical company, and we put chemicals on paper, chemicals on film. The irony is an engineer inside Kodak invented digital photography, but Kodak kept thinking, "We're a chemical company, we're not a preservation of memories company."
If they had repositioned their brains, they would've figured out "We preserve memories, it's better to do it digitally than chemically." So now as an author, and you're also an author, I think, what is my business? Is it chemicals? Is it writing books, or is it the dissemination of information?
And if I zoom out, and I say it's dissemination of information, why am I writing books? Why don't I train an LLM to distribute my knowledge instead of forcing people to read a book? So do you think there are going to be authors in the long run? Because a book is not that efficient a way to pass information.
Terry Sejnowski:
Interesting, and this is already beginning to happen. You know that you can train up an LLM to mimic the speech of people, movie stars, if you have enough data from them. And it turns out that you can not only mimic the voice: someone fed in a lot of Jane Austen novels, and I gave a little excerpt in the book.
You can ask it for advice, and it will start talking as if you're talking to Jane Austen from that era. And there's actually an interesting, potentially important possibility here: if you have enough data about an individual, videos, writing, and so forth, and that could all be downloaded into a large language model, in some ways, it would be you, right? If it has captured all of the external things that you've said and done. So it might, who knows?
Guy Kawasaki:
Terry, I have a company for you. There's a company called Delphi.ai. And Delphi.ai, you can control what goes into the LLM. So Kawasakigpt.com is Delphi AI, and I put in all my books, all my blog posts, all my Substacks, all my interviews, including this interview will go in shortly. So you can go to KawasakiGPT, and you can ask me and 250 guests a question, and I promise you that my LLM answers better than I do.
And in fact, since you talked about Substack, every week, Madisun and I put out a Substack newsletter. And the procedure is we go to KawasakiGPT and we ask it a question, like "What are the key elements of a great pitch for venture capital?" And five seconds later, we have a draft. And we start with that draft, and I don't know how we would do that without KawasakiGPT. That ability to create an LLM for Terry is already here, and Delphi AI has this great feature that you can set the parameters.
So you can say very strict, and very strict means it only uses the data you put in, or it can be creative, and it can go out and get any kind of information. If somebody came to TerryGPT and asked, "How do I do wingsuiting?" and you had it set to strict, assuming you don't know anything about wingsuiting, it would say, "This is not an area of my expertise. You're going to have to look someplace else." That's better than hallucination, right? You've got to try that.
Terry Sejnowski:
I will. I had no idea. And is this open to the public?
Guy Kawasaki:
Yes. And I pay ninety-nine dollars a month for this, and you can set it, so it subscribes to your Substacks, subscribes to your podcast. You can make a Google Drive folder, and whenever you write something, you drop it in the drive, and then it keeps checking the drive every week, and just keeps inputting. And I feel like I'm immortal, Terry, what can I say?
Terry Sejnowski:
It really has a transformative potential. Who would've guessed that this could even be possible a couple of years ago? No one. I think it really is a transition.
Guy Kawasaki:
But just before you get too excited by this, I don't think that there is a market for people's clones, because I'm pretty visible, and I only get five to ten questions a day. It's a nice parlor trick, "Oh, we can ask Guy what he thinks about everything." But after the first hour, six months later, are you going to remember there's KawasakiGPT? I doubt it.
So what you would probably go to is ChatGPT, and say, "What would Guy Kawasaki say are the key elements of a pitch for venture capital?" And ChatGPT will give you an answer almost as good as KawasakiGPT, and you'll never go back to my personal clone again.
Terry Sejnowski:
Yeah, I think your children might, someday.
Guy Kawasaki:
Why would that be true, they don't ask me anything now.
Terry Sejnowski:
Oh, that's interesting. A lot of times, when someone dies, their offspring and close friends say, "Oh, I wish I had asked them that question. It's too late." Now, if they have something like this, you're there to ask the question.
Guy Kawasaki:
Okay, I did it for my kids.
Terry Sejnowski:
Yeah.
Guy Kawasaki:
I'm going to get a little bit political on you right now. It seems to me, that people can try this who are listening, go to ChatGPT, and ask, "Should we teach the history of slavery?" Ask questions about should we have a biblically-based curriculum in public schools? Go ask all those kind of questions, you're going to be amazed at the answers.
So my question for you is, don't you think that in the very near future, red states, or let's say a certain political party, are going to block access to LLMs? Because if LLMs are telling you, "Yes, we should teach the history of slavery," I can't imagine Ron DeSantis wanting people to ask ChatGPT that question.
Terry Sejnowski:
So now we're getting into hot water here.
Guy Kawasaki:
You're tenured, right?
Terry Sejnowski:
It's not just ChatGPT; we're talking about all of these high-tech websites, these repositories of knowledge and information that you can search. They have a devil of a time trying to figure it out; they have thousands of people actually doing this, constantly looking at hate speech, things that are said on Twitter or wherever, that have to be scrubbed. Now the problem is, who's scrubbing it? What do they consider bad?
And if humans can't agree, how can you possibly have a rule that is going to be good for everybody? I think it's an unsolved problem, and I think it reflects the disagreements that humans have more than the fact that ChatGPT can't decide what to say. But it's interesting; you could probably push it in certain directions. I think that people have tried that. They've tried to break it one way or another.
Guy Kawasaki:
It may be that many Republicans have never tried an LLM, but I'm telling you, if they tried it, they would say, "LLMs are woke, and we've got to get all this woke stuff out of the system." I can't imagine.
Terry Sejnowski:
Okay. My guess is that you could also get a woke person talking to it and coming to the conclusion that it's a flaming conservative. In other words, you can push it around. It can do either; it has the world's literature on both sides. And this is exactly the problem: it is reflecting you. It's a mirror hypothesis, reflecting the kind of stance that you're taking. And in some ways, it's like a chameleon, right? It'll change its color depending on how you're pushing it.
Guy Kawasaki:
That's no different than what people do in a conversation.
Terry Sejnowski:
That's right. And also, people are polite, they generally stay away from things that are controversial. And yeah, we need that in order to be able to get along with each other, right? It would be terrible if all we did was argue with each other all night.
Guy Kawasaki:
About a year or two ago, I won't prejudice your answer, there was this idea that we would have a six-month kind of timeout while we figure out the implications of AI. Is that the stupidest thing you ever heard? How do you take a timeout from AI? Let's like timeout, and figure out what we're going to do.
Terry Sejnowski:
That was done by, I think, 500 machine learning and AI people who decided, in their wisdom, that we had to. You're right, it was a moratorium. And I think it was specifically on these very large GPT models, that we shouldn't try to train them beyond where they are, because they might be superintelligent, and they might actually take over the world and wipe out humans. This is all science fiction, right? That's what we're talking about.
And in the book, I came across an article in The Economist, where they had superforecasters who had a track record of being able to make predictions about catastrophic events, wars, and technologies, nuclear technology, better than the average person.
And then they also compared the predictions with experts. And it turns out that experts are a factor of ten more pessimistic, in terms of whether something's going to happen or when it's going to happen, than the superforecasters. And I think that's what's happening: they think that their technology is so dangerous that it needs to be stopped.
Guy Kawasaki:
When I read that section of your book, I had to read it two or three times, because it's exactly the opposite of what I thought it would be: that the superforecasters would predict Armageddon, and the technical people would say, "No, it's okay." How do you explain that?
Terry Sejnowski:
There's a simple explanation. I think everybody believes that what they are doing is more important, in terms of its impact, than it might be. Actually, this is funny. When Obama was elected president, he said that he was going to support science, and that was wonderful.
And so the newspapers asked scientists, "What areas of science do you think the government should support?" And almost every person said, "What I'm doing, it's the most important area." Because they're the closest to it, and of course, they've committed their lives to it, so they must be the ones to do it.
Guy Kawasaki:
Okay. I mentioned that I had two absolute gems that I loved as quotes in your book, and I'm coming to the second one. And the second one is not necessarily a quote, but I want you to explain the situation when you say that Sam Altman had, shall I say, symptoms of Toxoplasma gondii, the brain parasite that makes rodents unafraid of cats and more likely to be eaten. So why did you say that about Sam Altman?
Terry Sejnowski:
Okay, so first of all, this is a biological thing that happens in the brain of the poor mouse or rat. Now, there was a time when Sam Altman would go to Washington, and not just testify before Congress, but actually go and have dinners with Congress people and talk to them. And the history is that Bill Gates got pulled in and grilled in congressional testimony, and tech executives have had an aversion ever since.
So here's this guy going in, not just for testimony, but actually trying to be a part of their social life. It just seemed contrary to the traditional way that most humans would deal with people who are out to regulate them.
But actually, somewhere later in the book, I think I identified another explanation, which is that the regulation is an interesting thing because it basically puts up barriers, right? It turns out if you have lots of lawyers, you can find loopholes. There's always a loophole, right? And if you're rich, you can afford lawyers to find the loopholes for you.
And of course, the big corporations, high tech, Google and OpenAI, they have the best lawyers. They can hire the best lawyers to get around any regulation, whereas some poor startup, they can't do that. So it'll give the big companies an advantage to have regulations out there.
Guy Kawasaki:
But couldn't a scrappy, small, under-capitalized startup ask an LLM, "What are the loopholes in this regulation?" It would find them.
Terry Sejnowski:
Ah, okay. Well, now you're saying that, in fact, they could use an LLM. They're not going to be able to make their own, so they're going to have to use the big ones that are already out there. And it could be that these companies are actually democratizing lawyers. By the way, it's not just lawyers and laws, it's also reporting. In other words, a tremendous amount of what regulators want is for the companies to run tests and provide lots of examples.
It's like the FAA: before an airplane is allowed to carry passengers, it's got to go through a whole series of very stringent tests. It has to be put into the worst weather conditions to make sure it's stressed, a stress test. And again, all of that testing basically favors a large company that has lots of resources to do it.
And it may not be easy for a small company, so it's complicated. But in any case, I think that what's happening right now, is that the Europeans have this AI law that is one hundred pages, with very strict rules about what you can and cannot do. Like you can't use it for interviewing future employees for companies.
Guy Kawasaki:
Wait, we just advocated for that.
Terry Sejnowski:
Yeah. We'll see what happens in the US, because right now, it's not proscriptive, it's suggestive that we follow these rules.
Guy Kawasaki:
And what would be the thinking that you can't use it to interview employees in Europe? What are they worried about?
Terry Sejnowski:
Oh, bias.
Guy Kawasaki:
Bias, as opposed to human bias, like a male recruiter falls for an attractive female candidate?
Terry Sejnowski:
Okay, that's also a bias, I guess. There probably is some law there, I don't know. Not only are we biased, but we're biased in our biases, who we talk to and things like that.
Guy Kawasaki:
All right. I got to tell you one more part I really loved about your book is when you had the long description of legalese, and then you had the LLM simplify a contract, and that was just beautiful. Why do terms of service have to be so absolutely impenetrable? And you showed an example of how it could be done so much better.
Terry Sejnowski:
That is happening right now, I think, in a lot of places. And this is a big transformation that's occurring within companies now: employees are using these tools to help, first of all, keep track of meetings, so you don't have to have someone there taking notes, because the whole thing gets summarized at the end of the meeting. It's really good at that, and at speech recognition.
Guy Kawasaki:
And you also mentioned that when doctors are interviewing patients, that instead of looking at the keyboard and the monitor, they should be just listening, and let the recording take care of all that, right?
Terry Sejnowski:
Yes. That's a huge benefit, because looking at the patient carries a lot of information: their expressions, the color of their skin. All of that is part of being a doctor, and if you're not looking at that, you're not really being a good doctor.
Guy Kawasaki:
Okay, this is seriously my last question. I love the fact that the first few chapters, at the end, they had these questions that probably ChatGPT generated. Why didn't you continue that through the whole book, so every chapter ends with questions?
Terry Sejnowski:
I don't know. I hadn't thought about it. I'll tell you, I wrote the book over the course of a year. I do use it throughout the book. I have sections that I actually set apart and say, "This is ChatGPT." At the end, there's this little sign, the OpenAI sign. And I ask it to summarize parts. And sometimes I ask it to come up with, say, five questions from a chapter, and that's where Alex the parrot popped out.
Guy Kawasaki:
Am I the first person to catch the fact that Alex the parrot was not mentioned in the text, except for the footnote?
Terry Sejnowski:
You are the first person, though I suspect there are others who noticed it. But actually, it's good to have a few little morsels in there that you have to hunt for, a little detective story. Who is Alex the parrot?
Guy Kawasaki:
All right, I really want you to sell a lot of copies of this book, so give us your best shot promo for your book.
Terry Sejnowski:
Everything you've always wanted to know about large language models and ChatGPT, and were not afraid to ask.
Guy Kawasaki:
That's a good positioning. I like that. It's like that book from way back in my past, Everything You Wanted to Know About Sex: But Were Afraid to Ask, right? As I learned from Steve Jobs, you've got to learn what to steal. That's a talent in and of itself.
Terry Sejnowski:
You're paying homage to the past. But I'll tell you, I wrote this for the public. I thought that the news articles were misleading, and all of this talk about superintelligence, although it's a concern, it's not an immediate concern, but we have to be careful, that's for sure. And I hope the book helps.
I'm trying to help people. When I give talks, they ask, "Will I lose my job?" And I say, "You may not lose your job, but it's going to change. And you have to have new skills. And maybe that's going to be part of your new job, is to use these AI tools."
Guy Kawasaki:
Well, as you mentioned in your book, when we started getting farm equipment, there were a lot fewer farmers. You could manage thousands of acres with one person, right?
Terry Sejnowski:
Yes, that's true. But the children went to the cities, and they worked in factories, so they had different jobs. It wasn't working to grow food; it was working to make cloth, and automobiles, and things.
Guy Kawasaki:
And LLMs eventually.
Terry Sejnowski:
Yes. Eventually, for some of us.
Guy Kawasaki:
Alrighty. I just want to thank you, Terry, very much. I found your book not only interesting and informative; there were places where I was just busting out laughing. And I'm not sure that was your intention, but when I read that thing about Sam Altman's brain having that thing that makes rodents less afraid of cats, I thought, "Oh, my God, this guy is a funny guy." One of my theories in life is that a sense of humor is a sign of intelligence.
Terry Sejnowski:
Oh, good. Okay. Actually, I'll tell you, if this is interesting, who gets the Academy Awards? It's the actor who's in some terrible drama, where something bad happens, and so forth, and then they overlook all the fantastic comedians.
It turns out it's much more difficult to be a comedian than to be somebody who has angst, and they're not given the same respect. I had no idea that you'd read the whole book. Most of the people who interview me have read some parts, but you, it sounds like you read the whole book. It's amazing.
Guy Kawasaki:
Do you know the story of the chauffeur and the physicist?
Terry Sejnowski:
No.
Guy Kawasaki:
Okay, this is along the lines of what you just said, that I read the whole book. So this physicist is on a book tour, let's say it's Stephen Wolfram or Neil deGrasse Tyson. So anyway, they're on this book tour, and they're going to make four stops in this city, and the chauffeur takes them from stop to stop. So the chauffeur sits in the back and listens to the first three times the physicist gives the talk.
Before the fourth stop, the physicist says, "I am exhausted. You've heard me give this talk three times, you go give the talk." And the chauffeur says, "Yeah, I can do it. I heard you three times." The chauffeur goes up, gives the talk, but he ends early. And so the emcee, the host of the event, says to the chauffeur, "Oh, we're lucky we ended early. We're going to take some Q&A from the audience."
So the first question comes up, and it's about physics, and the chauffeur has no idea. And the chauffeur says, "This question is so simplistic, I'm going to let my chauffeur sitting in the back answer it." So I'm your chauffeur.
Terry Sejnowski:
Oh, that's wonderful.
Guy Kawasaki:
All right, Terry, thank you.
Terry Sejnowski:
Well, thank you.
Guy Kawasaki:
I truly enjoyed this.
Terry Sejnowski:
I did too, this was great.
Guy Kawasaki:
All right, all the best to you. Thank you.
Terry Sejnowski:
Okay, good luck.
Guy Kawasaki:
Bye-bye.
Terry Sejnowski:
Take care. Bye-bye.