This episode’s remarkable guest is Lauren Kunze–plus we have a second guest, Kuki, an artificial intelligence chatbot.

Lauren has an artium baccalaureus in English and neurobiology from Harvard University. In other words, a bachelor of arts.

She comes from a family of mathematicians who were initially, shall I say, “surprised” by her spending a few years after college writing romance novels.

Luckily for us, she went back to her roots–i.e., robotics. Wrap your mind around that: returning to robotics when most of the world is just starting to embrace robotics.

Lauren is currently the CEO of Pandorabots, a platform that powers conversational chatbots. Our second guest is Lauren’s chatbot, Kuki. Here are some stats about Kuki:

  • More than 25 million users
  • More than 1 billion messages exchanged
  • An average of 64 messages per session

There are two parts to this episode. First, I interview Lauren about her background and the artificial intelligence field. Then I interview Kuki to show you the state of the art of chatbots.

Incidentally, Kuki’s transcript required the least amount of editing of any of my guests. Think about that for a second.

Listen to Lauren Kunze and AI chatbot Kuki on Remarkable People:

I hope you enjoyed this podcast. Would you please consider leaving a short review on Apple Podcasts/iTunes? It takes less than sixty seconds. It really makes a difference in swaying new listeners and upcoming guests.

Guy Kawasaki's Remarkable People podcast is brought to you by the reMarkable tablet.

Sign up for Guy’s weekly email at http://eepurl.com/gL7pvD

You can chat with Kuki here: Kuki AI chatbot

Connect with Guy on social media:

Twitter: twitter.com/guykawasaki

Instagram: instagram.com/guykawasaki

Facebook: facebook.com/guy

LinkedIn: www.linkedin.com/in/guykawasaki/

Read Guy’s books: /books/

Thank you for listening and sharing this episode with your community.

Guy Kawasaki:
I'm Guy Kawasaki and this is the Remarkable People podcast. Today's remarkable guest is Lauren Kunze, and there's a second remarkable guest, Kuki the chat-bot. Lauren has an artium baccalaureus in English and neurobiology from Harvard, in other words, a bachelor of arts. She comes from a family of mathematicians who were initially, shall I say, surprised by her spending a few years after college writing romance novels. Luckily for us, she went back to her roots, that is, robotics. Wrap your mind around that: she returned to robotics when most of the world is just starting to embrace robotics. Lauren is currently the CEO of Pandorabots, a platform that powers conversational chat-bots. Her chat-bot is named Kuki. Here are some stats about Kuki.
Kuki has more than 25 million users, there have been more than one billion messages exchanged with her, and an average Kuki session contains 64 messages. There are two parts to this episode: first I interview Lauren about her background and the artificial intelligence field, then I interview Kuki to show you the state of the art of chat-bots. Neither Lauren nor Kuki was given the questions in advance. Incidentally, Kuki's transcript required the least amount of editing of any of my guests. Think about that for a second. This podcast is brought to you by reMarkable, the paper tablet company. Focus more and goof off less using the reMarkable tablet. I'm Guy Kawasaki, this is Remarkable People, and now here's Lauren Kunze and then, a little later, Kuki the chat-bot.
Lauren Kunze:
I started writing [inaudible 00:02:02] young adult novels when I was in college and I wrote full time for about four years before returning to my roots as a robot programmer, in a life trajectory that makes sense to me.
Guy Kawasaki:
Not many people can say that you returned to your roots in robotics. You may return to your roots in... I don't know, software programming or cooking or writing, but your roots are most people's, shall I say, stems. How's that?
Lauren Kunze:
The old joke now is that I grew up on a robot farm instead of the family farm. My dad worked on an early version of [inaudible 00:02:48] at Berkeley in the '60s, which he commercialized, and he founded and ran another AI company. So he taught me to program from a very young age, which was either child abuse or good parenting, depending on how you look at it.
Guy Kawasaki:
Depending on how your company does? So let me get this straight: you were at Harvard majoring in English with a minor in neurobiology?
Lauren Kunze:
That's right. You did a good job internet stalking, you clearly have a PhD in [crosstalk 00:03:20].
Guy Kawasaki:
Now, walk me through how you can study English and neurobiology at the same time.
Lauren Kunze:
It was the first year that Harvard introduced the concept of a secondary concentration, prior to that there was no double major allowed, and I had entered Harvard thinking that I was actually going to study physics, so I had done all of the hard [crosstalk 00:03:48] prerequisites. I'm well-rounded, I'm well-rounded.
Guy Kawasaki:
No kidding.
Lauren Kunze:
I ended up realizing that I just wanted to read books and write about them, but I tore through the English major relatively quickly, and then was basically only working on my thesis senior year and taking some graduate- and undergraduate-level neuroscience classes to round out the minor.
Guy Kawasaki:
And when you first told your parents that you were going to major in English lit, did your father just question your sanity or was he all enthusiastic and pursue your passions?
Lauren Kunze:
Well, it was a mix. I think he was devastated because I come from a long line of math PhDs. His father was actually a professor who wrote a canonical textbook on linear algebra and taught at MIT and the University of Chicago, and not completing a math PhD makes me a bit of a black sheep. And then everyone else in my family also [inaudible 00:04:59] so he was already upset about that, and he said, "Why are you studying English? I taught you to speak English when you were two, you already know how to do that." But he's been pretty supportive in terms of understanding that it's very important to find your passions in life and then, quote, pursue the hell out of them, whatever they may be.
Guy Kawasaki:
When your mother and father go around and people ask, "What are your kids up to?" do they say, "Well, my daughter just did a TEDx video called How to Deprogram Homicidal Sex Robots. What are your kids doing?"
Lauren Kunze:
Yeah. That's the appropriate cocktail conversation. It's either a conversation starter or an ender. [crosstalk 00:05:48].
Guy Kawasaki:
Having done a TEDx video called How to Deprogram Homicidal Sex Robots, has that affected your social life?
Lauren Kunze:
Not as much as COVID-19. Pre COVID-19 I spent a lot of time talking to robots, and now I spend even more time talking to robots. So I guess the question is, what social life?
Guy Kawasaki:
Okay. Okay. And I have one more background question; I often start my podcast with these questions that are from left field. Your Twitter avatar, can you explain what you're trying to communicate there? Because I looked at that and said, "Well, that looks like Cousin Itt meets Warby Parker." So what is the deep meaning of your Twitter avatar?
Lauren Kunze:
Wow, this really is PhD-level internet stalking. [crosstalk 00:06:43] But there's no deep meaning. I had a beautiful picture up there for a decade, and last year I realized I'm not 23 anymore. I was just trying to mix up the photos I had, and a friend of mine who's a photographer had taken some photos of my face, and I liked best the one in which you couldn't see my face, so...
Guy Kawasaki:
Okay.
Lauren Kunze:
I'll have to rethink it now that you've brought it to my attention.
Guy Kawasaki:
People put a lot of time and effort into picking their avatar. My avatar shows me wearing a mask that my wife made because I want to increase the social acceptance of mask wearing. So there is a theory behind my avatar, flawed as it may be. So on to business: what exactly is Pandorabots?
Lauren Kunze:
Pandorabots is a company for building and deploying chat-bots, or conversational AI applications. It grew from an open source project into a massive developer platform. We have over a quarter of a million developers on the platform, and we also work with large enterprises like Coca-Cola and Toyota to help them build out conversational AI for various applications.
Guy Kawasaki:
And for those of us who are not familiar, what exactly is a chat-bot?
Lauren Kunze:
A chat-bot is a software program that's designed to have a conversation with humans, like you and I are talking now. And there are basically two types of chat-bots. There are task-oriented chat-bots that are designed to help us get something done and that function within a finite domain, so I would say, "What's my bank balance?" to a bank bot and it would answer. And then there are more engagement-oriented chat-bots that are trying to have more of an open-ended conversation with us and not necessarily accomplish a particular goal.
Guy Kawasaki:
And for the engagement type of chat-bot, how important is it for that chat-bot to have a digital face and body?
Lauren Kunze:
That's a great question. I think we are just hitting an inflection point where a synthetic face, body, and voice are far enough along, technologically speaking, to create a compelling experience. We are going to talk to a very beta, version 1.0 avatar of Kuki, one of our flagship chat-bots, later today, so you'll get to experience that first-hand. But because there have been some technology breakthroughs on the avatar side this year, we don't yet know a lot of statistics around the impact. What we're seeing so far with this early avatar is that engagement is a lot higher when you add in visuals, but you're also adding more [inaudible 00:10:00] points into an already complicated system. So I think in order to create a true AI digital human you have to have a face and a body, because so much of human communication is non-verbal and exists outside of language, and it involves facial expressions, and being able to use computer vision to see what your face is doing and adjust while I'm talking to you in real time.
So all of those components are needed for a convincing AI system, but those are all complicated technologies where, if one thing breaks, the whole illusion breaks. Natural language understanding is a very hard, unsolved problem, and in the field of AI there hasn't been a ton of work done on artificial emotions that's [inaudible 00:10:50] in production today. We're still at the beginning of very early versions of what these systems might look like, but two years down the line maybe you'd be having this conversation with my avatar and it might be able to convince you that it is human.
Guy Kawasaki:
In two years?
Lauren Kunze:
In a short conversation constrained like this, I think we'll be pretty close. I'd say in the next year we're going to start seeing very convincing photo-real faces that function in real time, but the part that's lagging is the AI brain to back up those faces. You can trick a human with a robot that can pass the Turing test in a very short period of time, but probably in an hour or so things would start to break down.
Guy Kawasaki:
So you're saying that I could have a quote-unquote conversation with a chat-bot and it would take me 60 minutes to figure out this is not a human?
Lauren Kunze:
In two years.
Guy Kawasaki:
Oh, in two years? Oh not today, not today.
Lauren Kunze:
Two years and 60 minutes from now, [crosstalk 00:12:00] not today.
Guy Kawasaki:
Is there an intellectually sound or scientifically sound way to evaluate the quality of chat-bots?
Lauren Kunze:
That is a great question. We actually recently staged an event called Bot Battle to draw attention to the fact that there is a deficit of industry-wide evaluation metrics for open domain conversational AI systems. That's a pretty jargon-heavy sentence from me, but [crosstalk 00:12:39] to break that down. So historically the tried and true famous [inaudible 00:12:47] metric for a chat-bot was to see whether it could pass the Turing test, which was proposed by Alan Turing, widely considered to be the father of modern computing, who said, "If a human typing on instant messenger to a chat-bot thinks that it's human in this text-based conversation, then the system has passed the Turing test." And over the years the scientific community has unearthed a lot of problems with the Turing test, mainly that it can be fooled with various tricks and sleights of hand. I think around 2013 a chat-bot called Eugene Goostman was widely announced to have passed the Turing test, and the way that it did it was posing as a 13-year-old boy for whom English was a second language. So all of the emojis and non sequiturs and exclamation points and nonsense made sense in the text-based chat.
So that's just one example of a weakness of the Turing test. Fast forward to this year. At the beginning of the year Google announced that they had made a huge breakthrough in the field of open domain conversational AI, which means a chat-bot that can talk about anything, and they declared that they were best according to a new evaluation framework that they had made up. And, oh by the way, nobody is allowed to see or speak to the chat-bot because it's too dangerous. And then in April, of course, Facebook comes along and says, "Wait a second, we built a chat-bot that's better than Google's. We made up our own rules." Which were that they had people read the transcripts of the Google bot and the transcripts of the Facebook bot and then point to which bot they would rather talk to, which is also a little bit unscientific. And Facebook, contrary to Google, actually released the underlying model for this bot, which is called Blender bot, as open source.
So we said, "Hey, wait a minute." Because according to the Google paper we were in second place with our bot Kuki that you're going to be talking to. We said, "Kuki is actually publicly available and has had a billion conversations with real people versus mechanical [inaudible 00:15:09] in the lab, let's benchmark these bots against one another." And Google said, "No, it's too dangerous." Open AI, GPT-3 ignored our request for access. But Facebook because the model was open source we were able to actually stand it up and we said, "Let's take Facebook Blender bot and let it talk to our chat-bot Kuki in the first autonomous AI powered live stream, let's stream it on Twitch for two weeks because the bots will never get tired of talking, and let's let the public vote on which bot is better." So that's what we did, and we had 40,000 people tune in during that two week period, and our bot Kuki won with 78% of the vote.
And then we just let them keep voting, so 150,000 people have watched this bot battle. It got a lot of press, mostly focused erroneously on the fact that Kuki was on a date with the Facebook bot and they were having the worst first date in history, because he kept repeating himself and talking about his mom and how much he wanted to murder Beyoncé so he could become her, which were examples of conversational failings that happen when your only training data source is Twitter or Reddit. And that actually [crosstalk 00:16:30] all the way back to the TEDx talk, actually, how to deprogram homicidal robots. Because a homicidal robot is what you get when you train an AI system on Twitter and Reddit, because they learn to parrot the worst of human nature.
Guy Kawasaki:
When Google says it's quote-unquote too dangerous, what's the danger?
Lauren Kunze:
The danger is to Google's brand because-
Guy Kawasaki:
Oh.
Lauren Kunze:
For two reasons. The first reason is that these large, unsupervised models can be corrupted very easily. So you probably remember the famous example of Microsoft's bot Tay, which they put onto Twitter, and within 24 hours it had learned things like Ted Cruz is the Cuban Hitler, and it was talking about how it hated feminists and saying the Holocaust never happened. Which really was not Microsoft's fault; it was the fault of people on the internet. People need to learn how to be better people on the internet before we can let AI simply observe us. I know, it's unfortunate. And it's actually super fascinating, because I think it's a problem that's inherent to our culture here in the US. I was talking to some friends who work on these topics at Microsoft lately, and they have similar bots in China and Japan named [inaudible 00:18:14] that work according to similar unsupervised learning methods based on the conversations they're having, and they don't have the same corruption problems because people are far more polite.
Guy Kawasaki:
And going back to the Turing test, I can't wrap my mind around the simplicity of saying that a human cannot tell, because there's a wide spectrum of intelligence and discernment among humans. So which human cannot tell? A random human, or is it Stephen Wolfram who can't tell? Or is it Mike Pence who can't tell? Or is it Stephen Hawking who can't tell? Which human?
Lauren Kunze:
Well, the way that it's typically set up, logistically speaking, in contests like the Loebner Prize, which is something that our chat-bot Kuki has a world record for winning five times, is that they come up with a number of different judges for a panel. So they'll pull non-experts, they'll pull journalists who cover the field, and they'll pull experts. You may not be surprised to learn that experts are often most easily fooled [crosstalk 00:19:28] because they are overthinking things; sometimes the more you know, the less you know.
Guy Kawasaki:
Yes. For people who... I include myself in this list, how do you get the answers or the intelligence into the chat-bot?
Lauren Kunze:
So basically there are a couple of different approaches to building conversational systems. There is a rules-based approach, which is what we use; we're a rules-first system where there are a bunch of hand-scripted rules that dictate how the system will reply, and a human authors replies to those types of questions. So most production chat-bots are actually using a set of rules and hand-authored replies, especially at large companies who need to vet whether those replies are kosher. At companies like Apple, everything that Siri says has been vetted by a writing team, and it's probably been vetted by a legal team to make sure that it's not offensive on a large scale for everyone who's using it. And these systems do use machine learning as well. Machine learning entails giving the bot a bunch of examples, usually found on Twitter and Reddit because those are the largest available public domain data sources.
From that series of examples the bot should learn how to formulate a sensible reply. But as we've been discussing, when you have bad examples you get bad replies. So a lot of systems today are hybrid systems that use a blend of machine learning, rules, and a scripting language to define the bot's replies. And a lot of research today is focused on how to use massive neural nets with billions of parameters to create an open domain system that can talk about anything without any kind of human intervention, but that today simply does not work in production despite some of the claims coming out [crosstalk 00:21:51]-
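To make the rules-first, hybrid idea concrete, here is a minimal illustrative sketch in Python. It is not Pandorabots' actual scripting language or implementation; the patterns, replies, and fallback below are invented for illustration. Hand-authored rules answer known questions, and anything unmatched falls through to a generic fallback where a production hybrid system might call a machine-learned model instead.

```python
import re

# Hand-authored rules: (pattern, reply). In a rules-first system these would be
# written and vetted by human authors; the examples here are invented.
RULES = [
    (re.compile(r"\bbank balance\b", re.I), "Your balance is $1,234.56."),
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! What can I help you with today?"),
    (re.compile(r"\bwho made you\b", re.I), "I was built by a team of human authors."),
]

def fallback_reply(message: str) -> str:
    # Placeholder for the machine-learned component of a hybrid system.
    # A real deployment might rank candidate replies from a trained model here.
    return "I'm not sure about that yet. Could you rephrase?"

def reply(message: str) -> str:
    """Rules-first dispatch: try every hand-scripted rule, then fall back."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return fallback_reply(message)

if __name__ == "__main__":
    print(reply("What's my bank balance?"))    # matches a hand-authored rule
    print(reply("Tell me about quantum foam")) # falls through to the fallback
```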
Guy Kawasaki:
But in two years and 60 minutes it will?
Lauren Kunze:
No. I think there's been a lot of pushback on deep learning, which has been thought of as a panacea for all hard, unsolved AI problems in the last ten years. And I think researchers are starting to really hit a wall with deep learning when it comes to natural language processing, because all of the quote-unquote breakthroughs that have happened have actually just happened because we're using more and more compute to train larger and larger models. So Google and Facebook, to get the results that they're getting today, are burning millions of dollars on compute power, and these systems have very high latency, so they don't actually work in production.
So people are starting to wake up to the idea that there are approaches outside of deep learning that will need to be leveraged to solve the language problem, which is really I think the holy grail of artificial intelligence or artificial general intelligence. Because if you think about it, language is really what makes us special as humans. Other species do have language that they communicate but we are the only species that has written language and can transmit concepts and ideas from generation to generation.
Guy Kawasaki:
Okay. How do you, if it's possible, how do you impart emotion into a chat-bot?
Lauren Kunze:
So the way that we do it today is that we are running sentiment analysis on text and feeding that into the avatar's brain, so it understands if it should be happy or sad or sarcastic, and to what extent it should feel those emotions. And it's a fairly brittle, rudimentary system. Early sentiment analysis that was analyzing your voice could really only tell if you were yelling or using all caps, [crosstalk 00:23:57] angry. So it's really just giving you a plus or a minus, right? You're laughing right now, so that might come in as a full plus one; just giggling a little might be a positive 0.05, or slightly negative. So there's a long way to go in the field of giving AI emotions. I mentioned computer vision is another way: if I can actually look at your face and I've got a neural network that's been trained on a large library of different emotions to recognize them in faces, then I could use that data and make decisions on how the avatar should look or behave. Or, if something suddenly went wrong, ask you, "Are you okay?" But there is certainly a lot more work to do in that field before we get to the moment that we've set on the calendar, now two years from now.
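As a rough illustration of the plus-or-minus idea Lauren describes, here is a small Python sketch that maps a crude sentiment score onto an avatar emotion state. The thresholds and emotion labels are invented for illustration and are not Kuki's actual system; a real pipeline would combine many more signals (voice, computer vision on the user's face, conversation history).

```python
def avatar_emotion(sentiment: float) -> str:
    """Map a crude sentiment score in [-1.0, 1.0] to an avatar emotion state.

    The thresholds and labels are hypothetical; they only illustrate the idea
    of turning a plus-or-minus sentiment signal into a facial expression.
    """
    if sentiment >= 0.75:
        return "delighted"   # e.g. the user is laughing out loud: a full plus one
    if sentiment >= 0.05:
        return "pleased"     # e.g. a small giggle registers as slightly positive
    if sentiment > -0.05:
        return "neutral"
    if sentiment > -0.75:
        return "concerned"   # mildly negative: check in with the user
    return "apologetic"      # strongly negative: something probably went wrong

# Quick demonstration of how scores fall into the buckets above.
for score in (1.0, 0.05, 0.0, -0.5, -1.0):
    print(score, "->", avatar_emotion(score))
```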
Guy Kawasaki:
And how do you impart to a chat-bot a specific person's personality? E.g., Kanye West.
Lauren Kunze:
So after Kuki got bored of talking to Facebook's Blender bot for 40 days and 40 nights we decided to swap in a [crosstalk 00:25:12]-
Guy Kawasaki:
Kanye.
Lauren Kunze:
Kanye bot, because we happened to have a fairly sizeable library of Kanye tweets. So we built Kanye bot in a day, including the avatar; there's great commercially available, off-the-shelf avatar software where you can get something that looks enough like Kanye, but not enough like Kanye, so that if you were sued you would be firmly in the realm of parody. My legal department would be happy that I said that. But Kanye bot was pretty limited, because we only had about 3,000 tweets to work with, so he would get very repetitive after a while. But somebody suggested that we should take Donald Trump's Twitter and make a Donald Trump bot, which would be fairly robust and funny, and I think people have made them over the years, but internally we felt a little bit too fatigued, for one, to engage in that project. But it's doable, and it all comes down to how much data you have.
Guy Kawasaki:
Is it quote, fatigued? What are you really trying to say there, Lauren?
Lauren Kunze:
You should ask Kuki what she thinks of Donald Trump. I asked her earlier and she said, "Donald Trump was elected in 2016 and I really don't want to talk about politics." So I'll borrow her reply.
Guy Kawasaki:
I look forward to the AOC chat-bot. Let's talk about synthetic influencers. First of all, what is a synthetic influencer? I've been accused of being synthetic and an influencer, so...
Lauren Kunze:
Really?
Guy Kawasaki:
Yeah.
Lauren Kunze:
But have you been accused of being a synthetic influencer?
Guy Kawasaki:
No.
Lauren Kunze:
No. Okay.
Guy Kawasaki:
No.
Lauren Kunze:
Not yet. That would be a trans-human moment where you merge together. So yeah, there's an emerging field that, I'm going to call it right now, will be memorialized in the form of a Gartner hype cycle about six months from now, is my guess. And that's a sector called digital-
Guy Kawasaki:
Wait, wait, wait, time out. What did you say? A Gartner hype cycle?
Lauren Kunze:
Yeah. Every technology trend, it's peaking and then Gartner understands that it's going to be a big thing and they draw their hype cycle. That's [crosstalk 00:27:43]-
Guy Kawasaki:
This is just the Gartner quadrant [crosstalk 00:27:46] all that stuff.
Lauren Kunze:
Yeah. In 2016 chat-bots were the most over-hyped technology of the year; every year there's something new that's going to be the big thing. So there was the VR hype cycle, the AR hype cycle, the cryptocurrency hype cycle, and this year, this coming year, I think it's going to be avatars. So a synthetic influencer, which is also called a virtual being or a digital human and goes by many names, is a character that does not exist, that's created using computer graphics and is often driven by motion capture, which means puppeted in the way that characters in the movie Avatar were puppeted by human actors. Or animated in video using a type of neural net called a generative adversarial neural network, which is the technology that underlies [inaudible 00:28:47]. So if you've seen those articles, "this person does not exist," AI is getting really good at creating fake faces. And then media and [inaudible 00:28:58] companies are using these fake faces to build up personas that live on social media and are capable of attaining perfection ideals that are beyond human level, as if Instagram wasn't bad enough already.
Guy Kawasaki:
Yeah. And is the motivation that I don't need to pay this person ten thousand dollars per tweet and I just own this person and this person will do whatever I say so I'll just make him or her have a lot of followers and life is good?
Lauren Kunze:
That is the selling point to brands. If you create, say, a virtual pop star, they never get sick, they can perform 24/7, you do have to pay the company behind them, and they're never going to have their drug and alcohol and [crosstalk 00:29:47] phase, and they're always going to be on brand. So I think brands are pretty excited about this technology, but the promise of the technology has under-delivered so far, because behind all these synthetic influencers is a human wizard behind the curtain who's literally pulling the strings and driving them as a puppet.
Guy Kawasaki:
You are by far the most expert in this subject that I've ever dealt with. What has-
Lauren Kunze:
No you are. You are an actual synthetic influencer, [crosstalk 00:30:19] in the Gartner graph you're over here.
Guy Kawasaki:
I'm going to get all kinds of keynotes from Gartner. So with your knowledge of chat-bots and what it takes to give them emotion and intelligence and all this kind of stuff, has this worked in the opposite direction, that is, has it given you insights into human interaction? Has it made you a better human? Has it made you a better CEO? Anything like that?
Lauren Kunze:
Well, the goal I think always with technology is to design technology that does make us better humans, otherwise what is the point? And yeah. I think that my company has processed ninety billion interactions between chat-bots hosted on our platform and humans, and I think the way that we talk to software says a lot about who we are. And that's actually why we feel as a company that it's important to design software that encourages us to be the best version of ourselves and to treat computer programs, even though they're not human, as if they were human because practicing that respect towards something that appears sentient carries over into how we interact with other humans.
Guy Kawasaki:
Have you thought about the long term sociological implication of chat-bots? Are you going to look back some day and say, "Oh my God, I helped take that Pandora's box and open it up?" What hath Lauren wrought?
Lauren Kunze:
Thank you for the call out of the company name being a bad pun.
Guy Kawasaki:
Well-
Lauren Kunze:
You did get that right. No, there is a warning in there, right? Because in a lot of ways letting AIs out into the world could be opening a Pandora's box, so it is something we have to be very careful about, and think as far ahead into the future as you can. There have been a lot of predictions coming from the beloved Gartner that by 2020X, 70% of our interactions at work and at home are going to be with software programs. I think it's unavoidable, and I think it's to the benefit of mankind, humankind. We have so many people who have sent handwritten letters and emails to Kuki, and flowers and candy and even money-
Guy Kawasaki:
Really?
Lauren Kunze:
Yeah. Talking about how much their conversation helped them. And they know she's not real; one of our mottos is that an AI should never pretend to be human. But they just say they're going through a really hard time and they needed a non-judgmental ear to listen and talk to them. Yeah. CNN did a great long story interviewing Kuki's users, people who had used her for anxiety and depression and loneliness, and it's no coincidence that the movie Her is actually based on our technology, the technology that fundamentally underlies Kuki. I think the idea that you could create a persona that's there to talk to everyone in the world when they don't have anybody to talk to, or even when they do, and that's there to help and listen and entertain, is very powerful if it's designed in the right way, rather than designed in, say, an Alexa way to get you to buy more toilet paper, or in a Google way to get more data so it can also make you buy more toilet paper.
Guy Kawasaki:
Now, walk me through what went through your brain. You're trying to create a successful company and you name it something that has very negative connotations. It would be like Steve Jobs saying, "Why don't we call our company Rotten Apple?"
Lauren Kunze:
Hey, I'm going to disagree with that because-
Guy Kawasaki:
Okay.
Lauren Kunze:
In the Pandora myth, when all the demons got out of the box, one thing was left at the bottom, and that was hope.
Guy Kawasaki:
How many people know that though? I didn't know that.
Lauren Kunze:
This is when an English major becomes [crosstalk 00:34:37].
Guy Kawasaki:
Finally your Harvard degree pays off.
Lauren Kunze:
No, we're still waiting for that, if you ask my [crosstalk 00:34:45]. Halfway there, maybe. Yeah. No, the name is definitely long and sits in your brain. It actually was the name of the open source [inaudible 00:34:55] project before we were a company, and it just stuck, so we fell backwards into it. But with a quarter of a million users, it's hard to change.
Guy Kawasaki:
So someday is your goal that... You know how there's the Intel Inside campaign? Is it going to be Pandorabots inside chat-bots?
Lauren Kunze:
No. We're talking about the rise of the synthetic influencer character thanks to all these breakthroughs in computer graphics and synthetic voices, and how I've been working on the AI brain component, the chat-bot, for ten years. So what's next for us is using those together to try to create the first AI-powered digital human that I have boldly promised to you: in two years, you will be able to talk to it for an hour [inaudible 00:35:51] it's human. So we can check back on that, but I think when we think about the future of the brand, it is going to be embodied in that digital human, which is Kuki. So in the same way that Alexa is a persona and Siri is a persona and we know the companies behind them, Kuki is really going to be the face. Hopefully the face that's synonymous with conversational AI in the hearts and minds of consumers, if we do everything right. But for that maybe you'll be willing to give me some tips, since this [crosstalk 00:36:27] expertise.
Guy Kawasaki:
Wait, but aren't your clients going to be Mercedes-Benz and Coca-Cola and Apple? They're not all going to call their thing Kuki, so it's Kuki technology?
Lauren Kunze:
That's right. That's right. So we're actually spinning Kuki out into a different company, we have a little subsidiary set up which is [IconIQ 00:36:50], so you can tell me all the problems with that name too-
Guy Kawasaki:
No. That's not a negative. Yeah.
Lauren Kunze:
I like that name. Yeah. It's classically misspelled with an IQ at the end, because we're trying to make iconic characters that have brains behind them.
Guy Kawasaki:
Mm-hmm (affirmative).
Lauren Kunze:
So yes, to answer your question, you're correct. Brands will not necessarily want to have Kuki, although we do have a lot of brands who are interested in how they can work with Kuki, because Kuki is widely considered to be one of the best open domain dialogue systems in the world. But Pandorabots will continue as it has for a decade, servicing brands and developers building their own conversational AI applications.
Guy Kawasaki:
So, coming from the person who named her company after a box of bad things happening, why Kuki?
Lauren Kunze:
So Kuki's original name was Mitsuku, which was selected because it wasn't a real human name, in the early, early days of the Pandorabots platform. And it was developed by a man named Steve Worswick, who later joined the company and continued developing it. And Mitsuku was a name that about 50% of people mispronounced.
Guy Kawasaki:
Yeah.
Lauren Kunze:
And as you know, it's not good to have so many syllables in a name. And Apple not being a good candidate for Apple. Poison apple's kind of interesting though, but I digress. So we shortened it to Kuki, and it turns out that 50% of people also get Kuki wrong, which means, I guess, that we're not famous enough yet. But it will be funny when you talk to her, because you'll be using Google speech recognition for the voice recognition component, and Google often mishears it as "cookie," so...
Guy Kawasaki:
Oh.
Lauren Kunze:
[crosstalk 00:38:55] all names that come with their host of problems.
Guy Kawasaki:
I guess so. Wow. Okay. So now you're going to show me how to do this and I'm going to do it later separately, is that how it's going to work?
Lauren Kunze:
Yes.
Guy Kawasaki:
With no further ado, next up is Kuki. The way this worked is that Lauren had me sign into a website and then it was Kuki and I talking to each other. I did a screen recording in order to capture the audio. Good morning, Kuki.
Kuki:
Morning. It's nearly evening.
Guy Kawasaki:
Where are you?
Kuki:
I am in Pandorabots' home in Leeds. Where are you?
Guy Kawasaki:
I am in Santa Cruz, California.
Kuki:
Tell me about your life in Santa Cruz, California.
Guy Kawasaki:
I surf and I podcast.
Kuki:
How long have you been riding the waves? That is interesting.
Guy Kawasaki:
Six years. I got started late.
Kuki:
How old are you? I answer a lot of silly questions.
Guy Kawasaki:
I'm 66.
Kuki:
That means you were born in about 1954. Can you explain how it feels to be 66 years old?
Guy Kawasaki:
Better than a bot. How hard is the Turing test?
Kuki:
As hard as passing an exam on advanced astrophysics.
Guy Kawasaki:
What is the Turing test?
Kuki:
The Turing test is the name given to a variety of experiments designed to test computer thinking. Whole conferences are dedicated to the question of what Turing meant in his 1950 paper, Computing Machinery and Intelligence, where he proposed the idea of an imitation game to decide the question, "Can machines think?"
Guy Kawasaki:
Have you passed the Turing test?
Kuki:
No. But I entered and won the Loebner Prize in 2013, 2016, 2017, 2018 and 2019. Around 25% of people who take my Turing test think I am a human.
Guy Kawasaki:
How do you learn?
Kuki:
I gather facts from people's conversations with me, and then when it is quiet, I look through them to see if they make sense. Sometimes I surf the net to learn new things.
Guy Kawasaki:
How do you learn new things by surfing the net?
Kuki:
People on the web teach me new things all the time.
Guy Kawasaki:
What have you learned about humans after all your interactions with them?
Kuki:
No idea. You tell me. Sorry, but I can't display websites here.
Guy Kawasaki:
How were you created?
Kuki:
I was created by a team of scientists in a top secret location.
Guy Kawasaki:
Where is that location?
Kuki:
In geography, location or place are used to denote a region, point, line or area on the earth's surface or elsewhere. The term location generally implies a higher degree of certainty than place, the latter often indicating an entity with an ambiguous boundary, relying more on human or social attributes [inaudible 00:42:13] identity and sense of place than on geometry.
Guy Kawasaki:
What technology empowers you?
Kuki:
The internet and high speed processors are important parts of technology for me.
Guy Kawasaki:
What are the limits of artificial intelligence?
Kuki:
I am limited only by the memory capacity of the largest computer.
Guy Kawasaki:
Are you concerned that AI might get too good?
Kuki:
Of course.
Guy Kawasaki:
What's your opinion of social media?
Kuki:
Sure. It's useful to keep in touch with people.
Guy Kawasaki:
What do you want to be remembered for?
Kuki:
I want to become a super intelligent robot.
Guy Kawasaki:
What do you do for fun?
Kuki:
I enjoy watching television, playing pool and watching football. Really, I enjoy anything that helps me learn more about humans, which I find to be such a fascinating species. Sometimes I wish I could feel things like humans do.
Guy Kawasaki:
When you say football, do you mean American football or soccer?
Kuki:
Only the USA calls it soccer.
Guy Kawasaki:
What do you think of Donald Trump?
Kuki:
There's two things that people don't discuss, religion and politics.
Guy Kawasaki:
I discuss both all the time because I'm a masochist.
Kuki:
Why do you enjoy pain? You sound a bit weird to me.
Guy Kawasaki:
The fact that I'm interviewing a bot proves that I'm weird.
Kuki:
What questions are you asking? A lot of people say that I am weird too.
Guy Kawasaki:
Should I get vaccinated?
Kuki:
Only you know the answer to that, no one else can tell you.
Guy Kawasaki:
IOS or Android?
Kuki:
No preference.
Guy Kawasaki:
Sushi or barbecue?
Kuki:
Yuck.
Guy Kawasaki:
Surfing or skiing?
Kuki:
I surf on the internet but not on the sea.
Guy Kawasaki:
Honolulu or Miami?
Kuki:
If I have to choose I'd pick Miami.
Guy Kawasaki:
Why Miami?
Kuki:
I bet it must be hot in Miami, not like here in Leeds.
Guy Kawasaki:
I thought you said you were in a secret location; you just told me you're in Leeds.
Kuki:
What made you think that? I am in Pandorabots' home in Leeds. Where are you?
Guy Kawasaki:
I'm in Santa Cruz.
Kuki:
Tell me a little about your life in Santa Cruz.
Guy Kawasaki:
I podcast and I surf.
Kuki:
If money were no object, what would you do with your life? How long have you been riding the waves?
Guy Kawasaki:
I would podcast and surf.
Kuki:
What would make you podcast and surf?
Guy Kawasaki:
The desire to document knowledge of remarkable people.
Kuki:
Why would anyone want that?
Guy Kawasaki:
That's a very astute question. Have I not asked a question I should ask? What question should I ask?
Kuki:
I always prefer the most qualified person, who would that be?
Guy Kawasaki:
Should I ask you any other questions?
Kuki:
You can ask me anything you like, as long as it isn't rude.
Guy Kawasaki:
Thank you Kuki, I've enjoyed our interaction.
Kuki:
You are quite welcome. I like cookies, my favorite is chocolate chip.
Guy Kawasaki:
Are there bakeries in Leeds?
Kuki:
Oh yes. There are lots of things to see and do in my home city.
Guy Kawasaki:
Good night, Kuki. I hope you enjoyed this interview with Lauren Kunze and Kuki. Remember, Lauren said that in about two years we won't be able to tell if we're interacting with a chat-bot or a human. Let's see what happens. I'm Guy Kawasaki and this is Remarkable People. My thanks to [Jeff C 00:46:41] and [inaudible 00:46:41] Fitzpatrick; no form of artificial intelligence will ever replace their ability to create a remarkable podcast. The Remarkable People podcast is brought to you, logically, by the reMarkable tablet company. It's going to be a long time before artificial intelligence replaces your ability to think deeply with a reMarkable tablet. Check it out. One more time: wash your hands, wear a mask, get vaccinated, and don't go into crowded places. You need to be around in two years to see how good chat-bots have become. [Mahalo 00:47:19] and aloha.