Welcome to Remarkable People. We’re on a mission to make you remarkable. Helping me in this episode is Sandra Matz.

She’s revolutionizing our understanding of how digital footprints reveal our personalities and shape our futures.

Sandra isn’t just another academic; she’s a thought leader at Columbia Business School who combines psychological insights with big data to reveal surprising truths about human behavior. Through fascinating examples—from how companies predict your personality through social media posts to how the use of first-person pronouns might indicate emotional distress—she illuminates the hidden patterns in our digital lives.

In our conversation, Sandra shares startling insights about data privacy that go far beyond the usual discussions of targeted ads. She explains why the information we casually share online could have serious implications in an uncertain future, drawing powerful parallels to historical events that demonstrate why data privacy isn’t just about convenience—it’s about security.

But Sandra isn’t just about identifying problems. She introduces innovative solutions like federated learning and data co-ops that could revolutionize how we handle personal information. She also provides fascinating perspectives on political messaging and psychological targeting that reveal how campaigns can speak to different audiences while promoting the same policies.

Whether you’re a technology enthusiast, privacy advocate, or simply someone who uses social media, this episode will transform how you think about every click, purchase, and search you make online. Sandra’s insights aren’t just remarkable—they’re essential for understanding and protecting ourselves in our digital world.

LISTEN TO THE EPISODE HERE

Please enjoy this remarkable episode, Sandra Matz: The Personal Data Privacy Crisis.

If you enjoyed this episode of the Remarkable People podcast, please leave a rating, write a review, and subscribe. Thank you!


Transcript of Guy Kawasaki’s Remarkable People podcast with Sandra Matz: The Personal Data Privacy Crisis.

Guy Kawasaki:
Hi. I'm Guy Kawasaki. This is the Remarkable People Podcast. As you well know, we are on a mission to make you remarkable, and the way we do that is we bring you remarkable guests who explain why they're remarkable, and how they're remarkable, and their remarkable work.
Today's special guest is Sandra Matz. She's a professor at the Columbia Business School, and she's going to talk to us about psychological targeting and changing mindsets. Hold your book up, Sandra. Let's get this book out of the way here.

Sandra Matz:
It's in the back here.

Guy Kawasaki:
We want people to buy that book. All right?

Sandra Matz:
I don't even have it with me, but it's in the back. It's yellow and blue. Hard to miss.

Guy Kawasaki:
All right. Congratulations. Shipping a book is a big, big accomplishment. Trust me, I know this firsthand, so.

Sandra Matz:
I know it feels good when it's out, even though I had a great time writing it. I think I probably enjoyed it a lot more than other authors told me I would, so I really enjoyed the process.

Guy Kawasaki:
Yeah. Well, I have written seventeen books, and I have told people sixteen times that I am not writing another book.

Sandra Matz:
Good luck with that one.

Guy Kawasaki:
Yeah. Exactly.

Sandra Matz:
We're already placing the order for number eighteen if that's the case.

Guy Kawasaki:
Alrighty. So, first of all, if you don't mind, let me just tell you something off the wall: your story about how you met your husband at that speaking event was the closest thing to porn in a business book that I have ever read.

Sandra Matz:
I spared you the details. There's actually a lot more to this story. It's a good one.

Guy Kawasaki:
Well, I mean, I was reading that. I said like, "Man, where is this going? Is she going to have this great lesson about how to tell men to stick it and get out of my face?" Then, I keep reading, and it says, "Oh, and the night went very, very well." Like, "What?"

Sandra Matz:
It's such a fun anecdote in my life, how I met him. So it was just like a conference. For the people who haven't read the book, he was late, and I was like, "What a jerk," and I had written him off. Then, as the night progressed and I learned more about him by spying on him, I was like, "Interesting guy. I think I'm going to give him a second chance." We're married. We have a kid now who's one year old. So it all worked out.

Guy Kawasaki:
Is he still meticulously neat, or was that just a demo and this is the real thing now?

Sandra Matz:
No, no, no. So, yeah, as part of the story, it's like one of the first things I learned about him is that I think he's borderline OCD because he just sorts everything. He's like the person who sorts his socks by color. We just moved apartments, which, with a one-year-old, is not the most fun thing to do in the world, and there were boxes everywhere.
You could barely walk around the apartment. I just opened one of the drawers, and he had arranged the cutlery to perfection. I'm like, "There's a hundred thousand boxes in this place. I can barely find anything for the baby, but I'm really glad that you spent at least an hour perfecting the organization of the cutlery." So, yeah, he still is.

Guy Kawasaki:
I hope your new place has a dishwasher so he can load the utensils in the tray.

Sandra Matz:
Tell me about it. That's exactly what happens. Yeah. I'm not allowed to touch the dishwasher anymore because I don't do it perfectly. So you're spot on. Yeah.

Guy Kawasaki:
So, you listeners out there, basically, we have an expert in psychological targeting, and now she's explaining how she had absolutely no targeting in meeting her future husband, right?

Sandra Matz:
I think I nailed it from the beginning. I looked at his place, I looked at him being put together, and it gave me a pretty good understanding, I think, of who he was. I feel like I know what I signed up for.

Guy Kawasaki:
Okay. So this is proof that her theories work. So I've already said this word, "psychological targeting," twice. This is an easy question to start you off now that we got past the porn part of this podcast which is, from a psychological targeting perspective, what's your analysis of the 2024 election?

Sandra Matz:
It's a, I mean, interesting one. So psychological targeting typically looks at individuals. So it's trying to see what we can learn about the psychological makeup of people not by asking them questions, but really observing what they do. Right? You can imagine in the analog world, I might look at how someone treats other people, whether they're as organized as my husband is, and I think you can learn a lot by making these observations.
That's true in the offline world. It's also true in the online world. I think if you just look at the presidential candidates, the way that they talk, if Trump writes in all caps all the time and doesn't necessarily give it a second thought before something comes out on Twitter, I think that is an interesting glimpse into what might be going on behind the scenes.

Guy Kawasaki:
Do you think that his campaign did really great psychological targeting of the undecided in the middle, or from an academic perspective as a case study, how would you say his campaign was run?

Sandra Matz:
I talk a lot about psychological targeting because for me, it's interesting to understand how the data translates into something that we can make sense of as humans. Right? So if I get access to all of your social media data, an algorithm might be very good at understanding your preferences and motivations, and then play into these preferences.
But I as a user, as a human can't really make sense of a million data points at the same time. If I translate it to something that tells me whether you're more impulsive, or more neurotic, or more open-minded, that just goes a long way in saying, "Okay. Now, I know which topics you might be interested in the context of an election," or how I might talk to you in a way that most resonates.
Now, politics is an interesting case because ideally, a politician would go knock on every door, have a conversation with you about the stuff that you care about, and obviously, they don't have the time. So there's, I think, a lot of potential in using some of these tools to make politics better. But obviously, I think the way that some of these tools were introduced in the context of the 2016 election really shows the darker side, and I don't know if they're using any of these tools on the campaign trail.
I think there are many ways in which you can use data to drive engagement that's not necessarily based on predictions of your psychology at the individual level, but certainly, this idea that the more we know about people and their motivations, their preferences, dreams, fears, hopes, aspirations, you name it, the easier it is for us to manipulate them.

Guy Kawasaki:
Well, in politics as well as marketing, which you bring up in your book, I got the feeling that what you're saying is that you psychologically target people with different messages, but you could have the same product. So, in a sense, you're saying that yes, with the same product, whether it's Donald Trump, or an iPhone, or whatever, a Prius, you can change your messaging to make diverse people buy the product. So did I get that right, or am I imagining something that's nefarious, actually?

Sandra Matz:
I think it depends on how you think about this because the fact is that we talk to people in different ways all the time. So imagine a kid who wants the same thing. The kid wants candy. The kid knows exactly that they should talk to their mom in one way and that they should talk to their dad in a different way.
So the goal is exactly the same. The goal is to get the candy, but we're so good as humans at making sense of who's on the other side, understanding what makes them tick, "How do I best persuade them to buy something?"
The same is true, I think, in politics and marketing. The more that we understand where someone is coming from and where they want to be in the end, the easier it is for us to sell a product. Right? So products have the benefit that it's not just what you buy. Right? A lot of the time, we buy products because they have this meaning to us. They help us express ourselves. They serve a certain purpose.
If we can figure out what the purpose of a camera is for a certain person, what the purpose of the iPhone is for them, or why people care about immigration, why they care about climate change, is it because they're concerned about their kids, is it because they're concerned about their property, then I think we just have a much easier way of tapping into some of these needs.
Whether that's offline, when we, again, talk to our three-year-old, not in the same way that we talk to our boss and our spouse, or whether that's marketers doing that at scale, it's really the more you understand about someone, the more power you have over their behavior.

Guy Kawasaki:
So are you saying that, at an extreme, you could say to a Republican person that the reason why we have to control the border is physical security, whereas to a liberal, there's a different message, but in both cases, you want to secure the border, one for maybe job displacement, another for security. I mean, it would be different, but the same product in a sense.

Sandra Matz:
Yeah. So 100 percent. There's all of this research, and this actually is not my own. It's very similar to psychological targeting. In that space, it's usually called moral reframing or moral framing. So the idea is that I first try to understand your set of moral values. Right? There's a framework that describes these five moral values, the way that we think about what's right or wrong in the world. That's how I think about it myself. They are loyalty, fairness, care, purity, and authority.
What we know is that across the political spectrum, so from liberal to conservative, people place different emphasis on some of these values. So if you take a liberal, typically, they care about care and fairness.
So if you make an argument about immigration again or climate change, it doesn't matter, that's tapping into these values, you're more likely to convince someone who's liberal. Now, if you take something like loyalty, authority, or purity, you're more likely to convince someone who's more conservative.
For me, the interesting part is that as humans, we're so stuck with our own perspective. Right? If I, as a liberal, try to convince a conservative that immigration might be a good thing, I typically make that argument from my own perspective. So I might be very much focused on fairness and care, and it's just not resonating with the other side because it's not where they are coming from.
Algorithms, because they don't have an incentive, they don't necessarily have their own perspective on the world that's driven by ideology. It's oftentimes much easier for them to say, "I try and figure out what makes you care about the world, what makes you think about what's right or wrong in the world, and now I'm going to craft that argument along those lines."
What's interesting for me is that depending on how you construe it, it can either be seen as manipulation, so I'm trying to convince you of something that you might not otherwise believe, but it could also be construed as I'm really trying to understand how you think about the world, I'm really trying to understand and engage with you in a way that doesn't necessarily come from my point of view, but is trying to take your point of view. So it really has, for me, these two sides.
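
A minimal sketch of the moral-reframing mechanic Sandra describes, assuming you already have a predicted score for each moral foundation: pick the message frame that matches the foundation a reader weights most heavily. The foundation names come from the framework she mentions; the example frames, scores, and function are invented for illustration, not anyone's actual campaign material.

```python
# Hypothetical sketch of moral reframing: choose a message frame that matches
# the moral foundations a reader is predicted to emphasize. The frames below
# are invented examples, not real campaign copy.

FRAMES = {
    "care":      "Climate action protects children from harm.",
    "fairness":  "Everyone deserves an equal shot at clean air.",
    "loyalty":   "Protecting our land is part of who we are as a nation.",
    "authority": "Respected military and faith leaders urge us to act.",
    "purity":    "Keep our water and air pure and uncontaminated.",
}

def pick_frame(foundation_scores: dict[str, float]) -> str:
    """Return the frame matching the reader's highest-scoring foundation."""
    top = max(foundation_scores, key=foundation_scores.get)
    return FRAMES[top]

# A reader predicted to weight loyalty and authority more heavily, a pattern
# the research Sandra cites associates with conservatives, gets that frame.
print(pick_frame({"care": 0.2, "fairness": 0.3, "loyalty": 0.9,
                  "authority": 0.7, "purity": 0.5}))
```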

Guy Kawasaki:
So I could say to a Republican, "The reason why you want to support the H-1B visa program is because those immigrants have a history of creating large companies, which will create more jobs for all of us," which is a very different pitch?

Sandra Matz:
Yeah, and so in addition to the fact that we can just tap into people's psychology, there's also this research that I love. It was done, I think, in the context of climate change, but it's looking at what people think the solutions to problems are and how that relates to what they believe anyway, right?
If I tell you, "Well, solving climate change means reducing government influence. It means reducing taxes," then suddenly, Republicans are like, "Oh my god, climate change is a big problem because the solutions are very much aligned with what I believe anyway."
If you tell that to Democrats, they're like, "Actually, it's not such a big deal because I don't really believe in the solution." So the way, I think, that we play with people's psychology and how they think about the world and show up in the world just means that oftentimes, it gives us a lot of power over how they think, feel, and behave.

Guy Kawasaki:
Another point that I hope I interpreted correctly is I've been trained so long to understand the difference between correlation and causation, right? So if you wear a black mock turtleneck, so did Steve Jobs, so you should wear a black mock turtleneck because you'll be the next Steve Jobs. Well, it didn't quite work out that way for Elizabeth Holmes, but I think you take a different direction.
I just want to verify this. So you don't really discuss correlation versus causation. In a sense, what you're saying is that there doesn't need to be a causative relationship if there is a predictive relationship that you can harness. So, I don't know. If for some reason, we noticed a lot of people with iPhones buy German cars. Well, that's predictive. I don't have to understand why that's true.

Sandra Matz:
Yeah. No. Totally. I'll give you an example that I think is interesting. So one of the relationships that I still find fascinating that we observe in the data that I don't think I would've intuited even as a psychologist is the use of first-person pronouns. People post on social media about what's going on in their life, and I remember being at this conference.
It's a room full of psychologists, and this guy who was really a leading figure, James Pennebaker in the space of natural language processing, he comes up, and he just asks the audience, "What do you think the use of first-person pronouns? So just using 'I,' 'me,' 'myself' more often than other people, what do you think this is related to?"
I remember all of us sitting at a table, and we're like, "It's got to be narcissism. If someone talks about themself constantly, that's probably a sign that someone is a bit more narcissistic and self-centered than other people." Turns out that it's actually a sign of emotional distress. So if you talk a lot about yourself, that makes it more likely that you suffer from something like depression, for example.
Now, taking a step back, it actually makes sense, right? If you think back to the last time that you felt blue, or sad, or down, you probably were not thinking about how to fix the Southern border or how to solve climate change. What you were thinking about is like, "Why am I feeling so bad? Am I ever going to get better? What can I do to get better?" This inner monologue that we have with ourselves just creeps into the language that we use as we express ourselves on these social platforms.
Now, the causal link is not entirely clear, right? It could be that I'm just using a lot more first-person pronouns because I have this inner monologue. But what you see in the language of people who are suffering from emotional distress is all of these physical symptoms, so just being sick, having body aches.
Again, it's not entirely clear if maybe you're having a hard time mentally because you're physically sick, but also maybe you're physically sick because you're having a hard time with the problems that you're dealing with.
So, on some level, I don't even care that much, right? If I'm just trying to understand and say, "Is there someone who might be suffering from something like depression who's currently having a hard time regulating their emotions?"
I don't necessarily care if it's going from physical symptoms to mental health problems or the other way. What I care about is if I see these words popping up or if I see some of these topics popping up, that's an increase in the likelihood that someone is actually having a hard time right now.
Now, I think what is interesting is that the more causal these explanations and relationships get, oftentimes, the more stable they are.
So if it's a causal mechanism, first of all, it allows us to understand something about interventions, like how we actually help people get better, and causal relationships are also, oftentimes, the ones that last longer because it's not just a fluke in the data that maybe goes this direction or the other, but something that is really driving it on a more fundamental level.
So you're absolutely right in that oftentimes, when we think of prediction, we don't need to understand which direction it goes in. It's still helpful to know if you think of interventions.
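
As a toy illustration of the linguistic signal Sandra describes, the sketch below computes the rate of first-person singular pronouns in a text. The pronoun list and the example sentence are assumptions for illustration; as she stresses, this is a noisy population-level signal, not a diagnostic.

```python
import re

# Toy sketch: rate of first-person singular pronouns in a text.
# The pronoun list is an illustrative assumption; a higher rate only shifts
# a probability on average, it does not diagnose anything.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON for w in words) / len(words)

print(first_person_rate("I keep wondering why I feel this way and what I can do."))
```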

Guy Kawasaki:
So, at a very simplistic level, could you make the case to a pharmaceutical company, "Look at a person's social media, and if the person is saying 'I' a lot, sell them some Lorazepam or some anti-anxiety drugs?" Is it that simple?

Sandra Matz:
I would personally probably not go to the pharma companies and make that proposition, but it is that simple. Again, one of the points that I make in the book that is super important to me is that those are all predictions with a lot of error, right? So it means that on average, if you use these words more, you're more likely to suffer from emotional distress. That doesn't mean that it's deterministic. There's a lot of error at the individual level.
So if I'm a pharma company and I want to sell these products, yeah, on average, I might do better by targeting these people, but it still means that we're not always going to get it right. Then, on the other side, what is interesting for me is if you think about it not from the perspective of a pharma company, but from the perspective of an individual, I think there's ways in which we can acknowledge the fact that it's not always perfect, right?
You could just have this early warning system for people who know, for example, that they have a history of mental health conditions, and they know that it's really difficult, once they're at this level of depression, to get out.
So they could have something on their phone that just tracks their GPS records and sees that they're not leaving the house as much anymore, less physical activity, more use of first-person pronouns, and it almost has this early warning system. It just puts a flag out and says, "It might be nothing. It's not a diagnostic tool, there's a lot of error, but we see that there's some deviations from your baseline. Why don't you look into this?"
For me, those are the interesting use cases where we involve the individual, acknowledging that there are mistakes that we make in the predictions, but we're using it to just help them accomplish some of the goals that they have for themselves.
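
A minimal sketch of the opt-in early-warning idea Sandra outlines: compare this week's behavioral signals against a person's own baseline and raise a flag on large deviations. The signal names, numbers, and two-standard-deviation threshold are illustrative assumptions, not a clinical rule.

```python
from statistics import mean, stdev

# Hypothetical early-warning sketch: flag when this week's behavior deviates
# strongly from the user's own baseline. Signals and the threshold of two
# standard deviations are illustrative assumptions, not a diagnostic tool.

def deviation_flags(baseline: dict[str, list[float]],
                    this_week: dict[str, float],
                    threshold: float = 2.0) -> list[str]:
    flags = []
    for signal, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(this_week[signal] - mu) / sigma > threshold:
            flags.append(signal)
    return flags

baseline = {
    "places_visited_per_day": [6, 7, 5, 6, 8, 7],
    "first_person_pronoun_rate": [0.04, 0.05, 0.04, 0.05, 0.04, 0.05],
}
this_week = {"places_visited_per_day": 1, "first_person_pronoun_rate": 0.11}

# Both signals deviate strongly here, so both get flagged: "it might be
# nothing, but it deviates from your baseline; maybe look into it."
print(deviation_flags(baseline, this_week))
```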

Guy Kawasaki:
So, speaking of interesting use cases, would you do the audience a favor and explain how you helped the hotel chain optimize their offering? Because I love that example.

Sandra Matz:
Yeah. It was one of the first projects and industry collaborations that we did when I was still doing my PhD. There's many reasons for why I actually liked the example, but the idea was that we were approached by Hilton, and we worked with a PR company.
The idea of Hilton was, "Can we use something like psychological targeting?" So really tapping into people's psychological motivations, what makes them tick, what makes them care about vacations, and so on to make their campaigns more engaging and then also, sell vacations that really resonated with people.
What I like about the example is that Hilton didn't say, "We're just going to run a campaign on Facebook and Google where we just passively predict people's psychology, and then we try to sell them more stuff."
They turned it into this mutual two-way conversation where they said, "Hey, we want to understand your traveler profile, and for us to be able to do that, if you connect with your social media profile, we can run it through this algorithm that actually we don't control. The University of Cambridge is doing it. We don't even get to see the data, but what we can do is we can spit out this traveler profile and then make recommendations that really tap into that profile."
So it was this campaign, and you can imagine that doing that increased engagement. People were excited about sharing it with friends. It was essentially good for the business's bottom line, but it also gave, I think, users the feeling that it's a genuine value proposition. So there was a company that operated, first of all, with consent because it was all, "It's up to you whether you want to share the data or not. Here's how this works behind the scenes. Here's what we give you in return."
So it was very transparent with the entire process, and it was also transparent in terms of, "Here's what we have to offer. It's by understanding your traveler profile. We can just make your vacation a lot better." So that's one of the reasons for why I like this example a lot.

Guy Kawasaki:
Now, just as a point of clarification, you said the University of Cambridge, right?

Sandra Matz:
Yeah.

Guy Kawasaki:
Which has nothing to do with Facebook and Cambridge Associates, right?

Sandra Matz:
With Cambridge Analytica, it has nothing to do at all. It was funny because I get mixed up with them all the time. Not surprising, because I got my PhD there on the same topic. I mean, our idea originated there, right? The idea that we could take someone's social media profile and predict things about their psychology originated at Cambridge, and that's where it was taken from, but we weren't involved.
For me, it's almost a point of pride, and a point that made me think about the ethics a lot, that we helped the journalists break the story. So when the journalists, first in Switzerland, were working on trying to see what happened behind the scenes of Cambridge Analytica, we just helped them understand the science: how you can get all of the data, how you translate it into profiles. So, yeah, not related to Cambridge Analytica in any way other than trying to take them down.

Guy Kawasaki:
Okay. So I misspoke. I said Cambridge Associates, not Analytica. So if you work for Cambridge Associates, if there's such a thing out there, I correct myself.

Sandra Matz:
I'm not sure.

Guy Kawasaki:
So, listen. This is a very broad question, but in the United States, who owns my data, me or the companies?

Sandra Matz:
Well, as you might have imagined, it's typically not you. So the US is an interesting case because it very much depends on the state that you live in. So Europe, I would say, has the strictest data protection regulation. So they very much try to operate on these principles of transparency and control, and giving you at least the ability to request your own data, to delete it, and so on and so forth.
In the US, California is the closest. So California's CCPA, the California Consumer Privacy Act, is very close to the European Union principles in that you, as the producer of the data, at least get to request your own data, even though companies can also hold a copy.
In most parts of the US, you don't even have a shot at getting the data that you generate because it sits with the companies, and you don't even have the right to request it. So I think we're a very long way from this idea that you're not just the owner of the data, but that you can also limit who else has access to it.

Guy Kawasaki:
So I live in California.

Sandra Matz:
There we go.

Guy Kawasaki:
So you’re telling me there's a way that I could go to Meta, or Apple, or Google and say, "I want my data, and I don't want you selling it?"

Sandra Matz:
That's a great question. What you can do is you can request a copy of your data. That's one thing. In many states, you can't even do that. You might generate a lot of medical data, social media data, and even though you generated it, you can't even request a copy. Now, what you can do is you can go to Meta, request a copy, and you can also request it to be deleted or to be transferred somewhere else.
Now, it's still really hard to say, "I want to use the service or product" without giving up your data. This is one of the things that I think makes it really challenging for people to manage their data properly because it's a binary choice. You can say, "Yeah, I want you to delete my data, and I'm not going to use this service anymore." But then, you also can't be part of Facebook.
Yes, there are certain permissions that you can play with, what is public, what is not public. You can even play around with, "Here's some of the traces that I don't want you to use in marketing."
But typically, and this is true for, I think, still Meta and other companies, it's usually a binary choice. Either you use a product with most of your data being tracked and most of your data being commercialized in a way that you might not always benefit from, but you get to use the product for free, or you don't use it at all.
I think that's the dichotomy that's really hard for the brain to deal with because if the trade-off that we have to make as humans is between service, convenience, the ability to connect with other people in an easy way on one side, and privacy, maybe a risk of data breaches in the future, maybe a risk of us not being able to make our own choices on the other, we're going to choose the convenience.
So I think there's now ways in which you can somehow eliminate that trade-off because I think if that's what we're dealing with, it's an uphill battle.

Guy Kawasaki:
I need to go dark for a little bit here. I read in your book about the example of Nazis.

Sandra Matz:
Yeah.

Guy Kawasaki:
I just want to know, today, could the Nazis go to Facebook, Apple, and Google, and get enough information from the breadcrumbs that we leave to track down where all the Jewish people are? Would that be easy today?

Sandra Matz:
I think it would be incredibly easy, and it's one of these examples in the book that I think is hard to process, and that's why it's so powerful. I teach this class on the ethics of data, and there's always a couple of people who say, "I don't care about my privacy because I have nothing to hide, and the perks that I get, they're so great that I'm willing to give up my privacy."
What I'm trying to say is that it's a risky gamble, but first of all, it's a very privileged position because just because you don't have to worry about your data being out there doesn't mean that it doesn't apply to other people.
So I think in the US, even the Supreme Court decision overturning Roe versus Wade and meddling with abortion rights, I think, overnight, essentially made women across the US realize, "Hey, my data being out there in terms of the Google searches that I make, my GPS records showing where I go, me using some period tracking apps, it's incredibly intimate, and it could, overnight, be used totally against me."
So the example that you mentioned about Nazi Germany is such a powerful one because it shows that leadership can change overnight, and I care so much about it because I, obviously, grew up in Germany. It was a democracy in 1932, and then the next year, it wasn't. What we know is that atrocities within the Jewish community across Europe totally depended on whether religious affiliation was part of the census.
So, you can imagine, if you have a country where whether you're Jewish or not is written in the census, all that Nazi Germany had to do was go to city hall, get hold of that census data, and find the members of the Jewish community. It made it incredibly easy to track them down.
But of course, you don't even need that census data anymore because you can now have all of this data that's out there that allows us to make these predictions about anything from political ideology, sexual orientation, religious affiliation just based on what you talk about on Facebook.
Even, you could make the argument that maybe it's the leaders of those companies handing over the data voluntarily. I think we've even seen in the last couple of days how there's this political shift in leadership when it comes to the big tech companies.
But even if they weren't playing the game, it would've been easy for a government to just replace the C-suite executives with new ones that are probably much more tolerant to some of the requests that they have. I think it's terrifying, and I think it's a good example for why we should care about personal data.

Guy Kawasaki:
Okay. So what you're saying is if I look at pictures of the inauguration, and I see Apple, Google, Meta, Amazon up on stage, and so now the government can say, "According to Apple, you were in Austin, then you landed in SFO, and then according to your Visa statement, you purchased this. According to your phone's GPS, you went to a Planned Parenthood in San Francisco, California. So we suspect you of going out of state to get an abortion, so we're opening up an investigation of you." That's all easy today?

Sandra Matz:
I think it's very easy, and again, I'm not saying that the leaders of those big tech companies are sharing the data right now, but it's certainly possible. For me, there's this thing that I have in the book is data is permanent and leadership isn't, right? So once your data is out there, it's almost impossible to get it back, and you don't know what's going to happen tomorrow.
Even if Zuckerberg is not willing to share the data, there could be a completely new CEO tomorrow who might be a lot more willing to do that. So I think that the notion that we don't have to worry in the here and now about our data being out there is just a very short-sighted notion. Ideally, we can find a system, and I think there are ways now in which we can get some of these perks and some of the benefits, and they come from using data without us necessarily having to collect the data in a central server.

Guy Kawasaki:
Okay. So if I'm listening to this and I'm scared stiff because, yes, you could look at what I do, you could look at whether I went to the synagogue, or I went to the temple, or whatever.

Sandra Matz:
Yeah.

Guy Kawasaki:
So, yeah, and you're right, any of those people could be replaced, and who knows. So then, what do I do?

Sandra Matz:
I do think that people should be, to some extent, scared. So I'm really trying not to say that, with technology, we're all doomed because the data is out there. I think there are many abuse cases, but I do think we should be changing the system in a way that protects us from these abuses.
The one thing that I described in the book, which I think we're actually seeing a lot more of, but just not that many people know of, are these technologies that allow us to benefit from data without necessarily running the risk of a company collecting it centrally.
So what I mean is, there's a technology called federated learning, and the example that I give is: take medical data.
If we want to better understand disease and find treatments that work for all of us, not just the majority of people whom the pharma companies usually collect data on, we'd want to know: given my medical history, given my genetic data, here's what I should be doing to make sure that I don't get sick in the first place, or to treat a disease that's either rare or not as easily understood. We would all benefit from pooling data and better understanding disease.
Now, there's a way in which you can say, "Instead of me sending all of this data to a central server," and now, this entity that collects all of the data, they have to safeguard it. Same way that Facebook is supposed to safeguard your data against intrusion from the government. Instead of having this sit in the central server, what we can do is we can make use of the fact that we all have supercomputers. Right?
That might be your smartphone. Your smartphone is so much more powerful than the computers that we used to launch rockets to space a few decades ago. So what this entity that's trying to understand disease could do is they could essentially send the intelligence to my phone or ask questions from my data and say, "Okay. Here's how we're tracking your symptoms. Here's what we know about your medical history."
But that data lives on my phone, and all I'm doing is I'm sending intelligence to the central entity to better understand the disease. Apple's Siri, for example, is trained that way. So instead of Apple going in, capturing all of your speech data, and centrally collecting it, in which case Apple would be one of these companies that has to protect it now and tomorrow, they just send the model to your phone.
So they send Siri's intelligence to your phone. It listens to what you say. It gets better at understanding. It gets better at responding. Instead of you sending the data, your phone essentially just sends back a better model. It learns, it updates, sends the model back to Siri, and now everybody benefits because we have better speech recognition. That's a totally different system because we don't have to collect the data in a central spot and then protect it.
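
A highly simplified sketch of the federated-learning pattern Sandra describes: the server sends model weights out, each device updates them against data that never leaves the device, and only the updates travel back to be averaged. Real systems, Apple's included, layer on secure aggregation, differential privacy, and much more; this toy linear model only shows the shape of the idea.

```python
# Highly simplified federated-averaging sketch: raw data stays on each device,
# only model updates travel back to the server to be averaged.

def local_update(global_weights: list[float],
                 local_data: list[tuple[list[float], float]],
                 lr: float = 0.05) -> list[float]:
    """One gradient-descent pass over data that never leaves the device."""
    w = list(global_weights)
    for features, target in local_data:
        pred = sum(wi * xi for wi, xi in zip(w, features))
        err = pred - target
        w = [wi - lr * err * xi for wi, xi in zip(w, features)]
    return w

def federated_round(global_weights: list[float],
                    devices: list[list[tuple[list[float], float]]]) -> list[float]:
    """The server only averages the updates; it never sees the raw data."""
    updates = [local_update(global_weights, data) for data in devices]
    return [sum(ws) / len(updates) for ws in zip(*updates)]

weights = [0.0, 0.0]
device_data = [
    [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0)],   # stays on phone A
    [([1.0, 1.0], 3.0), ([3.0, 2.0], 7.0)],   # stays on phone B
]
for _ in range(300):
    weights = federated_round(weights, device_data)

# Converges toward [1.0, 2.0], the weights that fit both phones' data,
# without either phone ever uploading its raw records.
print(weights)
```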

Guy Kawasaki:
But Sandra, I mean, the point that you just made is that, yeah, Tim Cook may be saying that to us now, "We're only sending you the model, and all your data is staying on your phone," but tomorrow, Apple's CEO could have a very different attitude. Right? So how do we know if they're only still sending the model right now?

Sandra Matz:
So I think it's a great question, and it's funny that you mentioned Apple in that space because I think they're thinking about it this way. So, again, I would much rather have Tim say, "We're only going to locally process on your phone." Even if they change it tomorrow, what I'm mostly worried about is that they collect my data today under Tim Cook with the intention of making my experience better.
They collect it today, and then tomorrow, there's a new CEO because now that CEO can just go back into the existing data and make all of these inferences that we talked about that are very intrusive and we don't want to be out there.
At least, even if Apple decides tomorrow to shift from that model to a new one, that's going to be publicly out there. So if that happens, at least people can start from scratch and decide whether they still want to use Apple products or not. My main concern is that all the data gets collected and then leadership changes.

Guy Kawasaki:
Wow. Okay. Speaking of collected data, you mentioned an example of a guy who applied to a store. He took a personality test, and the personality test yielded, let's say, undesirable traits. So he didn't get that job, and that personality test stuck with him and hurt his employment in the future too. So what's the advice? Don't take the personality test or lie on the personality test? What's the guy supposed to do if he's required to take a personality test to apply for a job?

Sandra Matz:
Yeah, and you're really going to all of the dark places, which I think is important because for me, this example, and this one is not even using predictive technology, right? So this one is a guy sitting down and admitting that I think in his case, he was suffering from bipolar disorder.
So it sent his score on neuroticism, which is one of the personality traits that captures how you regulate emotions, through the roof. Because he admitted to that, he was essentially almost excluded from all of the jobs that had a customer-facing interface because companies were worried that he wouldn't be dealing well with people who come and complain.
Now, the reason why I think this example is important is that it just means that who other people think we are closes some doors in our lives. Right? Sometimes it opens doors. If someone thinks that you're the most amazing person and you absolutely deserve a loan, maybe you have opportunities that other people don't have. But oftentimes, the danger comes in when someone thinks that we have certain traits that then would lead to behavior that we don't want to see.
Now, in the context of self-reported personality tests, at least you have some say over what that image is. If you take it to an automated prediction of an algorithm, coming back to this notion that those algorithms are pretty good at understanding our psychology but certainly not perfect, now you suddenly live in a world where someone makes a prediction about you based on the data that you generate.
You never even touch that prediction because you don't even get to see it. They predict that you're neurotic, and maybe they even get it wrong. Maybe you're one of the people where the algorithm makes a mistake and gets it wrong, and now, suddenly, you're excluded from all of these opportunities for jobs, loans, and so on.
So I think, for me, this notion that there's someone who passively tries to understand who you are and then takes action, that, again, sometimes opens doors. Sometimes it's incredibly helpful because maybe we connect you with mental health support. But at other times, it might also close doors in a way that you don't even have insight into. For me, that's the scary part where I feel like we're losing control over, essentially, our lives.

Guy Kawasaki:
Wait, but are you saying that you should refuse to take the personality test, or you should lie?

Sandra Matz:
So in the case of the personality test, first of all, it's not a good practice. So, as a personality psychologist, the way that we think of these personality tests is that it shouldn't be an exclusion criteria. So I think that what they're meant to do is to say, "Here's certain professions that you might just be more suited for," because if you are an introvert who hates dealing with other people and you're constantly at the forefront of a sales pitch, you're probably not going to enjoy it as much.
They were never really meant to say, "You got a low score on conscientiousness, and we're going to exclude you." It's also very short-sighted because technically, what makes a company successful and what makes a team successful is to have many people who think about the world differently.
So I have this recent research that's still very preliminary, but it's looking at startups, and it just looks at how quickly they manage to hire people with all of these different traits. So you can come together, and you can say, "Well, but I think this way, and then you think this way, and we all bring a different perspective to the table." Those startups are usually more successful. So this notion that companies just say, "Here's a trait that we don't want to see," is very short-sighted.
What we do know, and this is, I promise, coming back to your question, is that saying that you don't want to respond to a questionnaire is typically seen as the worst sign. So there was this study where they looked at things that people don't like to admit to. I think it was stuff about health, stuff about people's sexual preferences, and saying, "I don't want to answer the question," is worse than hitting the worst option on the menu.
So I absolutely agree that in that case, the guy essentially didn't have a shot, but the problem is once it's recorded, he didn't even get to take the test again because the results were just shared from company to company.

Guy Kawasaki:
So what I hear you saying is lie.

Sandra Matz:
In this case, frankly, if it had been me, I probably would've lied.

Guy Kawasaki:
Okay.

Sandra Matz:
If the company is making the mistake of using the test in that way, what I would recommend to people taking the test is, yeah, think about what the company wants to hear.

Guy Kawasaki:
Okay.

Sandra Matz:
Which is harder to do with data, by the way. It's funny because oftentimes, when we think of predictions of our psychology based on our digital lives, we think of social media, and it's always, "But I can, to some extent, manipulate how I portray myself on social media." That's true for some of these explicit identity claims that we think about and have control over. There's so many other traces. Take your phone again.
My thing is that I'm not the most organized person, even though I'm German. So I think I was expelled for a reason. I don't organize my cutlery the way that my husband does. Would I admit to this happily on a personality test in the context of an assessment center? Probably not, right? If someone gives me the questionnaire that says, "I make a mess of things," would I be inclined to say, "I strongly agree?" Maybe not because I understand that's probably not what they want to hear.
Now, if they tap into my data, they see that my phone is constantly running out of battery, which is one of these strong predictors of not being super conscientious. I go to the deli on the corner five times a day because I can't even plan ahead for the next meal, and I'm constantly running for the bus.
So if someone was tapping into my data, they would understand 100 percent that I'm not the most organized person. So there's something about this data world and all of these traces that we generate which are, in a way, much harder to manipulate than a question on a questionnaire.

Guy Kawasaki:
Well, and now, people listening to this podcast, they're thinking, "How many times did I use the pronoun 'I?' Oh my god, I'm telling people that I have depression and stuff."

Sandra Matz:
Again, it's not deterministic. So you might be using a lot of "I" because something happened that you want to share. It's just on average, it increases your likelihood.

Guy Kawasaki:
Yeah. Okay. So you had a great section about how, by looking at what people have searched Google for, you can tell a lot about a person or at least draw a conclusion. So do you think prompts will have the same effect, like what I ask ChatGPT is a very good window into who I am?

Sandra Matz:
I think so. Right? I don't necessarily think it's prompts. I think it's questions that we have. If you think about Google, there's these questions that I type into the Google search bar that I wouldn't feel comfortable asking my friends or even sharing with my spouse.
So it's like this very intimate window into what is top of mind for us that we might not feel comfortable sharing with others. Yeah. I was actually part of this, which I thought was so interesting. It was like a documentary, but artistic.
They found a person online, they looked at all of her Google searches, and then they recreated her life, all the way from "Here's the job that she took, suffered from anxiety and the feeling that she wasn't good enough in the space that she was working in," all the way to her becoming pregnant and then having a miscarriage. They recreated her life with an actress, and then, at some point, they bring in the real person.
The person watches the movie, and you can see how, just over time, she realizes just how intimate those Google searches are because what the documentary team had created, the life that they had recreated, was so close to her actual experience, and again, just by looking at her data. So, for me, it was a nice way of showcasing that it's really not just this one data point or collection of data points, but it's a window into our lives and our psychology.

Guy Kawasaki:
Yeah, and not to get too dark, but the CEO of Google was on the stage, right? So what happens when generative AI takes over, and the AI is drafting my email, drafting my responses? To take an even further step, what happens when it's my agent answering for me?

Sandra Matz:
Yeah.

Guy Kawasaki:
Then, is it still as predictive, or will the agent reflect who I really am, or does it throw everything off because it's not Guy answering anymore?

Sandra Matz:
Yeah. So, to me, that's a super interesting question. First of all, in a way, generative AI democratized the entire process. So when I started this research, we had to get a dataset that takes your digital traces, let's say, what you post on social media, and maybe a self-report of your personality, and then we trained a model that goes from social media to your personality.
Now, I can just ask ChatGPT and say, "Hey, here are Guy's Google searches. Here's what he bought on Amazon. Here's what he talked about on Facebook. What do you think his Big Five personality traits are? What do you think his moral values are? What do you think about, again, some of these very intimate traits that we don't want to share?" It does a remarkable job. It's never been trained to do that, but because it's read the entire internet, it's come to understand so much about psychology.
Then, obviously, taking it to the next level, it's not just understanding, but also replicating your behavior. The one thing that I'm most concerned about, aside from manipulation, is just that it's going to make us so boring. These language models are very good at coming up with an answer that works reasonably well, like 80 percent, but it's very unlikely that they come up with something super unique that we've never thought about, something that makes us different from other people.
So I think what happens is that we're just going to see more and more of who the AI believes we are because it's essentially almost like the solidified system of here's who I think Guy is, and now I'm just optimizing.
In the way that humans learn, there's this trade-off between exploitation and exploration. Exploitation is doing the stuff that you know is good for you. So if you think about restaurant choices, you can either go to the same restaurant time and again because you know that you like it. So there's not going to be any surprise. It's going to be a good experience.
But the second part of human learning and experience is the exploration part, and it exposes you to risk because maybe you go to a restaurant, and it turns out to be not great, and you would've been better off going to your typical choice, but maybe you actually also stumble on a restaurant that you love, and for that, you had to take the risk and explore something new.
My worry with these AI systems and most types of personalization is that they very much focus on exploitation. They take what you've done in the past, who they think you are, and they try to give you more of that, but you don't get the fun parts of exploring.
It's like Google Maps is amazing at getting you from A to B most efficiently, but you also never stumble upon these cute little coffee shops that you didn't know were there before because you got lost. For me, that's, in a way, the danger of having these systems replace us. Is that just going to make us basic and boring?
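
The exploration-versus-exploitation trade-off Sandra describes has a classic algorithmic counterpart, epsilon-greedy choice: usually pick the option with the best track record, occasionally try one at random. The restaurant names, ratings, and epsilon value below are illustrative assumptions.

```python
import random

# Epsilon-greedy sketch of the exploration/exploitation trade-off: usually
# exploit the best-known option, sometimes explore at random. All values
# here are illustrative assumptions.

def choose_restaurant(avg_rating: dict[str, float], epsilon: float = 0.1) -> str:
    if random.random() < epsilon:
        return random.choice(list(avg_rating))   # explore: maybe a gem, maybe a dud
    return max(avg_rating, key=avg_rating.get)   # exploit: the reliable favorite

history = {"usual trattoria": 4.5, "new ramen place": 0.0, "corner diner": 3.8}
print(choose_restaurant(history))
```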

Guy Kawasaki:
What if I asked the opposite question, which is: I want to help companies be more accurate in predicting my choices. Right? So I want to tell Google, "Stop sending me world wrestling news in Google News, and stop telling me about the Pittsburgh Steelers, and stop sending me ads for trucks because I don't want a truck and I don't want a Tesla." I want to make the case: what if you want companies to understand you better, then what do you do?

Sandra Matz:
First of all, I think it should be an option, right? So there should be two different modes for you, Guy, that say, "Right now, I'm trying to explore. Right now, I just want to see something that's different to what I typically want. But also, now, I'm in this mode where I just want you to know exactly what I'm looking for, and I don't want you to keep sending me the camera that I haven't been interested in for the last three weeks."
So, in this case, I think what companies can do, which they, I think, oftentimes don't do enough of, is have a conversation with you that allows you to interact with the profile. Right? Most of the time, they just passively say, "Here's who I think Guy is, and now we're optimizing for that profile."
But if they get it wrong, there's no way for you to say, "No, no, no. Why don't you just take out this prediction that you've made because it's not accurate," which is annoying for you because now, as you said, you get ads for wrestling that you might not be interested in at all, and it's also bad for business because now they're optimizing for something that is not who you are.
So I think first of all, give people the choice whether they want to be in an explorer mode or an exploitation mode. Then, second part is even within the exploitation mode where we're just trying to optimize for who we think you are, give people the choice and say, "No, you're wrong. I want to correct that." It's good for the user, and it's good for the company.

Guy Kawasaki:
Well, if anybody out there is listening in and embraces this idea, I suggest you not call it "Exploitation Mode."

Sandra Matz:
Yeah, that's true.

Guy Kawasaki:
Maybe "Optimization Mode" might be more pleasant marketing.

Sandra Matz:
Personalization Mode. Yeah. That's true.

Guy Kawasaki:
Personalization Mode. Yeah. Okay. So, three short tactical and practical questions. Knowing all that you know, and I think we went dark a few times and showed people the risks here, do you use email, messages, WhatsApp, or Signal? What do you use personally?

Sandra Matz:
I mostly use WhatsApp. First of all, it's encrypted, but then it's also just what everybody in Europe uses. So I wouldn't even give myself any credit for that, and it's funny because I think the fact that I've become a lot more pessimistic over the years has to do with my own behavior. So I know that we can be tracked all the time, and I still mindlessly say yes to all of the permissions and so on and so forth.
So I think we just don't have the time and the mental capacity to do it all by ourselves. There's only twenty-four hours in a day, and I'd much rather spend a meal with my family than go through all the terms, and conditions, and permissions. So I think if it's just up to us, it's an unfair battle, and we don't stand a chance.

Guy Kawasaki:
Why of all people in the world would you not default to Signal because it's encrypted, both the message and the meta information?

Sandra Matz:
It's mostly because not that many of my friends are using it. So, again, in this case, it would be a trade-off: I get protected more, but there's also a downside because I can't reach out to the people that I want to reach out to. I feel like if that's the trade-off, the brains of most people will gravitate to, "I'm just going to get all of the convenience that I want."

Guy Kawasaki:
Okay. Second short question is, when you use social media, do you use it like read-only and you don't post, you don't comment, and don't like, or are you all-in on social media and dropping breadcrumbs all over the place?

Sandra Matz:
I think even if you don't use social media, even if I was completely absent from social media, I would still be generating breadcrumbs all the time because I have a credit card, and I have a smartphone, and there's facial recognition. I just don't want people to think that social media is the only way to produce traces.
Now, I don't actively use it as much, but not because I know that I shouldn't be doing it. It's just because it's so much work. I feel like I'd much rather have interesting offline conversations than think about what I should post on X and some of the other ones. So it's a different reason than worries about privacy.

Guy Kawasaki:
Okay. Now, is the logic that yes, Google knows something, Apple knows something, Meta knows something, X knows something, everybody knows something, but nobody knows everything, so the fact that it's all siloed keeps me safe, or is that a delusion?

Sandra Matz:
I think it's probably a delusion. So my argument would be that they have most of these traces. So if you think of applications, again, like when you download Facebook here, it asks you to tap into your GPS records, into your microphone, into your photo gallery. You use Facebook to log in to most of the services that you're using elsewhere.
So they have a really holistic picture of what your life looks like across all of these dimensions. By the way, they also have it for users who don't use Facebook because it's so cheap now to buy these data points from data brokers that if I wanted to get a portfolio, a data portfolio on most of the people, I would be able to get it really cheaply.
That's something that, again, I think most of us or all of us should be worried about, and you do see use cases where policymakers are actually waking up to this reality. There was this case of a judge, actually across the bridge from here in New Jersey, whose son was murdered by a man who had appeared before her in a case in the past.
They found her data online from data brokers, tracked her down, and in this case, killed her son. Biden signed something into effect that now protects judges from having their data out there with data brokers, which makes me think if we do this for judges and we're concerned that we can easily buy data about judges, why not protect everybody else, right?

Guy Kawasaki:
Everybody.

Sandra Matz:
I think there's a good point to be made that data on us is so cheap and available from different sources that even if you don't use social media, it's easy to get your hands on.

Guy Kawasaki:
You introduced a concept in the last part of your book that I don't quite understand. So please explain what a data co-op does.

Sandra Matz:
Yeah. It's one of my favorite parts of the book, actually, because it thinks about how you help people make the most of their data, right? So we've talked a lot about the dark sides, and I think regulation is needed if we want to try to prevent the most egregious abuses, but it doesn't really give you a way of, first of all, managing your data in the absence of regulation.
It also doesn't give you a way to make the most of it in a positive way. So data co-ops are essentially these member-owned entities that help people who have a shared interest in using data to both protect it and make the most of it. So my favorite example is one in Switzerland called MIDATA, and they are focused on the medical space.
So one of the applications that they have is working with MS patients. So patients who suffer from multiple sclerosis, which is one of these diseases that, again, is so poorly understood because it's determined by genetics, and it's determined by your medical history, by your environment. What they do is they have a co-op of people, so patients who suffer from MS and healthy controls that own the data together.
So it's a little bit similar to the financial space where you oftentimes have entities that have fiduciary responsibilities. So they are legally obligated to act in your best interest. So data co-ops are entities that are owned by the members.
They are legally obligated to act in their best interest, and now, you can imagine, in the case of the MS patients, they can pool the data, they can learn something about the disease, and they can also then, in this case, work with the doctors of the patients and say, "Here's something that we've learned from the data. This treatment might be particularly promising for a patient at this stage with these symptoms. Why don't you try this?"
So the people benefit immediately, and also, because they're now together, they can hire experts that help them manage the data and think about, "Well, here are maybe some of the companies that we want to share the data with, but maybe we do it in a secure place that doesn't require us to send all of the data." So these data co-ops, for me, are just a new form of data governance that gives us what I think of as allies.
So if we have a way that we want to use data, we need other people with a similar goal so that we make data, first of all, more valuable, because if I have my data, my medical history, and my genetic data as an MS patient, it doesn't help me at all. I need these other people, but it's not coming together as a pharma company that's grabbing all of this data and then making profits; it's coming together as a community and benefiting directly. So that's what data co-ops are.

Guy Kawasaki:
But a data co-op doesn't exactly solve the problem of all my breadcrumbs on social media, and Apple, and all the other stuff, right?

Sandra Matz:
Yeah.

Guy Kawasaki:
This is for a very specific set of data.

Sandra Matz:
Agreed. So it's not necessarily a specific set of data. You could imagine, in the European Union, where you're allowed to pull your data, you could have a data co-op of people who just pool together their Facebook data, and now they go to Facebook and say, "Hey, look, we're all going to leave if you're not putting in, let's say, technology like federated learning to protect our privacy a bit more."
So I do think that there is also ways in which people could come together and get just a lot more negotiation power at the table. Like, if you go to Facebook and say, "Hey, I'm Guy. I want to force you to do something different," not sure if they're going to listen. If you suddenly have ten million people doing that, you are in a better spot.

Guy Kawasaki:
Okay. I like this idea. Okay. Now, I understand it better. Thank you very much.

Sandra Matz:
Yeah.

Guy Kawasaki:
So, listen. I like to end my podcast with one question that I ask all the remarkable people, and clearly, you've proven you're remarkable with this interview, and that would be, stepping aside, stepping back, stepping up, whatever direction you want to use, what's the most important piece of advice you can give to people who want to be remarkable?

Sandra Matz:
I think it's don't take yourself too seriously. I think some humility and the way that you approach yourself and others goes a long, long way.

Guy Kawasaki:
Alrighty. This is a great episode. Thank you so much, and I hope I didn't go too dark for you, but this is a dark subject, actually.

Sandra Matz:
I do think it is, and I think there's a lot of room for improvement. That's why I care about the topic so much.

Guy Kawasaki:
Alrighty. So, Sandra Matz, thank you very much for being a guest. This has been Remarkable People. I'm Guy Kawasaki, and I hope we helped you be a little bit more remarkable today. So my thanks to Madisun Nuismer, the producer, Tessa Nuismer, our researcher, Jeff Sieh and Shannon Hernandez, who make it sound so great. So this is the Remarkable People Podcast. Until next time. Mahalo, and aloha.