Welcome to Remarkable People. We’re on a mission to make you remarkable. Helping me in this episode is Mike Caulfield.

Mike is a research scientist from the University of Washington’s Center for an Informed Public. He discusses his groundbreaking SIFT methodology and the importance of digital literacy in an era of online misinformation.

Caulfield’s SIFT methodology, which stands for Stop, Investigate the source, Find trusted coverage, and Trace back to the original, provides a practical framework for critically evaluating online content. By employing these strategies, educators and learners can develop the skills necessary to navigate the complex world of digital information and make informed decisions about what to believe.

In this episode, Caulfield also dives into insights from his book Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online, co-authored with fellow Remarkable People guest Sam Wineburg. The book offers valuable guidance on developing critical thinking skills and combating the spread of misinformation online.

Join Guy and Mike as they explore the crucial role of digital literacy in today’s information landscape and learn practical strategies to help you sift through the noise and find reliable, trustworthy sources. This episode is a must-listen for anyone looking to sharpen their media literacy skills and become a more discerning consumer of online content.

Please enjoy this remarkable episode, Mike Caulfield: Verified Methodology for Fighting Misinformation.

If you enjoyed this episode of the Remarkable People podcast, please leave a rating, write a review, and subscribe. Thank you!


Transcript of Guy Kawasaki’s Remarkable People podcast with Mike Caulfield: Verified Methodology for Fighting Misinformation.

Guy Kawasaki:
I'm Guy Kawasaki, and this is Remarkable People. We're on a mission to make you remarkable. We have two ways to help you be remarkable right now. One is our book, Think Remarkable: 9 Paths to Transform Your Life and Make a Difference. I hope you'll read it.
The other path is today's podcast, and we have with us a guest named Mike Caulfield. He is a research scientist from the University of Washington. He works at the Center for an Informed Public. Basically, Mike is renowned for his SIFT methodology, S-I-F-T.
This is a crucial tool in the fight against online misinformation, and it empowers educators and learners to critically assess online content. Let me explain the acronym SIFT. S is for stop. I is for investigate the source. F is for find trusted coverage. And T is for trace back to the original. In November 2023, Mike co-authored Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online, with another Remarkable People guest, Sam Wineburg.
Mike's dedication to digital literacy has not only earned him the 2017 MERLOT Award, but also recognition from top media sources such as The New York Times, NPR, and The Wall Street Journal. So let's welcome Mike Caulfield to Remarkable People. We're going to learn how to SIFT and figure out the truth of what we see and read and hear online. I'm Guy Kawasaki, this is Remarkable People. And here we go. Are you drinking Liquid Death?
Mike Caulfield:
No. I mean, yeah, I guess. Dr. Pepper, is that Liquid Death? Probably.
Guy Kawasaki:
Liquid Death is a brand. I'm not talking about the carcinogens in Dr. Pepper.
Mike Caulfield:
I mean, at some level, probably Liquid Death.
Guy Kawasaki:
All right, let's get serious here because we have to save democracy. So first question, how do you verify news stories?
Mike Caulfield:
So let me just set up a little frame about what we talk about when we're talking about verifying because I think sometimes people have a different conception of what you're doing. Generally, we're looking at a situation where someone has seen something on the internet, that might be a news story, that might be an article, that might be a website, whatever it is. They've seen something on the internet. They've had a reaction to it. Normally that reaction is one of two things.
Either, this is absolutely evidence of everything I believe and this is right, and so forth; or, oh, this is nonsense, this is foolishness, usually because it's something I don't believe. And so the question becomes, is the thing what you think it is? And this is the question we focused on in the book, instead of is it true or false. We accept that you've come to something. You've already had a reaction. By the time you're checking, something's already happened. You already have an impression.
The question isn't, what is this thing? The question is whether your impression was correct or wrong. When we talk about verifying news sources, what we're saying is, you see something on the web and you react to it, maybe because it's called The Mississippi Ranger and you're like, oh, this is a local paper in Mississippi. I hope there's not actually a paper called The Mississippi Ranger. If there is, this is a made-up name. No libel.
Guy Kawasaki:
Wait, I got to go to the domain right now.
Mike Caulfield:
I should have written down some fake names I could use. But you see something called The Mississippi Ranger and you're like, oh, this is just a local paper in Mississippi covering an issue, something that happened there. And so the question is, is it? If that's your impression of it, was your impression correct? And what we suggest on something like that is going to Wikipedia.
So if it's a paper of any size (and actually I worked for a while on a project of getting local newspapers onto Wikipedia and coordinating that), it'll have a Wikipedia page.
You go there and see if there's a paper of this name. If there's not, it doesn't necessarily mean it's not a paper, but you might want to find something else. It might not be your best first stop.
Alternatively, you might go and find that The Mississippi Ranger is one of a set of papers run by a political consultant, part of something we call a pink-slime network. I don't know if you've heard this term, but it's a network of sites that look like they're news-producing sites, but really they're auto-generated, usually for some propaganda end.
It could turn out that's actually being run by a political consultant. And so when you say verify a source, part of what we're saying is, okay, if I thought I was getting this from a local news source and that was behind my impression that, oh, this is really useful evidence for what I believe or don't believe, and it turns out no, actually this is being run by a political consultant, or no, this is just a spam site, or no, no one's ever heard of this, then it's maybe not as useful to me.
And the way you get that context with a news source, when you're checking the source, is to start with Wikipedia. If you can't find something on Wikipedia, type the name of the source into something like Google News. See if it comes up in Google News, and maybe add terms like funding, who funds this, location, basic sorts of things that would give you some context on that source.
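(A quick aside for technically inclined readers: the lookup Caulfield describes can even be scripted. Below is a minimal sketch in Python using Wikipedia's public REST summary endpoint; the requests library and the made-up paper name are illustrative assumptions, and in practice a browser search works just as well.)

```python
# A minimal sketch of the "start with Wikipedia" check, using Wikipedia's
# public REST summary endpoint. A missing page doesn't prove a source is
# fake; it just suggests Wikipedia may not be your best first stop.
import requests  # third-party: pip install requests

def wikipedia_summary(source_name):
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + source_name.replace(" ", "_"))
    resp = requests.get(url, timeout=10)
    if resp.status_code == 404:
        return None  # no article under this exact title
    resp.raise_for_status()
    return resp.json().get("extract")  # lead paragraph of the article

# The Mississippi Ranger is the made-up name from the conversation.
summary = wikipedia_summary("The Mississippi Ranger")
print(summary or "No Wikipedia page found; look for context elsewhere.")
```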
Guy Kawasaki:
Okay, but, I was on the board of trustees of Wikipedia and nobody believes in Wikipedia more than me. All right? But what if somebody says, in response to you saying check Wikipedia, "Oh, anybody can change anything on Wikipedia. Why would you use Wikipedia as your reference when anybody can say anything?"
Mike Caulfield:
Well, as you know, A, that's not really true of Wikipedia. That was true of Wikipedia in 2006. I hope I'm not being unfair here, but I was on Wikipedia in 2006. It was true in 2006. You could get on there. You could say a lot of things. Those things would not be noticed for long periods of time. So the first thing is, Wikipedia in 2023 is not Wikipedia of 2006 or 2008.
There's just been a lot of effort on Wikipedia to build various bots, various things that look for statements that don't have citations, vandalism, unsourced changes, new users coming in from unidentified IPs who are strangely editing a lot of pages with PR content, that sort of thing. So there's that issue. It is true that on some smaller pages you can get away with this and that on Wikipedia for a little bit of time.
That's not impossible. But in general, if it's a good Wikipedia page, you don't have to trust the Wikipedia page. Because when you come to the Wikipedia page, anything that is contested, or could potentially be contested, is going to have a link, a footnote to it, and you're going to be able to use those links to verify it. And the thing too is, it doesn't have to be perfect.
Wikipedia doesn't have to have an answer to every single one of these. Because this is the big thing, the web is abundant. If you came to The Mississippi Ranger, this is the source I want to use, and it turns out you can't find any information on The Mississippi Ranger, it's not like you're out of luck. It's the internet. You can go and find a source that you can actually find information on.
And so it doesn't have to be perfect because the question is not, is this specific source the perfect source for what I want to do, it's, is this source sufficient for what I want to do, or should I move on and find something else? You can move on and find something else where there's a better Wikipedia page.
Guy Kawasaki:
Do you think the day will ever come when you're asked this question and your answer is check ChatGPT or Claude or Bard or Gemini or anything? You obviously said check Wikipedia. You did not say check an LLM.
Mike Caulfield:
Yeah, right now I wouldn't say check an LLM. There are a couple of reasons for that. They are improving, but with a lot of them the information's out of date, though some have gotten better with that. They tend to do really well with structure. They don't always do as well with granular facts. They're masters of style and structure, but granular facts have been a persistent problem.
And interestingly, there were some predictions that a lot of that would be ironed out by now, but there's a particular thing in an LLM that people may not realize, which is that those algorithms are set to have a little bit of flexibility in them, like a little bit of play.
Otherwise, you'd always get exactly the same prediction for every set of words in front of it. You'd never get that real generative quality. And that little bit of play that you have to put in there so that it can do these things is also the thing that's giving you what people call hallucinations.
It's going to be a little tougher to work that out than I think people realize, because the same thing that's giving you the appearance of creativity in the LLM, the thing people associate with generativity, is the same thing, on the other side, that sometimes goes too far and creates these hallucinations. I'm not saying that it'll never work out. The jury's out. For the moment, our take is this: people tend to think that LLMs are great tools for novices.
We actually think they tend to be better tools for experts, or for cases where a person is an expert in one field and has some moderate knowledge in another. Because you're looking at the output and you have to evaluate it, novices can get overwhelmed with what they would need to check.
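(For readers who want to see the "play" Caulfield describes made concrete, here is a minimal sketch of temperature-based sampling, the standard mechanism for adding that flexibility. The token scores and the example prompt are invented for illustration.)

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick a next token from model scores. Temperature 0 is greedy and
    deterministic; temperature > 0 adds the "play" that enables variety
    and, occasionally, confidently wrong answers (hallucinations)."""
    if temperature == 0:
        return max(logits, key=logits.get)  # always the top-scoring token
    # Scale scores by temperature, then convert to probabilities (softmax).
    scaled = {tok: s / temperature for tok, s in logits.items()}
    top = max(scaled.values())
    exps = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(exps.values())
    weights = [exps[tok] / total for tok in exps]
    # Sampling means lower-probability tokens can still win sometimes.
    return random.choices(list(exps), weights=weights, k=1)[0]

# Invented scores for the word after "The capital of Australia is":
scores = {"Canberra": 2.0, "Sydney": 1.4, "Melbourne": 0.9}
print(sample_next_token(scores, temperature=0.8))
```

At temperature zero this always prints Canberra; raise the temperature and the lower-scoring cities start to win occasionally, which is the creativity-versus-hallucination trade-off in miniature.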
Guy Kawasaki:
So since we're on the topic of Wikipedia and LLMs, do you think that LLMs are an existential risk to Wikipedia? Because people will go to their favorite LLM and just ask a question where they may have gone to Wikipedia before. So I understand that Wikipedia would be one of the best sources for LLMs, but what happens if people don't go to Wikipedia anymore and they just go to LLMs? It's the same threat for search engines.
Mike Caulfield:
Yeah, and it's even worse than that, because the other worry is, what if people who want to stack up Wikipedia credibility, Wikipedia clout, start just using LLMs to write their parts of Wikipedia pages? Now you're potentially introducing a lot of these errors into Wikipedia via the LLMs. So even if you go to Wikipedia, you're getting LLMs. And I know that Wikipedia is working on some ways to do detection and so forth, and on some policies about what you can and can't do.
But yeah, I mean, it's an issue. Are LLMs an existential threat to Wikipedia? I think existential threats don't have to be successful. They just have to threaten your existence. You can have an existential threat that turns out not to result in the death of something.
And I think in that way, yeah, I think so. I think there's a future where LLMs could do that. That would be really sad because, of course, a lot of the productive capability of LLMs comes from a lot of people putting in time and writing things like Wikipedia.
It's a little bit of a, I forget, what's the opposite of a parasitic system? A symbiotic system. You can either have a parasitic system or a symbiotic system, and there's one future in which LLMs are parasitic. They take all the stuff that people have worked on, that provides value.
They suck out that value. They spit it back at the user. They erode the business model for these other things and just suck their host dry. And then there's a symbiotic future; a symbiote is like a parasite, but it lives in a way that benefits its host organism.
And that symbiotic future, I think, could be one where we figure out how to make these things work together, play to their individual strengths. And we teach people when you want to go to one and when you want to go to the other. But I think we need that symbiotic future, and I think part of that symbiotic future is people figuring out when it's best to consult something like ChatGPT and when it's best to consult Wikipedia.
Guy Kawasaki:
Okay, because LLMs vis-a-vis search engines, I know I search on Google a whole lot less these days. When I have a question like, how do I add an HP printer to my Macintosh network, I used to go to Google for that and get 475,000 links. But now I go to Perplexity, which is the world's stupidest name for an LLM, but I go to one of these things and it gives me the answer, not links.
Mike Caulfield:
And this relates to something in our book, which is that what most people are looking for is a summary. Your average informational need is a summary, because the number of things in which you're not an expert far exceeds the number of things in which you are an expert. And the business model for summary is not great. It hasn't been great for a while. The business model is in making an argument. You take all your facts, maybe you do a little bit of summarization, but you make an argument.
You say, "This is the way things should be," and you do that, or the business model isn't selling people things like, this will solve your problem, not I want to do a summary. There is this problem that AI addresses for some people, which is that people go to the internet, and they quite rightly want a summary of something.
And instead of getting a summary, they get a list of a lot of people making arguments for something instead of, I just want to know what the thing is first. Or a lot of people selling something saying, "Hey, that problem you have, here's your solution," and people get frustrated with it.
And so part of what we have to do, I think, and this is outside the scope of the book, is come up with a business model for summary, to get people the answers they want. And again, not a parasitic business model where the summary comes from a lot of work that people did without giving back to or supporting the people who did that work. We're starting to veer off into solving the problems of the world here, but that's okay. I'm up for it.
Guy Kawasaki:
But in my simplistic world, if I did a search, how do I add an HP printer to my Macintosh network, listen, just like on a search engine, if the right column has ads for toner cartridges and HP printers, I'm fine with that.
Mike Caulfield:
Yeah.
Guy Kawasaki:
I want you to answer this question because I don't know how to answer it, which is how do you tell if a large language model is making shit up and having hallucinations?
Mike Caulfield:
How do I tell? Generally, I consult it for something that I have some idea about already, and very often I'm checking an understanding that I already have when I go there. Now that said, there are different sorts of knowledge, and they're not the same.
You're talking, for example, about procedural knowledge. You want to set something up. There's one nice thing about procedural knowledge, which is, assuming you're not operating a nuclear power plant, you try the procedure and see if it works.
And then if it works, great. That's confirmation. If it doesn't, then you find something else. And so for a lot of procedural knowledge, assuming you're not working with dangerous chemicals or something like that, yeah, I could see someone doing that. They want to know how to do this.
And as a matter of fact, the classic example of that is that LLMs are really good at writing code. If you're trying to write a bunch of computer code to reorganize the files on your drive by date and rename them or something like that, make a copy of the files before you run it, the LLM can write that.
You can run it. You have some confirmation that the information you got was good.
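(For illustration, the kind of script Caulfield has in mind might look like the sketch below. It copies files into date-named folders rather than moving them, so, per his advice, the originals stay untouched while you verify the result. The paths are hypothetical.)

```python
# Organize files into folders named by modification date, renaming as we go.
# We copy rather than move, so a bug can't destroy the originals.
import shutil
from datetime import datetime
from pathlib import Path

SRC = Path("~/Documents/inbox").expanduser()       # hypothetical source
DEST = Path("~/Documents/organized").expanduser()  # hypothetical destination

for f in sorted(SRC.iterdir()):
    if not f.is_file():
        continue
    day = datetime.fromtimestamp(f.stat().st_mtime).strftime("%Y-%m-%d")
    folder = DEST / day
    folder.mkdir(parents=True, exist_ok=True)
    shutil.copy2(f, folder / f"{day}_{f.name}")  # copy2 keeps timestamps
    print(f"{f.name} -> {day}/{day}_{f.name}")
```

Because the outcome is directly checkable, this is exactly the procedural case where verifying LLM output is easy.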
The problem comes, and this is where our book fits in, and I'm glad you mentioned this, because in another interview I did, someone was really fixated on, why do we even need this stuff if I'm just looking up how to set up YouTube TV on my computer or something like that? And yeah, for that set of things, it's not really a book about that. But there's another set of things where you can't directly verify the knowledge that you're given, and that's different.
So someone says, "Look, fentanyl deaths in this country are at all-time high. We need a federal intervention for this," they show you a chart, and maybe that's true. In this case, probably is true. They are. But there's no way for you to directly go out into the world and verify whether that information was true or false.
And for that sort of thing, I wouldn't trust an LLM unless you really know the subject. I would, in that case, try to find something that was directly written by a human, particularly a human that has a reputational stake.
Someone who, if they don't take care with the truth, is likely to pay at least some reputational consequence. Because that's how we build up trust: we know that if this person gets it wrong, at the very least it's an embarrassing next day at work, which is often enough to get people to get things right.
Guy Kawasaki:
Therefore, I could make the argument that the fact that LLMs have hallucinations means that Wikipedia still has a place in this world.
Mike Caulfield:
Oh yeah, I would absolutely agree with that. And one of the things about Wikipedia and the way it's structured is that editors have stakes. I don't think people understand this, but some people play Wikipedia like a video game, in the sense that there's a dashboard there. And when their changes get reverted, it's painful. The flip side of that is sometimes you get into these wars, which people get very emotional about, but people who work on Wikipedia have reputational stakes.
They have a dashboard that shows how many times they created an article that stayed up, how many times they contributed an edit that stayed there, how many times it was reverted. And these things keep the majority of people on Wikipedia on track. The company behind ChatGPT has stakes, but the actual thing producing the output doesn't have any stakes. And that's a big difference.
Guy Kawasaki:
No, not at all. Okay, so let's say that I go to a website and it's got this .org domain. I go to the about page and it talks about ending climate change and making America great again. That's not a good phrase. It looks like it's a legitimate .org, .edu, something. And so what tricks do people use to make a site look credible? In your case, it's owned by a political consultant who's trying to foster anti-union voting or something.
Mike Caulfield:
Kill the minimum wage or something like that. Yeah, yeah. Here's the core of what most people do. We talk in our book about cheap signals and expensive signals. An expensive signal is like your reputation. It takes a lifetime to build a reputation.
You're very careful about your reputation. You have a history with people that you can maybe find online over years. If you're a reporter, you can look at the articles you wrote twenty years ago at The Washington Post and the articles you wrote yesterday at The Guardian.
So there's reputation, that's an expensive signal. And then there's what we call cheap signals, and cheap signals are anything that gives the appearance of authority or expertise or being in a position to know that is relatively cheap to get.
So a classic example of that is .org. The cost of getting a .org is twelve dollars and ninety-five cents; you go on Namecheap and get a .org. But someone might look at that and say, "Oh, it's a non-profit organization." So it's a cheap signal. Being a non-profit organization and having a bunch of people that talk about your work over time, that's very expensive.
That takes a long time to cultivate. Buying a .org does not. In a similar way, having a good layout on these sites: there may have been a time, in the 1990s, when having a good layout, a crisp look, at the very least meant that you had some money.
You hired a web developer who could sling that code, get something up, cut it all up in Photoshop, and lay it all out in HTML in Dreamweaver or something. It signaled something. Maybe not always a lot, but it signaled, look, someone believes in these ideas enough to fund them.
Nowadays, it signals nothing. I think most people know this, but in case they don't: you can get a website that looks as good as your average newspaper's. Just go to WordPress, pick a template, start typing, and you'll get something. In many cases, it looks cleaner than your average newspaper, because if you're faking a newspaper, you don't have to run dozens of ads. That's a cheap signal too.
And so what the people who want to fool you do is look at all the things people use to get a sense of whether something has a good reputation, and then pick out the ones they can get done in an hour, or in two minutes. They do those, and that's what they use to fool you.
And so whenever you're looking at something, what we encourage people to do is think about how hard it would be to fake that. Does it require getting in a time machine and building ten years of relationships, or does it involve going to Namecheap and buying a domain name?
And there's a vast difference between the two things. What we found in our work is that people made no distinction between them. As a matter of fact, people tended to overvalue the cheap stuff because it was more immediately apparent; they could see it looking at the page. Whereas they tended to undervalue the expensive stuff, because you had to go out and say, "Hey, if this guy is an expert in this, there's probably at least a newspaper article or two that quotes him as an expert."
So that stuff took a little more effort, just a little bit more effort. But it's so much better evidence than the stuff that's about the surface of the page or the domain name or whether they have an email address you can mail, or whether there's an avatar picture of a real person who might be a real person, might be an AI person, might be some other person that doesn't know their picture's being used.
Guy Kawasaki:
And in this scenario, when you land at some organization's home page, would you also go to Wikipedia and look up that organization?
Mike Caulfield:
Yeah, absolutely. Absolutely. In fact, one of the things we found Wikipedia is best for is telling you what an organization is about. And that doesn't mean telling you whether the organization is true or false; that's a nonsensical idea. Is the organization true or false? It doesn't even necessarily mean, is the organization credible or not. It just means, is this the sort of source I thought it was, that I thought I was getting my stuff from? So for example, you mentioned some of these advocacy sites.
One person might go to an advocacy site, say stopminimumwage.com, and it says, "We're a coalition of restaurant workers just looking to protect our lifestyle with tips, and this bill's going to be horrible for us." And that person might go to that and be like, okay, I know they're not restaurant workers. I know this is run by a lobbyist firm, but I'm interested in seeing what arguments the lobbyist firm is advancing. And if that's your jam, then great. Go wild.
I want to see what a lobbyist organization thinks. I go to a lobbyist organization's page. I find out what the lobbyist organization thinks. Maybe they're making a good argument. Maybe it's something I should think about. But most people, when they come to something, think, this is a research group, or this is a community organization, this is a grassroots organization. I should say again, I don't know, I'm just making names up here. I hope that's not a real URL. The idea is you come to that page.
You think it's one thing. You go to the Wikipedia page and it says, "Hey, this organization was originally founded by a coalition of the nuclear energy industry and the coal industry." Okay, maybe they have something interesting to say. I'm not saying their facts are wrong, but it's also maybe not your best first stop for a summary of what our energy future should look like. You might want to go somewhere else.
Guy Kawasaki:
Okay, so now tell me, do you think that one hundred Twitter employees sitting in Austin will have any impact on Twitter/X?
Mike Caulfield:
I guess it depends on what impact you're thinking here.
Guy Kawasaki:
Impact is a low bar: safeguarding truth.
Mike Caulfield:
Twitter has placed its eggs in the Community Notes basket, which is a way that users can add labels to things and rate them and so forth. They say it's inspired by Wikipedia. There are some elements of it that are reminiscent of that, and some that are not. They've invested less in their trust and safety team.
I just say, approach these things with caution. On Twitter/X, I've been advising people, to the extent they stay on it, to veer more towards their Following tab at this point than the For You tab, because that algorithm to me seems like it's more and more tuned to just promote sensational content from a bunch of people I've never seen before.
Guy Kawasaki:
But Mike, honestly, when you read the news that Elon Musk says, "We're going to get one hundred people in Austin to address these issues on Twitter," did you or did you not start laughing? This is a yes or no.
Mike Caulfield:
I sighed. Let's say that I sighed. I think if you want to do that at scale, you need to fund it at a better level. I think it's complex. I do think it's complex. I do think that even old Twitter never quite had it right. It's hard thinking about how to do moderation, how to do labeling, how to do context, how to do all these various things.
It's a lot more difficult, I think, than people recognize. You're always looking at these competing goods that you're trying to protect, and you're trying to do that in this fast-paced environment where you're making decisions in the moment.
From the perspective of our book, I think for the time being you're a little bit on your own. I hope we come to a future where context is rightly seen as a core competitive advantage and community feature for any information offering. That we don't see this as an add-on, but we say, look, people are coming to this for information. Information has to be contextualized, and we should compete by providing the best contextual engine for that information.
That means labels, that means a well-staffed team, et cetera. But we're not there yet. I don't think people fully understand that.
Guy Kawasaki:
My solution to this is that by default, a social media home feed, i.e. the stuff that's flying past you, should be only the people you have manually followed, because at least that way you can control it. If I only want to follow The New York Times, The Washington Post, and NPR, I don't want you shoving shit into my feed from Rudy Giuliani and whatever, QAnon and all that. I would pay for that service.
Mike Caulfield:
I also like a platform called Bluesky, and it's got this idea of the customized feed. You opt into a default feed, which is more or less what you're saying here: everybody in that feed is somebody you're following. It has a very simple algorithm you can understand. The initial Bluesky algorithm was, people you follow, plus content that got twelve likes. Twelve was the magic number for a while. And then, yeah, you could choose other feeds.
If you want to go a little wider, you could go a little wider. If you want people you don't know who are talking about sports teams that you like, but maybe not specifically with a hashtag, you've got something that pulls that all together. And so I do think that thinking about the user experience in that way is probably the future there. Right now, a lot of platforms are one feed, and on Twitter it's an odd feed.
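(For the curious, here is a toy sketch, not Bluesky's actual code, of the kind of simple, inspectable feed rule being described: posts from people you follow, plus anything that cleared a fixed like threshold. All names and numbers are illustrative.)

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int

def simple_feed(posts, following, like_threshold=12):
    """A transparent feed rule: people you follow, plus anything that
    reached the like threshold (twelve, per the early Bluesky heuristic
    mentioned above). No hidden ranking to second-guess."""
    return [p for p in posts
            if p.author in following or p.likes >= like_threshold]

posts = [
    Post("alice", "Season preview for the home team", likes=3),
    Post("bob", "Hot take nobody asked for", likes=40),
    Post("carol", "Local election results thread", likes=15),
]
for p in simple_feed(posts, following={"alice"}):
    print(f"{p.author}: {p.text}")
```

The point is not the specific rule but that a user can read it, predict it, and opt out of it.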
Guy Kawasaki:
Okay, so next loaded question.
Mike Caulfield:
Okay.
Guy Kawasaki:
What do you make of Facebook blocking searches on Threads about COVID? And they're saying that, oh, you can't search for the word COVID because it's going to lead to disinformation. Honestly, my head is exploding. This is Facebook telling me this.
Mike Caulfield:
Yeah, I don't think it's good, obviously. Generally, you do want people to be able to find the information they need on the platforms that they're on. I think the current policy environment is such and the current political environment is such that there are some subjects that are just a headache to these platforms.
I look at Facebook decisions like that, and what I see is not somebody who wants to be some sort of Orwellian 1984; I see a company that keeps getting called in front of Congress, half the time by Democrats and half the time by Republicans, that is worried and has a lot of headaches.
They're not selling a lot of ads next to COVID information and would just like the headache to go away. But I don't think it's a great solution. It would be a great solution if your site were about knitting and a bunch of people were posting about COVID; you might just say, "Look, no more COVID stuff on the knitting site. It's a headache. I don't want to deal with it." But if your site's Facebook, that's different. I don't think it's a great solution.
Guy Kawasaki:
Since we brought up the subject of COVID, so let me ask you, Mike, let's say that one day you wake up and your ears are ringing. Never been ringing before. Now they're ringing. So you, Mike Caulfield, where do you go on the internet to investigate this ringing in your ears?
Mike Caulfield:
I'd probably go to Dr. Google like everyone else. The first search that you do will tell you that, just as you suspected, it proves you have cancer. And then you've got to think about what you did wrong with that search. So yeah, this is the thing. You do a search. You get a set of search results back.
And one of the things that we really encourage people to do is look at that set of search results and think, not, is this the answer I want, that's not what you want to gauge it on, but, is the set of sites coming back the sorts of sites I want, and are they talking about the things that I'd actually expect them to be talking about? I do joke that when you go to Google and search your symptoms, it always seems like you've got some tragic illness at first, and then it turns out maybe you just have swimmer's ear.
But you put in your symptoms, sure, and you execute that search, and then you look at that page. One of the things Google has now is these three little vertical dots on each result. And if you're trying to figure out, hey, who on this page might I want to get an answer from, you can click on those dots and find out: oh look, this particular center is a community hospital. This particular site I have no information on. I don't know who they are.
This particular site is a well-known site that sells supplements. And so you can browse and quickly figure out where you want to go. Sometimes what you want to do in that case is: you do the search, you find something that seems like a good source of information, you check the vertical dots, you go to that site, and maybe you do a search on that site. If it's the Mayo Clinic or something like that, maybe do an internal search on the Mayo Clinic once you've found a site that you trust.
And also, we talk in the book about, don't give Google tells. Don't give it clues about what you want to hear, or what you expect to hear, or what you're worried about hearing. Try to be very bland with it. Again, if you type in ears ringing, is it cancer, you're going to get a lot of pages back that tell you, yes, it's cancer. If you type ears ringing common explanations, you're maybe going to get something that's a better first stop.
Guy Kawasaki:
How about, will turmeric cure the ringing in my ears?
Mike Caulfield:
Exactly. Exactly. You put those words together, and in general you're more likely to get something back that's going to say that. If you wanted to do that, again, you can cue Google in these ways. Google has a synonym engine now too, so you don't have to be perfect with this. You just try to put in words that signal to Google the type of answer that you're looking for.
So if you wanted to put in, would this spice cure cancer, and you really wanted to know, you might put something like fact check after it, to say, look, I'm not looking for something that says this. I'm looking for something that investigates this, that actually checks this. So you're going to use what we call bare keywords. Don't get too fancy. There's a whole Google syntax.
I would not bother to learn it, because my experience with other people has been that they forget it. Invest your time thinking about: I have my query; what's a word that's going to signal the genre of thing I want back? Is it, this spice cures cancer fact check? Is that what you're looking for? Or is it something else, like, why do people think this spice cures cancer? Come up with a keyword that cues Google to give you the kind of answer you want.
Guy Kawasaki:
Wait, I want you to explain this. You're telling me that if I ask a question like that and I add the two words, fact check, I'm going to get a better response?
Mike Caulfield:
If you want to fact check, you're going to more likely get a fact check.
Guy Kawasaki:
Oh man, this is worth the price of admission. I didn't know that.
Mike Caulfield:
And it's not anything specially built into Google. It's just that when you put in fact check, Google runs it through its synonym engine. It looks not only for fact check, but for things that might be synonyms of fact check and so forth. It comes up with a series of terms that it expands the search with and sends out there. And if a page has any of that, fact check, reality check, checking, whatever it is, Google puts pages that have that at the top.
Guy Kawasaki:
So what if I said Donald Trump won the election fact check? What would happen?
Mike Caulfield:
You would get a fact check that would say, probably, I think, I hope, that Donald Trump won the election in 2016 and did not in 2020.
Guy Kawasaki:
Okay. How much credence do you give Google News?
Mike Caulfield:
I used to like Google News quite a bit better. It was a little more integrated with the main product, which meant you could jump back and forth. It's an interesting thing right now. I generally find that if you use these bare keywords in the Google search, the subject you want plus, if you want a newspaper article, something like newspaper article, it goes through, does this whole synonym thing, and gives you pretty good results.
If it doesn't, then I do say, okay, if you're not getting what you want, maybe go to Google News. But Google News right now is this hybrid of a news reading environment, a search engine, and a number of other things. There are some good features in it too. I generally stop at Google first, and then I go to Google News if it hasn't worked out.
Guy Kawasaki:
And what about Google Scholar?
Mike Caulfield:
Google Scholar can be really helpful. There's a lot of criticism of Google Scholar. Some of the ways it calculates citations aren't as precise as some of the older, more manual ways. Certain things in there can be gamed. There's a recent paper out on gaming Google Scholar through a variety of means.
So it's not perfect. Again, part of it is this world of, is this what I think it is? If someone says they're a published academic on some subject, going into Google Scholar and asking, "Hey, did this person write anything, and did anybody cite it?"
That's pretty good. You can also, if you're interested in a journal, type the name of the journal into Google along with the words impact factor, which is a number people use to measure journal influence. It's not a precise number. It just tells you, for every article published, how many times is it cited. You take all the articles in a journal over time, and then you look at how many times that journal was cited. What's that ratio? But yeah, if it has no impact factor, I worry.
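(As a rough worked example, with invented numbers: the ratio he's describing is just citations divided by articles. The official two-year impact factor counts citations in one year to articles from the previous two years, but the idea is the same.)

```python
# Rough impact-factor arithmetic with invented numbers.
articles_published = 120   # hypothetical articles in the citation window
citations_received = 540   # hypothetical citations to those articles

impact_factor = citations_received / articles_published
print(f"Impact factor: {impact_factor:.1f}")  # 4.5 citations per article
```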
Guy Kawasaki:
And will this impact factor show you that there are now scientific journal farms, where you pay to get published so that you're cited and can cite something? Will Google Scholar tell you that this is a PO box in St. Petersburg that has published 2,000 articles about turmeric and tinnitus?
Mike Caulfield:
Yeah, Google Scholar won't tell you that. I know that they're trying to address it. As you probably know, every online information service is just a history of a war with some form of spam, and Google Scholar is exactly the same way. There are ways to spam that system and get stuff that looks like it has more credibility than it should. But that does tend to be at the margins still. And the other piece is, you don't necessarily have to use one method.
One of the things that people have misconstrued about science and scientific articles is, oh, you're going to read one article and it's going to give you the answer, and that's what science is like. Oh, there was a scientific article that showed X. You hear this in the news; it kind of follows the same trend. And that's not really how things work. What actually happens is, this article seems to demonstrate something.
This article seems not to. Another article pulls together the articles that seem to demonstrate it and the articles that seem not to into something called a meta-analysis. And you progress with things like this. People tend to get too caught up on the individual article that proves everything, rather than asking, when I survey this area, what do the bulk of people say? You don't have to agree with the bulk of people.
I'm well-known for disagreeing with the bulk of people many times, but you do have to know what the consensus is. Before you go against the consensus, you have to know what it is. And so we do recommend something we call zooming out, which is, rather than jumping immediately into, oh, I found this article that shows XYZ, step back and try to say, "Okay, what do the bulk of people say? Can I find an article that summarizes what the research has said up to now?"
Sometimes that place is Wikipedia. One of the things we have found in talking to academics is that as much as academics are suspicious of Wikipedia when they're teaching their students, they think it's a little fishy, if you ask an academic who is moving into a new area adjacent to their own, trying to understand the consensus opinions of that field, they sometimes go to Wikipedia, because you're going to get a really clear summary there of what that is.
Guy Kawasaki:
Okay, last question for you. I've got to tell you, I don't know which way I should ask this, because what I'm trying to get at is this: if you're a parent, what would you tell your kids are the best practices for figuring out the truth? But I could also make the case that if you're a kid, what do you tell your parent about how to figure out what's the truth? So just give us a checklist. These are the best practices.
Mike Caulfield:
All right. So you want to know who produced your information, where it came from. And if there's a claim being made, some assertion being made, you want to know what other people, people we call in the know, people with more than the usual knowledge about the thing, think of that claim. And that's just where you start. You should know where your information came from. You should know what other people have said about that issue.
And if you make sure that you're doing those two things when you enter a new information domain, you're going to do better. If you don't do that, what happens is you get pulled down this garden path of never-ending evidence. Some people criticize me when I talk about this as the rabbit hole. But it's not even a question of whether the conspiracist rabbit hole is real or not.
It's that if you find yourself being pulled from piece of evidence to piece of evidence to piece of evidence, and you're never backing up and saying, hey, where did this come from? Who produced it? That's number one.
And then two, what's the bigger picture here? What do people in the know say about this? What should I know? Otherwise you end up endlessly doing what I call evidence foraging, but you're not getting any benefit from it. It's almost compulsive. So what you want to do is think about those two things.
You want to select the stuff you're consuming a little more carefully and intentionally. In the second-to-last chapter of the book, we talk about some of that as critical ignoring: rather than just drinking from the fire hose of the internet, figure out what you want. Figure out what you think would constitute good evidence, good sources. Invest the effort to figure out if that's what you're getting, or if you're just running from quick hit to quick hit on TikTok.
Guy Kawasaki:
And just for greater practicality and usefulness, when you say go and find out what people in the know are saying, or what's the common understanding of something, where do you get that?
Mike Caulfield:
This is not the end of it, but you probably start with typing in something like history of Israel-Gaza summary, and then looking at that page and saying, okay, according to my own standards, which one of these would be the best?
And one of the things I want to stress is, you might want something that's very dry, or you might want something that has a little bit of an activist lean, but know what you're looking at and choose it. And so what's happening there is, you're in this TikTok loop or this little Twitter doomscroll, and you're taking your agency back.
You're stopping and saying, okay, where did this come from? If it's not the sort of thing I want when I search Google, or I go somewhere else, then what do I want? And then usually you need some form of search to get there. But it's about taking your agency back and asking those important questions. And if you don't get good answers to them, deciding to go somewhere else.
Guy Kawasaki:
Okay, listen, I think your book and the work you're doing literally could help save the world. I hope every school teaches a course like this. Wow!
Mike Caulfield:
Yeah, we do too.
Guy Kawasaki:
So that's Mike Caulfield. I hope you learned a few things about sifting through what we hear and see and read online to help us determine what to believe and what not to believe. Don't forget to check out his book, Verified, co-authored with Sam Wineburg, another Remarkable People guest. So off you go, seeking the truth. I'm Guy Kawasaki. This is Remarkable People. My thanks to the rest of the Remarkable People team.
That would be Shannon Hernandez and Jeff Sieh, sound designers. Sieh, not SIFT, Sieh. And then there's Madisun Nuismer, producer and co-author of Think Remarkable. That's the book you all should read. Tessa Nuismer is our researcher, and then there is Luis Magaña, Alexis Nishimura and Fallon Yates. We are the Remarkable People team, and we are on a mission to make you remarkable. Until next time, mahalo and aloha.