The HPS Podcast - Conversations from History, Philosophy and Social Studies of Science

S2 Ep 9 - Carl Bergstrom on 'Science and Misinformation'

HPS@UniMelb | Samara Greenwood and Indigo Keel | Season 2, Episode 9

Today's guest is Professor Carl Bergstrom from the University of Washington. Carl has been touring Australia over the last few weeks and we were delighted when he agreed to join us while he was in Melbourne.

Carl works across evolutionary biology, informatics and science studies, and has become particularly well known for his work on the spread of misinformation and what we can do about it. Together with his colleague Jevin West, Carl developed a university course named ‘Calling Bullshit: Data Reasoning in a Digital World’, which they have since turned into a best-selling book.

In this episode, Carl discusses a range of topics, including the role institutional norms and incentive structures play in shaping science, the challenges of studying misinformation, and why he believes we must urgently turn our collective attention to the study of collective human behaviour if we hope to address our current information crisis.

Transcript of the episode available here: https://www.hpsunimelb.org/post/carl-bergstrom-transcript-s2-ep9

Resources related to the episode:

Carl's Website: https://ctbergstrom.com/

'Calling Bullshit' Website: https://callingbullshit.org/

Guardian Article: https://www.theguardian.com/science/2020/aug/01/carl-bergstrom-people-are-using-data-to-bullshit

Thanks for listening to The HPS Podcast with current producers Samara Greenwood and Carmelina Contarino. You can find out more about us on our blog, website, and Bluesky, Twitter, Instagram and Facebook feeds. Music by ComaStudio.

This podcast would not be possible without the support of the School of Historical and Philosophical Studies at the University of Melbourne.


Samara Greenwood: Hi, Carl. Thank you for joining me on the podcast today.

Carl Bergstrom: It's great to be here.

Samara Greenwood: Now, you were trained in biology and you're a professor of biology, but a lot of your work crosses over into the philosophy and sociology of science. Could you tell us how you came to work in these parts of HPS? Have they been of longstanding interest to you?

[00:00:17] Carl Bergstrom: I mostly got started in the early 2000s, when I was really interested in the economics of scientific publishing. I started studying the way the economics of scientific journals worked, thinking about the different publishing models people were using, and thinking about how those models affected the kinds of science we end up submitting and the way journal hierarchies end up emerging.

That led me to, I guess, start really thinking about these questions fairly deeply. And this brought me to stepping outside of science and looking at science from an external perspective as the subject of inquiry. I think the rest sort of followed from there.

[00:00:55] Samara Greenwood: Yes, and so I'm curious whether you've found any significant differences between working in science and considering how science works from this broader philosophical or sociological perspective?

[00:01:05] Carl Bergstrom: Well, I guess it's the difference between being the rat and watching it run through the maze, right? It's kind of fun to step out and look at all of that. 

At the same time, even when you're stepping out, you're not stepping out as far as you think. You're still embedded in a system of incentives and ways of thinking and everything else. So, you think you're watching from above, but you're still in your own sort of maze.


[00:01:29] Samara Greenwood: You're still connected into very similar kinds of systems. Excellent. And so, some of your research looks into how social norms and institutions influence the strategies scientists use. Could you tell us a little bit about this particular aspect of your work?

[00:01:43] Carl Bergstrom: Yes. This is the main direction I've been working in for the last five or six years in the ‘science of science’. There's a whole movement called ‘The New Economics of Science’ that was founded around 1990. It took economic tools - from microeconomic theory, from game theory, from the study of auctions and contests and markets and things like that - and applied them to start to understand why scientists do what they do.

A key tenet of that was, as the philosopher of science Philip Kitcher put it, that a lot of the philosophy of science is about what epistemically pure actors would do - individuals who just want to understand things and don't have any other concerns or cares - whereas the real world of science involves epistemically sullied actors.

Which is kind of a fun term, because we are all 'epistemically sullied'. We all have to make a living in science if we're going to stay around. So, we have to respond to the various pressures, incentives, targets and everything else we're given if we want to get hired and promoted, win grants, have our papers read, and all of this. So, we have these agents that are in fact epistemically sullied. That's all of us.


The new economics of science recognized this and said, ‘you know, the really key thing to understanding scientific activity is to understand the incentives individual scientists face and how they respond to those incentives in choosing their strategies: what kinds of research questions to ask, how to go about doing that research, what to publish, how to frame what they publish, etc.’

Where do those incentives come from? Well, they come from the norms and institutions of science. Some of these are brick-and-mortar institutions, funding agencies and things like that. But there are also institutions that are, in a sense, customs, like the institution of authorship.

What does authorship constitute? What does authorship order mean? What's the difference between being a first author and a middle author? What constitutes sufficient novelty for publication? All of this is in the scope of the norms and institutions of science. These create the incentives for researchers, who then choose their research strategies. Those research strategies then determine what we know about the world, or what we think we know but is wrong, or the things we don't find out.

And what's so important about that is this key, pretty direct connection between the norms and institutions of science and our understanding of the world. The thing about the norms and institutions of science is that, while science is, in my view, the greatest human invention of all time, it's also very arbitrary. It's something that culturally evolved out of the way people did natural philosophy, largely in Western Europe, a few centuries ago, with continuously running journals for a good fraction of that time. It's very culturally contingent.

It's also jury-rigged atop the evolved psychology of one particular species of ape - namely us - to get us to cooperate well enough to do these things. You know, if you sat down and thought about it from scratch, having never heard of science, you wouldn't just deduce science as we practice it.

That's both scary, because it makes our knowledge seem arbitrary, and really empowering, because it makes us realize, ‘Hey, if science isn't functioning as efficiently as we would like, there are ways we could make it function more efficiently.’


That involves perturbations to these norms and institutions. If we have some kind of theoretically grounded framework for thinking about how norms and institutions connect up with knowledge, we can start doing theoretically informed science policy instead of the ‘gut feeling’ informed science policy that is all too common.

[00:05:19] Samara Greenwood: When you're thinking about these norms and institutions, are there certain interventions that you're finding potentially more valuable than others?

[00:05:29] Carl Bergstrom: So, there's lots of different questions you can think about, right?

One that's fairly straightforward is some work we've done on grant funding and grant funding contests. In the US, in order to get funded to do research, you typically write a research proposal - ‘here's a brief overview of what I propose to do’ - and you'll often write them to the National Science Foundation or the National Institutes of Health, these great big funding bodies.

A typical standard grant might be anywhere from a couple of hundred thousand dollars to a couple of million dollars, and the preparation time involved might be 200 hours of work - say, a professor working on it, assisted by various other people putting in additional time.

The thing is, these grant proposals, first of all, take all this time. Also, your chances of being funded are quite low; often the fraction that actually gets funded is about 10%. And the proposals themselves aren't actually useful additions to the corpus of scientific knowledge. They're just things that get done as part of a contest, in order to try to win this prize.

So, what ends up happening is that people in medical schools in the United States spend up to half of their time writing grants that no one will ever read except for two reviewers, who are required to keep them confidential and who read them quickly on an airplane to Bethesda, Maryland, on the way to score the grant. That's a really wasteful use of all this scientific expertise and time and activity.


So I think there are more efficient ways to allocate grant funding. In our paper we talk about how, when the pay lines were about 50 percent, people were likely to get funded. People weren't having these very intense competitions in what we call ‘grantsmanship’ - the nuances of making your grant seem more competitive, even though those nuances don't really matter. People could write down what they wanted to do, send it in, and the funders could decide whether they wanted to fund it or not.

A lot of them got funded. It was fairly efficient. People didn't spend a lot of time. But now if people are writing ten grants to get one, and they have to now put more time into each grant because it's a more intense competition, this becomes a tremendous waste. So, there I'd like to see people finding more efficient ways to hand out grant money.

And so, we proved some theorems about how what we call a partial lottery might be more efficient. The idea is you might have, say, three piles. You'd say, ‘Okay, these grants aren't worth funding. These grants have to be funded. These in the middle are fundable. So, we'll fund all the ones that have to be funded and, with the extra money, we'll just pick randomly from the fundable ones.’ We have some theorems about why this would be a more efficient way to do things.
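To make the allocation rule concrete, here is a minimal sketch of that three-pile partial lottery in Python. The scores, thresholds and budget below are hypothetical placeholders rather than anything from Bergstrom's paper (whose actual results are game-theoretic); this just illustrates the mechanism he describes.

```python
import random

def partial_lottery(proposals, must_fund_score, fundable_score, budget):
    """Allocate awards via a partial lottery.

    proposals: list of (name, review_score) pairs.
    must_fund_score: proposals scoring at or above this are funded outright.
    fundable_score: proposals scoring at or above this (but below
        must_fund_score) enter a uniform random draw.
    budget: total number of awards available (assumed here to cover
        at least the must-fund pile).
    """
    must_fund = [p for p in proposals if p[1] >= must_fund_score]
    fundable = [p for p in proposals
                if fundable_score <= p[1] < must_fund_score]

    awarded = list(must_fund)
    remaining = budget - len(awarded)
    if remaining > 0:
        # Fill the leftover slots by random draw from the middle pile,
        # rather than by fine-grained (and noisy) ranking.
        awarded += random.sample(fundable, min(remaining, len(fundable)))
    return awarded

# Hypothetical example: six proposals scored 0-10, budget of three awards.
reviewed = [("A", 9.5), ("B", 8.8), ("C", 7.9),
            ("D", 7.2), ("E", 6.5), ("F", 3.0)]
print(partial_lottery(reviewed, must_fund_score=9.0,
                      fundable_score=6.0, budget=3))
```

One appeal of this design is that applicants in the broad middle gain little from polishing a proposal past the ‘fundable’ threshold, which is exactly the grantsmanship arms race the lottery is meant to defuse.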

I also like the idea of grant officers doing more scouting. You could imagine that they would go to meetings, they'd talk to people, they'd hear the best talks by bright, young postdocs, they'd ask people who's doing neat work, who's got great ideas, and then they'd approach those people and say, ‘Hey, we're interested in betting on you. Write one page describing what you're interested in doing, and let's just make sure that's aligned.’ And then they just say, ‘Okay, here's some funding.’ A lot of the philanthropic foundations that fund science already do this and I think it's a very efficient way to do things. You can use economic theory to kind of prove theorems about relative efficiency of these different models and so forth.

That's maybe a good example of the kind of work we've done.

[00:08:35] Samara Greenwood: Yes, that's really interesting. I'm interested in what working scientists, as well as other scholars who have come up against these kinds of things, might find useful and practical in your work in this area?

[00:08:46] Carl Bergstrom: A lot of the work I do, I think, helps people better understand the systems we're part of, and it helps us think in constructive ways about how we might choose to be part of changes. So, if the incentives in place lead people to publish a lot of very surprising work that actually generates a lot of false positives and isn't very reliable, we can think about, ‘Well, what are good ways to change that? Should we change the threshold for statistical significance? No, that doesn't really work. Okay, what other changes could we make?’ So we can help scientists do that.

I think it's somewhat empowering to understand the system that you are part of. I find people tend to be very interested in learning and thinking about, ‘Oh, that's why we do that. That makes sense. Oh, that's interesting.’

As you understand these things better, it helps you choose your own strategies for how you want to manage your career, and to see, of all the things you really don't want to be doing, which ones are essential and which ones aren't and are just done by custom.


One thing that's tremendously important about doing this kind of work involves the public understanding of science. We really want the public to understand how science works and why it works. And why it works is not because people sit in a lab by themselves and go through this abductive loop that we teach - the simple Popperian caricature of, ‘I looked at data, and then I made a hypothesis, and then I tested the hypothesis, and then I analyzed the data, and then I refined the hypothesis, and then I went around and round.’

That's something a single person can do by themselves in a lab without ever interacting. But the whole thing about science is that it's this massively parallel organization of human cognitive effort. The reason it works is because of these social elements, because of the way these institutions organize that cognitive effort, and because of the incentives in place to make sure that if science is heading down the wrong path, there are strong incentives for people to correct that path - these virtues of organized skepticism and all of this.

The more we understand about how science actually works, the better we can explain to the public what it is we do and why we can actually trust the science, instead of sloganeering or giving people the sort of caricatures we teach in high school or even, frankly, college.

We could get down to a much deeper understanding of how the social process of science works. We can figure out how to communicate about that. Then I think, ideally, we could live in a society where people have a better understanding of why what we do is valuable.


[00:11:06] Samara Greenwood: I don't know if it's your experience, but it's certainly mine being immersed in HPS - as you dig through these layers of what really happens in science, a really rich picture emerges, one that's actually more exciting than what we had before, with the sense that there are all these social elements, but they're not arbitrary, right? They work to produce something really interesting and exciting. And the people who do it are very exciting in themselves. I think that can sometimes be lost as well.

[00:11:31] Carl Bergstrom: That's a great point. Also, I'm super interested in how collective cognition works and how collective behavior works and how collective decision-making works. And science offers us this, while not completely perfect, profoundly successful model about how cognitive effort can be organized on massive scales where you have literally thousands of very, very smart people working together in concert to do something that no small group of them could have done by themselves.

[00:11:57] Samara Greenwood: Yes, definitely. Just changing topics slightly, you're also well known for your work on disinformation, which is, of course, such a growing area of concern today, but also a very complex research problem. So, I'm interested, what do you think are some of the most useful ways to study disinformation today or, some of the key challenges?

[00:12:15] Carl Bergstrom: It's very hard, right? We founded a research centre at the University of Washington in 2019 called the Center for an Informed Public. It's an interdisciplinary centre with faculty from a dozen departments around campus: computer science, law, communications, psychology, economics, a whole range of things. One of the major challenges we face is understanding disinformation, and misinformation as well - the difference being that misinformation is false, but not necessarily deliberately so, while disinformation is either constructed or spread in order to mislead people.

One of the places we've focused, because it's easy to collect data, is social media - also because it's really, really important in current society, a major information source for people. The other very nice thing about social media is that you can study not only what people are the passive recipients of, but also their own actions and behaviors. I could study what's being put on broadcast television, but it's very hard to study people's responses. Whereas on social media, it's all wrapped into one package: I can look not only at what information people are seeing, but, of that information, what they choose to share. That's really powerful.

But this is becoming more and more challenging. Social media companies are becoming extremely reluctant to allow any access to the information and data about what's being spread on their platforms. Twitter has essentially shut down access for academics. Facebook has always been restrictive and didn't provide as much as it had once intended to. Other platforms are entirely dark by design, so you just can't know what's going on there. This is a major conduit of misinformation and disinformation in the world, and we have essentially no access to measure it.

Our one option is to collaborate with the tech companies, but that gives them a tremendous amount of control over the research agenda and its direction. Even if they're not saying, ‘fake this study,’ they are saying, ‘let's look at the places where individual users are making things worse.’ They're not saying, ‘let's look at how our algorithm is making things terrible,’ right?


I think those are some of the most prominent challenges right now. The other prominent challenge, in the US, is that the field has become extremely politicized. If you're studying misinformation, you come under fire, essentially, from the American right.

There's this perception - a very inaccurate perception - that universities studying misinformation are actually doing it as a secret way to silence or censor conservative voices. While this is inaccurate, it seems to continue to have legs. So people come under a lot of fire and pressure and spurious lawsuits and all these kinds of things.

We spend a lot of time dealing with these kinds of challenges as well, and that's a real shame, because two or three years ago universities were realizing this was actually an essential research area that they needed to have on campus, and now it's become so politically charged that people are getting cold feet.

[00:15:02] Samara Greenwood: I know you also look at ways of dealing with misinformation. Maybe that's a slightly easier thing, even if we don't quite know all the ways it works. I'm interested, what do you consider to be some of the best strategies?

[00:15:12] Carl Bergstrom: I think there are a bunch of different ways a society can deal with misinformation. I tend to like strategies that are more bottom-up - let's not have, you know, a central ministry that decides what you can or can't say, because I just don't believe in that.

Bottom-up approaches, I think, are really, really good. One of the most powerful bottom-up ways to think about misinformation, and to combat it, is through education: just teaching people to understand the information environments they're in now. Our school system is still based on teaching people to operate in information environments that are largely print-based, which is completely different from what we face today. So: basic online literacy, digital literacy, critical thinking.

I think data literacy is enormously important. This is where I've put a lot of my own chips; it's where I'm trained, and where I have some credibility as a professor. So, at the University of Washington, my colleague Jevin West and I have been teaching a course called - I think I can say this on your podcast - ‘Calling Bullshit’. The course teaches people how to push back against the way data are used as a form of power to sell particular narratives. Everything's so quantified these days, decisions are supposed to be data-driven, and then we have this idea that, you know, the data don't lie.

But of course, people can fashion data into any narrative they want, and a lot of us are reluctant to push back, because data seem… well, they're not opinions. Also, the way they're analyzed feels arcane and technical: ‘I don't remember how this regression works, or I never learned it, or I don't know what that algorithm is.’

So we often just let the people who have the data tell us what we ought to be doing. That's a big mistake, because they can tell us any story they want. So what the course is about - and we wrote a book by the same name, Calling Bullshit, that lays this out for a broader audience - is really empowering people to say, ‘Hey, look, you don't need a master's degree in data science or statistics to look at these data, evaluate this person's claim, ask the right questions, and push back if it's not credible.’ You can continue to be part of the social discourse in a world where data have become so much more important.

I think that kind of education is tremendously important to help people deal with these kinds of false data driven narratives. There's a lot of other things we could do but that's been where I've put a lot of my chips.

[00:17:31] Samara Greenwood: I'm interested - what kind of feedback do you get from students of the course? Do they find that lots of interesting insights come out of it?

[00:17:37] Carl Bergstrom: The students love it. It's been enormously popular. When we first taught the course, registration opened to seniors at midnight, and it filled up in under a minute - 180 seats. It was the fastest anything had ever filled at the university. Students continue to love it, and it continues to fill up very quickly.

We teach 180 students every year, and I think what's really exciting for the students, partly, is that they feel really empowered, which is our whole aim. They start to recognize things, but not to conclude that everything is false - they don't become nihilists. It's very important for us to help people see that they can get to the truth of things, and that they can distinguish between good and bad information, not just think it's all bad.


But it's really fun. A few weeks into the course, I'll come in trying to set up my lecture and - it happens every year - students will come up to me and say, ‘Hey, I know you're trying to set up your slides, but last night I was reading The New York Times and I found this. Isn't this that fallacy of selection bias we talked about?’ Then I'll look at it and say, ‘Yeah, it is,’ and they'll say, ‘I knew it. That's so cool. Like, how could they do that?’

It's really feeling empowered to spot this stuff. 

So much of the class is really about creating habits of mind, just having a good bullshit detector, right?


And you learn that you can't just say, ‘Oh, it's The New York Times, so everything in it is true.’ Because unfortunately, whether by design or by ignorance, a lot of misleading statistical claims get made there and everywhere else. Again, you don't have to be a statistician to spot them.

Just as our students get really excited about that, the other thing they get really excited about is when we put them in the position of the person trying to spin the story in a particular direction using data. We say, ‘Here's a data set. Now, one group: from this data set, argue that the US has the best healthcare system in the world. The other group: argue that it's a complete flop.’ And they see how easy it is to spin that story using data and data graphics.

They come back and they say, you know, ‘Professor Bergstrom, is it okay to throw out Canada?’ I'll say, ‘Your job is to try to make an argument that the US is great. If Canada's getting in your way, come up with a silly reason to throw it out.’


They think that's really funny, and then they'll throw it out and come up with this argument. Then they'll say, ‘This argument is completely wrong, but it looks really convincing.’ And I'll be like, ‘Yes, exactly, that's the point.’ They'll say, ‘But we're reading this stuff in the news all the time.’ It's like, ‘Yes, that's the point.’ I think people find that realization really exciting.

And again, it's empowering, right? To understand that you can spot it and you can start to sort among your media sources and pick the ones that you don't think are doing this to you very much.

[00:20:04] Samara Greenwood: Yes, excellent point. Okay, for my last question, in your lecture earlier today, which I attended and which was fabulous, you said that you believe that the study of collective human behavior should be prioritized as a crisis discipline in today's environment.

I'd love it if you could tell our listeners a little bit about what you mean by this.

[00:20:22] Carl Bergstrom: Yes, there's a lot of unpacking to do there because we have to talk about what collective human behavior is and we have to talk about what a crisis discipline is.

So, collective behavior is the study of how any organism does things in groups. Of course, organisms do a lot of information processing themselves. They get sensations about the environment and make decisions. But a lot of organisms make decisions in groups as well. So, a school of fish is processing information. Every individual fish is processing information, but they're also interacting with each other, watching each other's movements, maybe signalling in various ways.

As a collective, they end up being better at processing information than any individual fish. This comes back to ‘science’, like we were talking about. That's also collective behavior, and it's a great example of how you organize collective cognitive effort. We can do things collectively that no individual could do.

You know, even if somebody understands every element of how the vaccine design worked, they probably don't understand how the computer they used for the protein-folding design worked, right? It is this massively parallel activity. Humans do a ton of this. We're a species that spends a lot of time acquiring information. We're information foragers. We do a lot of this in a communal way: by talking, by gossiping, by reading and writing and communicating and signalling.

So, there's a whole flow of information that goes through human societies on a continual basis. We use that information to make decisions and then - if we all have different preferences - we aggregate those decisions in various ways. Understanding how that information flow links up to the decisions that groups end up making is very important and largely unsolved.


I think that's a really important problem in society, especially in light of the fact that we've radically changed the way information travels. Until 20 or 30 years ago, there was nothing like the internet, in which we are all participants, constantly broadcasting our thoughts, constantly choosing, on behalf of our peers, what information we think is worth reading by forwarding things on social media, and all of that.

And so, the whole nature of information flows among human societies has been upended in the last 30 years. That means that people's beliefs are now formed in very, very different ways that are results of complex dynamical processes. The collective decisions that we make are now happening in very, very different ways.

Some, though not all, of the things that have been happening in society over the last 20 years - the increasing polarization we see, various trends in political and social thought - may be driven quite substantially by the affordances of the technological platforms on which we now share most of our information. But we don't really know how; we don't have a theory of that in any substantial detail. That's human collective behavior.

A crisis discipline is the applied analogue to some basic science field. It's the ‘conservation biology’ to the field of ‘ecology’, it's ‘climate science’ to the field of ‘atmospheric science’ or something like this. Not only is it applied, but it's dealing with a system that is in crisis. So, it's a system that is dysregulated or dysfunctional. It's a system that's probably degrading and getting worse. It's a system where we don't have time to step back and study it for 30 years, and then come do something about it.

In a crisis discipline, we find ourselves in this very awkward position where the system is falling apart, or going in directions we don't like. We don't have a good theory of how that complex system operates, and yet we can't afford to wait. We've got to act now.


I think that this is very much happening in our global information systems. In order to solve any of these other crises that we worry about, you know, everything from climate change to pandemics to racism to war, we have to have access to good information so that people can make good decisions for everybody's collective benefit. If we don't, we're going to fail across the board with these things. So, we can't afford to step back, study this for 30 years and then say, ‘Okay, now we think we should make social network platforms that look this way.’

We've got to be intervening now, and so this requires a very different kind of approach to science - a very different kind of applied science that's very experimental, that has to act on best guesses under uncertainty, and that probably has to embrace a diversity of approaches to see what works. It has to be radically transdisciplinary, in the sense that it has to bring in people from every department on a college campus: people thinking about the legality, people thinking about the ethics, people thinking about the psychology, people thinking about the technology, all of these different components.

We have to be talking to each other and trying to figure out what we can try toward creating information systems that are better at transmitting accurate information about the world - information that supports human flourishing - instead of transmitting dishonest, disingenuous information that basically serves to attract people's attention, and that is ultimately fed to us because it convinces us to keep clicking on websites and driving ad sales for the people who provide them.

[00:25:28] Samara Greenwood: Absolutely. That sounds like a very good movement if we can get that going.

[00:25:33] Carl Bergstrom: It's a very big ask, but I think it's a very, very important problem. We've gotten some traction with the UN to start studying this problem - to work with us, perhaps, on this issue of actually providing access to information: an information-gathering capacity, so we can better understand what's happening on social media. Who's seeing misinformation? How much of it is there? How does it spread?

But there are so many different pieces to this puzzle, and I think the kind of thing we ought to be doing on university campuses is coming together across these disciplinary boundaries and trying to solve it before it's too late.


[00:26:07] Samara Greenwood: Thank you so much. That's an excellent spot to finish. Thank you so much for agreeing to be on the podcast. It's been great talking with you today, Carl.

[00:26:13] Carl Bergstrom: I had a really nice time. Thank you.
