The HPS Podcast - Conversations from History, Philosophy and Social Studies of Science

S5 E4 - Heather Douglas on Rethinking Science’s Social Contract

HPSUniMelb.org Season 5 Episode 4

This week on The HPS Podcast, Thomas Spiteri is in conversation with internationally recognised philosopher of science and professor at Michigan State University, Heather Douglas. Heather’s work has transformed how philosophers and scientists think about values, responsibility, and the relationship between science and society.

In recognition of her contributions, she has been honoured as a Fellow of the American Association for the Advancement of Science (AAAS) and the Institute for Science, Society, and Policy at the University of Ottawa, and has held senior fellowships at the Center for Philosophy of Science at the University of Pittsburgh and, most recently, with the SOCRATES Group at Leibniz Universität Hannover.

In this episode, Douglas:

  • Shares her intellectual journey, from early interdisciplinary studies to her philosophical work on scientific responsibility, values, and policy
  • Explains how the twentieth-century “social contract” for science emerged—shaping the distinction between basic and applied research, determining how science is funded, and insulating scientists from broader social accountability
  • Examines the enduring appeal of the “value-free ideal” and why this model is increasingly challenged by contemporary social and ethical realities
  • Discusses the pressures that have exposed the limitations of the old social contract for science, including Cold War funding dynamics, issues of public trust, and debates over dual-use research
  • Sets out her vision for a new social contract for science—one that recognises the unavoidable role of values in research, makes public trust and inclusivity central, and supports scientists through stronger institutional structures
  • Offers practical proposals for reforming science funding, governance, and accountability — arguing that only a more open, responsive, and democratically engaged science can meet the challenges of the twenty-first century


Thanks for listening to The HPS Podcast. You can find more about us on our website, Bluesky, Instagram and Facebook feeds.

This podcast would not be possible without the support of the School of Historical and Philosophical Studies at the University of Melbourne and the Hansen Little Public Humanities Grant scheme.

Music by ComaStudio.
Website: hpsunimelb.org

Welcome to the HPS Podcast. A podcast where we chat all things history, philosophy, and social studies of science. Today I am thrilled to be joined by Heather Douglas, a leading philosopher of science whose work has had a profound impact on how we understand the ethical responsibilities of scientific practice. Her research has played a central role in shaping international conversations about values and science, public trust, and the relationship between science and society.

Heather is a professor in the Department of Philosophy at Michigan State University. She was recognised as a Fellow of the American Association for the Advancement of Science in 2016 and is also a fellow of the Institute for Science, Society and Policy at the University of Ottawa. Heather has also held senior fellowships at the Center for Philosophy of Science at the University of Pittsburgh and, most recently, with the SOCRATES Group at Leibniz Universität Hannover.

She currently serves on the US National Academies of Sciences, Engineering, and Medicine consensus study committee revising On Being a Scientist, an updated, online guide to the responsible and ethical conduct of research.

In this episode, we explore the origins and legacy of the old social contract for science, a tacit agreement that defined science's place in society, separated basic from applied research, and allowed scientists to remain largely insulated from broader social and ethical responsibility. Drawing on both historical examples and contemporary challenges, Heather explains why this contract, rooted in the value free ideal, can no longer address the realities of today's complex world.

She then turns toward a new social contract, sharing practical suggestions for reforming funding structures, supporting scientists as they navigate ethical uncertainty, and making science more inclusive and democratically legitimate.

____________________

Thomas Spiteri: Heather, thank you so much for joining us today. Can you tell us about your path into the history and philosophy of science? What first drew you to the field?

Heather Douglas: When I was an undergraduate, I was studying the concept of time through physics, philosophy, literature, and history. Partly because I really couldn't choose between physics, philosophy, literature, and history at the time.

So, I created this special undergraduate program at the University of Delaware, which allowed students to do that. It was a Dean scholar kind of program, and I got to do all of it. I did not get anywhere understanding time, just so you know, like no insights. But in that process, I took a course from Sandra Harding at the University of Delaware my senior year, and I was struggling to figure out where to go to grad school because I didn't really want to give up the science or the philosophy or the history; and she's like, you know, there are history and philosophy of science programs. 

I thought, what? I can keep doing all of them together! Oh, it was so perfect. I also knew by that point in time that I loved physics, but I was really terrible in the lab. I didn't have the fiddly mechanical abilities you really needed to make an experiment work. I didn't have the patience to work with the equipment. I wanted to have the theoretical insights. So, it was always more the theoretical, philosophical aspects of physics that interested me. I applied to history and philosophy of science programs, and chose between Indiana and Pittsburgh. I chose Pittsburgh because Indiana looks like Delaware with hills, and I didn't want to do that again! So, I went to Pittsburgh because it was in a city, and I hadn't lived in a city before.

Thomas Spiteri: And how was Pittsburgh? 

Heather Douglas: It was fantastic. It was a fascinating city. I still love that city, it's so historically interesting; so architecturally interesting; culturally really interesting.

The HPS program at first was a big shock. I had read Kuhn in a different class my senior year – that was my first philosophy of science text – and I thought that was philosophy of science. Coming in and reading Hempel and Carnap with Wesley Salmon was a bit of a shock. That was: "What! This is philosophy of science? Okay."

I also had been studying feminist philosophy of science with Sandra Harding, and that was not on the agenda. At first, I was like, what am I doing here? I was really interested in the history side of the history of science program, drawn more to that at first, and that was really wonderful. Took a lot of history classes, ended up taking courses in the rhetoric of science program one floor up to get the broader picture of things.

I was there when Tamara Horowitz taught the first feminist philosophy of science class in the department and discovered with us that there actually was something to it; it wasn't just ideology. I ended up finishing my PhD there.

Thomas Spiteri: You've become a leading voice on the role of values and responsibility in science and policy. How did your interest in these questions begin and what set you on this path? 

Heather Douglas: I was trying to figure out what to write my dissertation on, and initially I was going to write on early 20th century experimental physics and what constituted a good run in physics and how that judgment was made. But I had a bit of a crisis about that because I thought, wow, you know, like maybe a dozen people in the world would care about this. I thought what am I doing pursuing something that doesn't really matter to the general world. It seemed problematic to me at the time. I took some time off and worked for an NGO in Pittsburgh, which was great.

It was during that time that I met someone who worked for the Pennsylvania Utility Commission, and he started talking to me about risk analysis, where science and policy met. I thought, wait, there's a place where science and policy meet? That I could study? That's very exciting! Why don't I do that? I talked to Peter Machamer, the chair of the department at the time, and I said, I want to come back and finish, but I want to work on science and policy and do it through the lens of risk analysis. He was like, okay, we haven't done that before, but we'll do it, we'll figure it out. I thought, fantastic!

Looking at risk analysis, one of the key issues seemed to be how to handle expert disagreement in the policy making process. I decided to write a dissertation trying to figure out why experts disagreed, and I wanted to pick a case where there had been a fair amount of studies done. Where there was a good amount of empirical information and there was enough time that you would think consensus would've formed. Looking at previous historical cases, give it three to four decades, consensus emerges. That hadn't happened. I got really interested in that kind of case, and I was going to do it on climate change, but I realised the evidential basis of climate change was really complicated and that it would take three or four dissertations to get to the point where I could see the sources of expert disagreement. 

I picked dioxin because that was a really well studied case. There were very robust data sets from animal studies, from biochemistry, from epidemiological studies studying industrial accidents, and still experts were several orders of magnitude apart on what constituted a dangerous dose. I thought this is perfect, I can really see what's going on. In addition, European standards were more lax than the US standards, which was the inverse of what one would expect because of the precautionary principle. I thought here is a really interesting case. I started looking at dioxin studies and wrote about dioxin, even though I had no background in biology and just had to learn all the science as I went, but that was fine. 

Thomas Spiteri: Today we will be focusing on a more recent paper that you co-authored with T.Y. Branch about the social contract for science and the value free ideal. For those who might be new to this concept, could you explain first what is meant by the social contract for science? What does that term mean?

Heather Douglas: The social contract for science is a term that has been used by science policy folks for decades. What they mean by that is a tacit agreement between the scientific community and the society that supports the scientific community. 

It's not that representatives of science and representatives of government ever sat down and signed a document. It's instead the agreement about what the place and role of science is in the society, how the society will support the scientific endeavour, and why science is supported within that society. The agreement that we currently have arose in the mid-20th-century context. That's what the paper is about.

Thomas Spiteri: The value free ideal, you both argue, didn't just arise in isolation. It was linked to a broader social contract after World War II. Can you tell us what this arrangement looked like and how it shaped both science policy and philosophical thinking at the time? 

Heather Douglas: Yeah. I want to say a little bit more about where the social contract came from. In my view, the social contract consists of three components. The first is a distinction between basic and applied research. This was very important for members of the Society for Freedom in Science, which formed after J.D. Bernal's 1939 work, The Social Function of Science. Bernal was a Marxist and was following a lot of thinking about science in the Soviet Union, where science was supposed to be expressly aimed at the public good. He called on scientists to work with governments to collaboratively develop science expressly for the public good. This was anathema to folks like Michael Polanyi and John Baker, and folks in the US like Percy Bridgman, who thought: no, it's not the job of scientists to aim at the public good, that is someone else's job – we're not ever sure whose job that is, but it's someone else's job – and the job of a scientist is solely to pursue truth for its own sake.

Crucial to that is this distinction between basic and applied research, because of course people are doing applied research. They can have those social goals; they're applying things. But when you're pursuing basic – or, in their earlier term, pure – research, your goal is solely truth for its own sake, and that is what scientists primarily should be doing; that is the calling of the scientist. So, there's the basic–applied distinction; there's the idea that if you're pursuing basic science, all you should be thinking about is truth – it's not your job to worry about the impact of your work on society. There's an absolution from the usual responsibilities we have to think about the impact of our work on society. Because we're human beings, our intentions matter, but so does worrying about whether or not we're reckless or negligent. Then there's also the component that that's the science that expressly needs public support, because it is the basic science – the science that's too early to actually have societal impact – that commercial entities will not fund.

Prior to my recent formulation, the social contract was thought of in terms of public money for basic research: basic research will eventually lead to public goods, that kind of transactional thing. But I noticed this additional responsibility component was crucial for people like Polanyi and Bridgman, and that they articulated it very strongly – and that those structures actually shaped science policy throughout the 20th century in bizarre and amazing ways. As for the value free ideal: if you don't think you are supposed to think about the impact of your work on society when you're doing basic research, then there are no social and ethical values in your research. It's just epistemic. So of course it's value free – that's a consequence of that move.

Thomas Spiteri: Can you say more about why the value free ideal was so compelling? It seems quite obvious to say that there are values in the work scientists do, and were doing, especially if there was an applied component. Why was it so compelling?

Heather Douglas: I think it was compelling for two reasons. One is the sort of absolution from responsibility: 'it's not my job to think about any potential impact of my work on society, and so social and ethical values are irrelevant'. But that was made particularly attractive in an anti-communist culture that was gripping the US by 1948 with the onset of the Cold War, and strengthening under McCarthyism, because having values be part of science was very Marxist. Those who were arguing for that view were most often Marxists, and one wanted to place as much distance as possible in the US between Marxist views and one's own view. Further, if your work is value free, it can't possibly be communist, right? It's the get-out-of-McCarthyism-free card, right? Your work becomes absolved of political implications if it is operating under this kind of social contract and the value free ideal. That makes it really a safe haven for philosophers and scientists alike.

Thomas Spiteri: The value free ideal seems to still be very much present. Why do you think it's so difficult to move beyond this characterisation of science? Is it just a hangover from that fear that you described, the Red Scare? Or are there deeper reasons? 

Heather Douglas: I used to think the value free ideal was really the problem, but now I think it was the social contract for science that produced the value free ideal.

The social contract for science has only recently really begun to break down, and we're just at the moment of trying to reckon with what needs to replace it. We do need a new social contract, and it needs to have certain functions. Until we have a new social contract in place, if you are still thinking in terms of: first we fund basic research that produces applications; applications are done by somebody else; I'm doing basic research, it's too far away for me to even see what the implications of my work might be, surely my work is value free – if you're thinking that way about how science functions, it's very hard to see how you can embrace responsibility without damaging scientific integrity, which is a worry. It's very hard to see how you could fund science. What are the rationales for funding science? Within the context of the old social contract, all these things have answers.

What are the answers in our new context? I think until we grapple with that, the value free ideal will still seem like the safe thing to do because then, science is outside of politics, it's in its own safe sphere, we can keep it safe there, and everything can just keep working. Until we have a new social contract, everything feels very risky by opening up the discussion of responsibility and values. 

Thomas Spiteri: At what point did that social contract for science begin to break down, and what kind of pressures – social, political, environmental – contributed to the erosion of that model?

Heather Douglas: Some of them started as early as the 1960s, particularly around the idea that you needed to fund basic research in order to get to applied. I'll tell you all a funny story. The military came to believe the linear model – first you fund basic research, then you get to applied – to such an extent that by the 1950s and early 1960s they were beginning to require all their contractors to have basic research labs. Imagine: if you want to build things for the US military, you have to have a basic research lab as part of your staff to ensure that the continual insights of science keep happening. It becomes such a problem that a lot of academic departments say: we can't compete with all these basic research labs that are popping up, we can't even hire new physicists anymore. And at the same time, it's getting more and more expensive.

The military actually funded a project to try to figure out whether or not basic research really is necessary for applied research. They looked at a dozen military technologies from the previous 20 years and found that the vast majority of insights came from applied research and engineering, and that basic research was important, but only basic research from decades past. The military's reaction was: we don't need these basic research labs anymore! In fact, by the late sixties, Congress had banned the military from funding things that were not directly relevant; it was considered a huge waste of money.

That began to erode the linear model a bit. In the 1970s, we had various crises about whether or not science was actually beneficial. People had worried about science producing dangerous weapons – in World War I with poison gas, in World War II with the atomic bomb. In the Vietnam War you still have Agent Orange, but by the early seventies you have concern over non-weaponised things like DDT and PCBs and the thalidomide crisis – all kinds of scientific products that had caused environmental, physical, and social harms – and increasing worries about studies of race and IQ, which we still, depressingly, have with us. That was raised in the 1970s, and so people began to wonder: hey, maybe we really do need to try to steer science towards more socially beneficial ends. The scientific community, though, really pushed back against those ideas and made it optional: like, if you want to, you can go join Pugwash and work for science as a social good, or go work on advisory committees and work on science for a social good. If you're still doing basic research, you get the free pass.

It wasn't until concerns about dual use research arose after the 9/11 attacks and the anthrax attacks that fall that the social contract really finally breaks. That's because of the growing concern that research could be weaponisable – not because a scientist intends it, but because they discover something that a malicious actor could easily use for malicious purposes – and the need to worry about that. In the 2000s, people tried to circumscribe when those concerns would arise, but science is a process of discovery, and you never know when it's going to arise.

By around 2010, scientists had realised: oh, there's no way to sequester this. It doesn't belong to one field, it's any field – hopefully not your field, but it could be your field at some time – so you have to worry about how malicious actors might use your work. That suggests scientists always have to be responsible. You get statements by the AAAS and by the International Science Council saying that scientists are always responsible for the impact of their work on society, that with freedom comes responsibility. Now that pillar is down. The linear model was already declining. The basic–applied distinction, oddly enough, is still the bedrock of OECD science policy statistics. Scientists still think about it, but nobody believes it. If you ask people to define what distinguishes basic from applied research, it's a mess – and it's been a mess for 50 years. We're in this space where it doesn't work. The old contract doesn't work. That's when things get unsettled.

Thomas Spiteri: If the old model no longer serves us, what would a revised social contract for science include? How can it protect both the role of values and scientific integrity? 

Heather Douglas: Yeah, exactly. This is the thing that I've been trying to work on for the past four or five years, and I think I'm making some headway. One of the key things – I think one of the reasons why Percy Bridgman argued that scientists shouldn't have to worry about the impacts of their work – is that he thought it was too much, too demanding. There is a sense in which scientists aren't responsible for all impacts. They're only responsible for the foreseeable impacts, right? We have that circumscription on what responsibility requires. But there's also this worry about protecting science from politics.

The Society for Freedom in Science that Bridgman and Polanyi were involved with – they were looking at the case of Lysenko. They were looking at the way in which Stalin imprisoned and killed scientists who didn't pursue the kind of science that he thought they should be pursuing, and they were horrified – rightly. This is a very dangerous case. People who were thinking about these things wanted science to have some protection from political forces and political power. I think that is a serious concern.

I've been trying to figure out what constitutes illegitimate uses of political power on science – what I've called politicisation of science – because there are legitimate political influences on science. We have laws that are passed by politicians that require certain practices of scientists, like getting informed consent, or meeting biosafety levels to work with particular kinds of viruses, or it being illegal to deliberately pursue bioweapons. Those are political limitations on science. We also have political decisions about the public funding of science. The purse isn't infinite, so there are legitimate political influences on science. When is it illegitimate? I wrote a paper a few years ago, differentiating scientific inquiry from politics, that tries to lay out ways in which science gets politicised by importing a democratic norm into science improperly, where it doesn't fit, where it doesn't line up. Science and democracy have a lot of parallels, as Merton noticed, but they are not the same thing. The parallels are rough, and when you get to the fine-grained level, I think there are really important differences. Moving from a political space to a scientific inquiry space, I still want there to be a distinctive inquiry space where the norms are distinctive and different.

How do we protect scientific inquiry from politics? Make sure that any limits and restrictions on science are very clear. One of the things we're seeing right now with the Trump administration – or were seeing in the spring – is that they're beginning to create new sorts of policies for science, but a lot of them are unfortunately very vague. If you're on a science advisory body in the US, you now have to adequately consider alternative perspectives. What constitutes adequate consideration of alternative perspectives? It's actually a really vague standard, and with a vague standard you can then punish those you disagree with for failing the standard and not punish others who are also failing your vague standard.

Now you've opened the space up to political punishment, because standards can be imposed unequally. Deeply worrisome. It terrifies scientists working in these positions because they know there's a standard, they don't know what it means, and it could be used to hurt them. That's a way to politicise science, and that is a deep threat to scientific integrity. Then you have scientists who might say: we actually really think the evidence says X, but if we say that, boy, we're going to piss off our political overseers, who could now use this vague standard as an excuse to punish us if we say something they don't like. So let's say Y, even though that's not what we think, because it's safer. Scientific integrity is now out the door.

It's these sorts of structures that really need to be very carefully thought through: what is the relationship between a political system and your scientific culture? What is the space that needs to be protected, and how are we going to protect it – keeping science from being politicised, and thinking through the responsibilities more carefully, so that we actually support scientists in achieving responsibility?

Right now, a lot of responsible research training is actually compliance training. That's about accountability, not responsibility. I think making a distinction between accountability and responsibility is conceptually incredibly important here. There are a lot of scientific responsibilities that shouldn't have accountability structures attached to them, because accountability structures are how you punish people. If you have responsibilities that are also vague and unclear... I think scientists should not make the world worse; that's the minimum floor for doing good work.

We all can agree that there will be disagreements about what science makes the world worse, because we might disagree about what constitutes worse. If that's a responsibility standard, it absolutely cannot have accountability structures tied to it because those will be deployed unfairly and unequally with political power. That will politicise science and damage scientific integrity. If we don't have accountability for that, how do we help scientists? What kind of assistance do they need? What kind of institutional structures do they need to help them be responsible without holding them accountable? Those are the kinds of questions that I think we need to tackle, and the first step for that is to have an actual place for them to go for help. Right now, most institutions don't really have clear places for scientists to go for help. When they think: oh no, I might have been doing something that could make the world worse; what do I do with that? Who do I talk to about that? How do I try to keep that from happening?

Thomas Spiteri: How do you see this shift playing out in practical terms, especially when it comes to research funding and setting priorities?

Heather Douglas: I think we should really rethink our funding structures. Let's just start with the money. Scientists are really worrying about the money, where their money's going to come from. Instead of thinking about basic versus applied research, I'm starting to think about six different kinds of research that we should actually be thinking about.

They range from big science infrastructure, to mission directed research, where scientists are put in teams and directed to solve problems; to engaged public research, where scientists are working with segments of the public; to curiosity-based research, where scientists are just trying to figure something out 'because I think it would be cool to do it'; to regulatory research, which are the studies that are actually used to make regulatory decisions; to private interest research. I don't think that these things are ordered or captured by discipline. I think these six kinds exist across fields and can occur in any temporal ordering, but they should be funded differently.

Curiosity-based research, I think, should be funded mostly by lotteries, because we don't know what's going to work. There should probably be smaller grants, and we should distribute funds more widely, and we should do so randomly. We should have screens in place, but pretty minimal screens: screen out work that's aiming for new bioweapons, definitely. Screen out work that is trying to rehabilitate an old theory that we are pretty sure is dead, like perpetual motion machines – absolute quackery. Everything else goes through: anything that's remotely plausible and not clearly violating our limits, like the requirement to work with smallpox only at the appropriate biosafety levels.

It all goes in the lottery. If you're trying to figure something out, that's where your research goes, and good luck. It also would reduce grant proposal demands, because you wouldn't have to make all kinds of crazy promises about how your research will eventually save the world when what you really want to do is just figure out what a particular biochemical pathway is because you think it's really interesting. Once your research proposal is in the lottery, it can stay there for a couple of rounds, at least. If it doesn't get drawn right away, it's not like you have to reapply; it just stays there.
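To make the lottery mechanism concrete, here is a minimal illustrative sketch of a screened funding lottery. It is not part of Douglas's proposal; the screening flags, the number of awards per round, and the two-round persistence in the pool are assumptions chosen purely for the example.

```python
import random

# Minimal sketch of a screened funding lottery (hypothetical parameters).
# Proposals that pass minimal screens enter a pool; each round a fixed number
# of awards is drawn at random; unselected proposals stay in the pool for a
# couple of rounds, so applicants don't have to reapply immediately.

AWARDS_PER_ROUND = 3     # assumption: grants drawn each round
MAX_ROUNDS_IN_POOL = 2   # assumption: proposals persist for two rounds


def passes_screen(proposal: dict) -> bool:
    """Minimal screens: reject only clearly prohibited or dead-end work."""
    if proposal.get("aims_at_bioweapons"):
        return False
    if proposal.get("violates_biosafety_limits"):
        return False
    if proposal.get("rehabilitates_dead_theory"):  # e.g. perpetual motion
        return False
    return True  # everything remotely plausible goes into the lottery


def run_round(pool: list[dict], rng: random.Random):
    """Draw winners at random, then age and carry over the rest."""
    eligible = [p for p in pool if passes_screen(p)]
    winners = rng.sample(eligible, k=min(AWARDS_PER_ROUND, len(eligible)))
    carried = []
    for p in eligible:
        if p in winners:
            continue
        p["rounds_waited"] = p.get("rounds_waited", 0) + 1
        if p["rounds_waited"] < MAX_ROUNDS_IN_POOL:
            carried.append(p)  # stays in the pool; no need to reapply
    return winners, carried


if __name__ == "__main__":
    rng = random.Random(42)
    pool = [{"title": f"Proposal {i}"} for i in range(10)]
    winners, pool = run_round(pool, rng)
    print("Funded this round:", [p["title"] for p in winners])
    print("Carried to next round:", [p["title"] for p in pool])
```

The point of the sketch is the shape of the mechanism – minimal screens, random selection, and short persistence in the pool – not any particular parameter values.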

Engaged public research: you need to have segments of the public that you're collaborating with, and they need to be part of the proposal process and part of the evaluation process. That requires an interdisciplinary assessment panel, much like our traditional panel assessments but not confined to a single discipline, and you're going to need the public you're engaged with to provide feedback for assessment at the end.

For mission directed research, those are usually large public funds coming through an agency, and the scientist is going to give up some of their autonomy to be involved in those projects. You don't get to pursue your curiosity wherever it leads when you're on a mission directed research project. You have to stay focused on the project, but then you don't have to worry about applying for a grant; the funding's already been allocated for the particular project.

For regulatory research, I think we should have a money-laundering scheme. It should be privately funded, but the money should go through a public agency that then distributes it to the researchers doing the work, so the research can't be bent with cherry-picking or methodological scheming. There are so many ways to bend studies to get the results you want, and contract research organisations that produce the results their funders want are not serving the public interest and are not helpful for actual regulatory decisions.

Big science infrastructure is already largely democratically decided. But I think those are the cases where we could actually do science courts. If you're talking about a $10 billion investment, I think it would be really great to have an advocate and a critic argue the case in front of a jury over whether this is a good allocation of funds. I think it would be a really important and valuable public spectacle about the value of these kinds of things. It could also keep us from continually keeping big science infrastructure going when it's no longer serving the interests of science or of society, but only the interests of the scientists who now train all their postdocs and students at the facility and want to hire them there, and of the politicians who have the facility in their backyard and want to keep it funded. Everything becomes conflicted, and that's a problem.

Private interest research should mostly be funded by private interests. They can do what they want.

So anyway, six different kinds of funding. Notice that for curiosity-based research, you have this amazing freedom, but you're no longer trying to show that you're providing some public benefit. I think we can argue that kind of research is a very long-term cultural and societal investment. It usually takes two to three decades, at least, for anything to come out of it. We should also expect the majority of that research not to produce anything valuable, because that's what happens with actual science. That gives scientists a lot of freedom, and the lottery system protects that research from political machinations, because we acknowledge it's a random draw; we're not picking favourites. But then the scientists still bear the responsibility. We're not going to have accountability structures for that – and that's where the assistance in their home institutions is really valuable.

Thomas Spiteri: That's really interesting. This proposal to move beyond the basic–applied divide and fund research through multiple mechanisms, including lotteries, raises questions about value governance – how we govern values in science. In a context where scientific inquiry is inevitably shaped by diverse values, often marked by moral and political disagreement, are there mechanisms or institutional arrangements that ensure that the values shaping the science are both legitimate and consistent with scientific integrity, but somehow balanced? Particularly when research has policy implications and public trust is at stake.

Heather Douglas: I tend not to want to talk about balancing values, because I'm not sure a lot of values can be balanced, and I'm not sure that a balance between values might not be exactly the wrong place to be. In terms of the social contract for science, one of the things that I think is really important is to foster full diversity within the scientific community, so that people from all the different kinds of backgrounds and experiences and cultural perspectives are participating members, functioning scientists, within the scientific community. I want members of society to see people like themselves someplace in the scientific community. That, I think, is really important for public trust – that they see someone who is like themselves. Science should not be an elite institution; that is really central. I think that will also – all the work on diversity in science suggests it will – make the science better.

Yay, win-win, that one should be easy! We know it's really hard in practice – actually ensuring that people with very different backgrounds participate. In the US, kids from rural areas, especially low-income rural areas, tend not to pursue science, so there's a real problem when you have scientists speaking to the public and the public doesn't see themselves in the scientists. Then there are issues of gender and ethnicity and – wow. So that is a big part of how this should work in practice with a new social contract.

Then, if you have that diversity within science and this kind of range of mechanisms, public funds will go to different kinds of research agendas, especially within the curiosity-based research. If you have more pressing societal concerns and values and you want to direct research in particular ways, that's when you do the mission directed research.

This is where people might get a little uncomfortable: if you have a new government come in and they say, we really want to shift priorities and we want to have a mission directed research project on this issue over here – I'm thinking about vaccines in the US right now – they can do that. I think that is actually legitimate politically and scientifically, as long as it doesn't also then dictate the results. You still have to allow scientists to not be afraid to say: yeah, we didn't actually find any more evidence of any link between MMR vaccines and autism, still not finding it. I have no problem with the government doing another study about this issue; I suspect it's a waste of funds. As long as the results don't end up being distorted to serve the political masters – that's where the issue comes in. That's when you have to have the structures around mission directed research that still preserve scientists' ability to speak truth to power. They can't be punished for not coming up with the right answer, so to speak, or the desired answer.

Then, with engaged public research, we have an opportunity to distribute funds where scientists would be expressly exploring issues for particular communities – for their values, for their concerns – in whichever community we're talking about. That becomes the most explicitly value-inflected and democratically robust form of science.

The funding structures could support a really interesting kind of democratisation. But also again, when you're doing engaged public research, you don't just come up with the answer that your public wants. You're still doing research, and you might find, for example, that the thing they're worried about is not the cause of what appears to be the cancer cluster, but it's actually something else. Or maybe it's not an actual cluster. It turns out that what they thought were elevated rates turn out not to be elevated. That's a result. But if you are in a relationship with a community, those things can be communicated and discussed with them, instead of being like a pronouncement from on high. It's like no, actually, here's why we think this, and here's where we looked, and here's how we tested this, and this is what we're finding, and what do you all think about that? That's a very different kind of practice than just a declaration.

I'm still struggling to think clearly about what are the responsibilities of scientists and what kind of assistance they need to meet them? Yeah, I can think about accountability as distinct from responsibility, and that's very helpful. What kinds of accountability structures do we need? When is it clear enough and when is it weighty enough that we need new accountability structures, or do we need to change our accountability structures? Because the requirements are actually not as clear as we think they are. 

If you think about animal oversight committees in the US, the animal care standards are very clear. But when is the knowledge to be gained from a study worth the suffering you're imposing on the animal? That's really tricky. I don't think we have a clear standard, and I think one of the reasons why some people have criticised IACUCs is that different IACUCs come up with very different answers, because there aren't clear standards. Yet that's an accountability structure, so that's a problem.

These are, I think, really tricky, thorny philosophical issues. There's also thinking philosophically about the structure of the institutions and the governance of science – philosophers of science avoided thinking about anything other than the epistemics of science for so long. Now we're in the thick of thinking about the ethics, and the epistemics, and even the politics of science. Thank goodness we are there; we are treating science as the fully complex human practice that it is. But now we also need to think about the institutional structures philosophically, and what the demands there should be. It's hard, it's really hard. It's hard for me to wrap my brain around – I have a hard time thinking about this – and different countries have such different structures, legal structures. I didn't know this, but in the UK you need a licence to actually have animals in your lab; you have to be licensed.

Thomas Spiteri: I'm not sure about Australia. That's interesting. 

Heather Douglas: They have a licensing structure, so you can lose your licence to have animals in your lab. And I thought: what! That's a really interesting governance structure. I think there's so much to learn from cross-border comparisons, but that work is really hard. It's really hard to see across the border and understand what things are actually like someplace else. But that's where we have tremendous opportunities for understanding what does and doesn't work – why it works, why it doesn't work, what happens. That's really important for informing what should happen, because unless we see the possibilities, it's really hard to make the normative arguments.

Thomas Spiteri: Heather, thank you so much for a wonderful discussion. I'm sure our listeners, particularly as more and more researchers grapple with questions about the ethics of science and the responsibilities that come with doing research in today's world, will find this conversation incredibly valuable. These are issues that shape not just the individual career, but the future of science itself. Your insights here will be, I'm sure, very important for anyone navigating this complex landscape. So, thank you very much. 

Heather Douglas: Thank you for your work. This was a great set of questions and a lovely conversation, so thank you for having me.

____________________

Stay connected with us on social media, including Bluesky, for updates, extras, and further discussion. And finally, this podcast would not be possible without the support of the School of Historical and Philosophical Studies at the University of Melbourne and the Hansen Little Public Humanities grant scheme.

We look forward to having you back again next time. 

Transcribed by Christine Polowyj


Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

  • The P-Value Podcast – Rachael Brown
  • Let's Talk SciComm – Unimelb SciComm
  • Time to Eat the Dogs – Michael Robinson, historian of science and exploration
  • Nullius in Verba – Smriti Mehta and Daniël Lakens
  • Narrative Now
  • On Humans – Ilari Mäkelä
  • Simplifying Complexity – Sean Brady from Brady Heywood