The HPS Podcast - Conversations from History, Philosophy and Social Studies of Science
Leading scholars in History, Philosophy and Social Studies of Science (HPS) introduce contemporary topics for a general audience. Developed by scholars and students in the HPS program at the University of Melbourne.
Producers and Hosts: Samara Greenwood and Carmelina Contarino.
Season Four Now Out. New Episodes EVERY THURSDAY.
More information on the podcast can be found at hpsunimelb.org
S4 Ep 11 - Redux: Fiona Fidler on 'Collective Objectivity'
"It wouldn’t make sense to leave the entire burden of upholding objectivity in science on the shoulders of fallible individuals, right?" Prof. Fiona Fidler
Today, we return to one of our favourite episodes, with the person who first came up with the idea for our podcast – Professor Fiona Fidler.
Fiona is head of our History and Philosophy of Science Program at the University of Melbourne and co-lead of the MetaMelb research initiative.
In this episode, Fiona first discusses her early move from psychology to HPS, prompted by the ‘dodgy’ statistical methods she found prevalent across many sciences. She has since dedicated her career to studying scientific practice to help improve confidence in scientific claims.
Fiona introduces us to the concept of ‘Collective Objectivity’. Following from an earlier podcast on 'Values in Science' by Rachel Brown, Fiona discusses how contemporary understandings of objectivity have become more sophisticated.
Rather than viewing objectivity in science as solely the responsibility of individuals, today we understand there are strong social dimensions to ensuring scientific outcomes are not unduly biased. As Fiona discusses, this requires not only diversity in 'who does science', so that multiple perspectives are incorporated, but also multiple collective review mechanisms to ensure we are developing truly robust, reliable, objective outcomes.
A full transcript of this episode can be found below.
Relevant links:
- Profile: Prof Fiona Fidler (unimelb.edu.au)
- Blog Post: MetaMelb – A New Research Initiative (hpsunimelb.org)
- Website: MetaMelb research group
- Stanford Encyclopedia of Philosophy - Objectivity as a Feature of Scientific Communities & Objectivity as a Social Process
- What is P Hacking: Methods & Best Practices
Thanks for listening to The HPS Podcast with current producers Samara Greenwood and Carmelina Contarino. You can find more about us on our blog, website, Bluesky, Twitter, Instagram and Facebook feeds. Music by ComaStudio.
This podcast would not be possible without the support of the School of Historical and Philosophical Studies at the University of Melbourne.
Transcript of Fiona Fidler on 'Collective Objectivity'
Welcome back to the HPS podcast, where we discuss all things History, Philosophy and Social Studies of Science. I am your host, Samara Greenwood.
Today, I am thrilled to be highlighting one of our favourite episodes with special guest, Professor Fiona Fidler, in which she discusses the topic of ‘collective objectivity’.
As well as being head of our History and Philosophy of Science program here at the University of Melbourne, Fiona is co-lead of the MetaMelb Research Initiative, an interdisciplinary metascience research group which investigates scientific practice to help improve confidence in scientific claims.
In the first part of the interview, Fiona talks about her early move from psychology to history and philosophy of science after she became aware of the ‘dodgy’ statistical methods prevalent across many sciences.
Common unsound research practices include ‘cherry-picking’, where data is selectively presented in order to suggest statistical significance, and ‘p-hacking’, where data is inappropriately analysed and reanalysed until patterns that can be presented as significant emerge.
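To make the p-hacking pattern concrete, here is a minimal simulation sketch (not from the episode; it assumes Python with NumPy and SciPy). It runs many 'studies' in which there is no true effect, tests several outcomes per study, and reports only the best-looking result, which pushes the false positive rate well above the nominal 5%.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def p_hacked_study(n_outcomes=10, n_per_group=30):
    """One simulated study with NO true effect: test several outcomes,
    then report only the smallest p-value (a simple form of p-hacking)."""
    p_values = []
    for _ in range(n_outcomes):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(0.0, 1.0, n_per_group)  # same distribution: the null is true
        p_values.append(ttest_ind(control, treatment).pvalue)
    return min(p_values)

# Across many such studies the nominal 5% error rate is badly inflated,
# even though every underlying effect is exactly zero.
false_positive_rate = np.mean([p_hacked_study() < 0.05 for _ in range(2000)])
print(f"Share of null studies reporting p < 0.05: {false_positive_rate:.2f}")
```

With ten outcomes per study, roughly 40% of these null studies clear the 0.05 threshold at least once.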
Samara Greenwood: Hello, Fiona. Welcome finally to your turn on the HPS Podcast.
Fiona Fidler: Thank you. I'm very happy to finally be here.
Samara Greenwood: Now, first, how did you find your way to history and philosophy of science?
Fiona Fidler: I came to HPS from psychology. So it was in 1994 and I was doing my first independent research project in the honours program in psychology, and over the course of my undergraduate degree, I had heard a lot about the idea of statistical power as a method for determining sample size for experimental studies. [00:02:00]
This was all through my undergraduate stats classes, and so I thought I should do this in my own project. But the calculation was a bit harder than I anticipated because my experimental design for this real study was more complicated than the contrived or oversimplified designs we were working with in the stats lectures, and I couldn't do it myself. So, I approached various people for assistance with this, but it turned out that none of these people actually did calculations like this in their own work. I was repeatedly told not to worry about it, to just use the same sample size as other studies in the literature and so on. The inconsistency between what I was taught about statistical methods and what was actually practised struck me.
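For readers unfamiliar with the power calculation Fiona describes, here is a minimal illustrative sketch (an editorial aside, assuming Python with statsmodels, not a tool mentioned in the episode). It solves for the per-group sample size needed to detect a medium effect in a simple two-group comparison, the kind of textbook design that is straightforward in stats class but much harder once real designs get more complicated.

```python
# Minimal power analysis sketch: how big a sample do we need per group to
# detect a medium standardised effect (Cohen's d = 0.5) with a two-group t-test?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # assumed effect size
                                   alpha=0.05,       # significance level
                                   power=0.8)        # desired statistical power
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```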
So that was the first thing. This inconsistency. But I had a deadline, I wanted my degree, and so I went ahead and did my own study without it. And then during the data collection process, I encountered almost at every turn what we now call questionable research practices. [00:03:00]
I cherry-picked, I p-hacked, and then instead of being penalised for that, I got an award.
And I thought, without any of the concepts or technical language I've used just now, “this all seems a bit dodgy. Maybe it's not proper science? I wonder what that is?”
I stuck around in psychology for another year or so after honours trying to make sense of this unease. But it became increasingly clear to me that there was no way to pursue these types of questions from inside the discipline. So, I found the HPS program at Melbourne Uni, which seemed like the kind of weird place that people who've stumbled into problems in their own disciplines go to. And then the big moment for me was when I found Neil Thomason, who was teaching a class on the philosophy of statistics and inference. And that was that.
I know Kristian Camilleri mentioned Neil too, amongst some other great teachers and influences he had in HPS. And Neil went on to become my PhD supervisor. [00:04:00]
I had to take a whole lot of new coursework, basically redoing an undergraduate major before I could enrol in the PhD. But then my PhD examined those statistical problems related to Null Hypothesis Testing and Statistical Power, and I eventually finished it in 2005.
Samara Greenwood: Excellent. And did you then stay in HPS?
Fiona Fidler: Not exactly.
While I was studying those problems in psychology during my PhD, I realised that the same problems with statistical power and publication bias existed in medicine. And then I realised that they also existed in ecology, and it was the ecology part that particularly intrigued me at the time.
Neil Thomason, my PhD supervisor who I've already mentioned, and Mark Burgman, who was a professor in ecology, had this ongoing argument about where the problems were worse - in which discipline - psychology or ecology. Mark and Neil would have these regular lunches together and have fights about whether statistical power was lower in psychology than it was in ecology. [00:05:00]
I think Mark eventually won these debates with stories about logging impact studies in forestry that were so underpowered you could literally see fields of dead owls and the studies would still report ‘no significant impact’.
In the end, the two of them, Neil and Mark, together with Geoff Cumming, a quantitative psychologist famous for his work on estimation and confidence intervals, wrote a grant application to settle their bet about which discipline had the worse problems, and they were successful in getting that grant.
And working on that interdisciplinary project became my first job. It was a great team, and I think to this day, one of the best examples of interdisciplinary work that I've been part of. And I kind of fell in love with working in ecology and environmental science labs because the people are so good and smart and so good at interdisciplinary collaborations.
Over the decade that I spent in environmental science, I worked mainly on projects to reduce bias in expert elicitation. When you need to make an urgent conservation or environmental decision, there's no time to collect data. You rely on experts to forecast what they think will happen to a species population, or what will happen after some intervention. [00:06:00]
But how you ask the experts about probability and uncertainty, the way you ask those questions, makes a big difference to the answers you get. So, we got interested in things like structured deliberation and decision protocols.
So, in a very long answer to your question, after my PhD, I didn't exactly stay in HPS. I worked in ecology and environmental science centres for over a decade. I also did a postdoc back in a psychology department for part of that time before eventually making my way back to HPS in about 2018.
Samara Greenwood: Excellent. You now work in Metascience and Metaresearch. Could you describe a bit for us what that involves?
Fiona Fidler: Sure.
Metascience is an old term which, over the last decade, has acquired a slightly new meaning and now refers to, well, let's go with an ‘interdisciplinary community of practice’ that has grown up in response to the replication crisis. You almost touched on this in a previous episode with Fallon, when you were talking about error detection in medicine and how we go about correcting the record. [00:07:00]
I like to think of Metascience as a kind of ‘interventionist’ HPS. So, there are problems in scientific practice like the problems with statistical practice that I've mentioned already, and publication practices which result in systematic biases, like an inflation of false positives in the literature.
In Metascience, we document problems like this, but we also create intervention and evaluation programs for them - like alternative methods of peer review, which our long-running repliCATS Project focuses on, and which you can read about in the HPS blog associated with this podcast!
There are other definitions of Metascience around that I'm not so fond of. For example, it's not uncommon to hear Metascience defined as ‘the science of science’. I think this is meant to capture, I guess, the idea that we often use more quantitative tools and techniques than traditional HPS or STS. [00:08:00]
But my feeling about Metascience is that it should not be defined by its methods. And in calling it ‘the science of science’, we rule out contributions from philosophers or sociologists or historians of science and so on, when we actually desperately need those people to get into the program.
So Metascience, instead of being defined by its methods, should be defined by its goals, which include intervening to improve science, and undertaking rigorous evaluation of those interventions.
Samara Greenwood: In the next part of our interview, Fiona introduces us to the concept of collective objectivity.
As Rachel Brown mentioned in a previous episode of the podcast, it is now understood that it is neither possible nor desirable for science to be value free. Humans cannot view nature from a purely objective or value free position. There is no possible view from nowhere. [00:09:00]
Inevitably, scientists bring to their work different viewpoints, experiences, perspectives, and values, which - when used appropriately - are helpful in illuminating multiple aspects of complex natural phenomena, as well as the shadows in the partial views of others.
The philosopher of science Helen Longino has called this more sophisticated understanding of the social aspect of good, reliable science ‘intersubjective objectivity’, or ‘collective objectivity’.
Samara Greenwood: Your topic today is collective objectivity. Could you first give us a brief introduction to collective objectivity and how it differs from the more classic conception of individual objectivity?
Fiona Fidler: Well, objectivity is an everyday term, but maybe one that needs a deeper understanding than we commonly afford it.
I've chosen this as my concept to talk about based on a recent encounter I had at the SIPS conference in Italy. That's the ‘Society for Improving Psychological Science’ conference. I was in a session there about subjectivity in science. And Hi Nina and Rin, if you're listening, that was your session. [00:10:00]
Now, we know that science is done by people who have values, who make mistakes, and so on. Indeed, you recently had an episode of this podcast with Rachel Brown discussing Values in Science, and that ‘Value-Free Ideal’ that Rachel was talking about is obviously intertwined with the ideal of ‘Objectivity’ in science. When scientists strive for objectivity, there are certain processes we implement to reduce bias or to minimize our influence on the study. These are things like blinded or double-blinded experiments or blinded data analysis - where we hide from ourselves which variable we're statistically analysing so that our prior hypotheses or beliefs don't accidentally lead us to make decisions about which transformations to perform, or whether to delete outliers, or whether to try alternative models, based on what we expect or want to find. [00:11:00]
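As a concrete illustration of the blinded data analysis described here, the following is an editorial sketch assuming Python with pandas (none of these variable names come from the episode). The idea is simply to swap real column labels for neutral aliases and only reveal the key once the analysis choices are frozen.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical dataset: a grouping variable and two measured outcomes.
df = pd.DataFrame({
    "group": rng.choice(["control", "treatment"], size=100),
    "outcome_a": rng.normal(0.0, 1.0, 100),
    "outcome_b": rng.normal(0.0, 1.0, 100),
})

def blind_labels(data, columns, seed=None):
    """Return a copy with the named columns relabelled as neutral aliases,
    plus a key for unblinding after the analysis plan is locked in."""
    shuffled = list(columns)
    np.random.default_rng(seed).shuffle(shuffled)
    key = {f"var_{i}": name for i, name in enumerate(shuffled)}
    blinded = data.rename(columns={name: alias for alias, name in key.items()})
    return blinded, key

blinded_df, key = blind_labels(df, ["outcome_a", "outcome_b"], seed=7)

# Analysts decide on transformations, outlier rules and models while seeing
# only var_0 and var_1; the key is revealed only once those choices are fixed.
print(blinded_df.columns.tolist())  # e.g. ['group', 'var_0', 'var_1'] in some order
```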
Those are all great initiatives, and scientists should definitely keep doing all of those things. None of what I'm about to say next means we give up on this work that individuals do.
But, it just wouldn't make sense to leave the entire burden of upholding objectivity in science on the shoulders of fallible individuals, right?
We can lift the burden on individuals with collective mechanisms for maintaining objectivity. Secondary layers of checking. Things like independent replication, like peer review, like statistical consistency checks and error detection. And these obviously can't be done by just one individual - they need a collective. Through those practices, objectivity becomes a social process.
In a way, this should be a huge relief to scientists. The weight of objectivity is not on their shoulders alone. It also opens up scope for them to actually acknowledge that they are humans with values. Again, I refer back to Rachel Brown's previous episode.
But these secondary mechanisms or collective processes, they don't just happen. They don't just fall magically out of a scientific method.
Processes like independent replication, peer review, error detection - all of these things require serious investment and infrastructure to perform their ‘collective objectivity’ duties, and they need to be incentivized and rewarded and funded and so on.
Samara Greenwood: Classically, the notion of objectivity has been focused on individuals and individual practices as being almost entirely responsible for ensuring scientific work is not unduly biased. However, as Fiona points out, a more sophisticated understanding of values and objectivity, as studied by philosophers such as Helen Longino, Heather Douglas and others, naturally leads us to recognise the importance of collectively driven objectivity, not only in the broad sense, but also at a very pragmatic, fine-grained level. [00:13:00]
In other words, we not only require diversity in ‘who does science’, ensuring multiple perspectives are incorporated. We also require multiple review mechanisms at the community level to ensure we are developing truly robust, reliable, collectively objective scientific outcomes.
While there are some very general collective review mechanisms already in place, such as standard peer review, we now know current systems are no longer adequate for today's complex scientific world.
Fiona argues that to better achieve the ongoing scientific goal of objective, reliable results, we need to invest far more heavily in establishing stronger collective mechanisms for maintaining this objectivity. Also, we need more appropriate incentive structures in the funding of science, the promotion of scientists and scientific awards. Ultimately, we need to reward people and practices that support collective objectivity rather than penalise scientists for attempting to make science better. [00:14:00]
Samara Greenwood: Could you provide an example of collective objectivity in practice?
Fiona Fidler: Peer review is one obvious example. Independent replication is another one that perhaps more clearly demonstrates this point I want to make about investment.
So, psychology, many other social sciences, and also many biomedical sciences, have experienced a period of - let's call it ‘turmoil’ to avoid overuse of the word ‘crisis’ - over the last decade and a half. There have been many large-scale studies which take dozens or hundreds of published experiments and attempt to replicate them. And these have returned very low replication success rates. But that's not the actual story I want to tell.
The story is that, prior to those big replication projects, the rate of replication in many sciences was itself alarmingly low. The proportion of published studies in psychology which were themselves a replication of previous work was about one in 1,000. And in ecology it was even lower, about one in 5,000. [00:15:00]
Now, there are lots of reasons a study might be hard to replicate or that people might not be doing this work, or it might not make sense to do this work, like regrowing forests in ecology, for example. Not a very practical thing to do. But, even discounting those examples, these figures are so low.
Could we really claim that we'd invested in replication with numbers that are that low? And if we haven't invested in replication, then we've dropped off one of the main mechanisms for collective objectivity. Like a big one. How do we think objectivity will work without it?
And it's the same story with peer review. So, if we look at the number of glaring, easily detectable errors that make it through peer review, we should ask, are we sufficiently investing in this super important mechanism for maintaining objectivity?
I could talk more about post-publication error detection too, and how frequently errors are found and how infrequently they result in retractions or corrections. Those numbers are shocking. Like, just shocking. [00:16:00]
But if we move from thinking about objectivity as something that's the responsibility of an individual to something that's maintained by a collective, then I think that makes the case for investing in replication, in peer review, and in pre- and post-publication error detection super salient.
Samara Greenwood: So how do you think the concept of collective objectivity might be of value to a general audience?
Fiona Fidler: Well, first of all, I really think it should be of value.
I really believe that understanding objectivity in this way - as a collective social process - and not just something you get because an individual scientist used a blinded protocol or whatever, makes the case for investing more in those secondary mechanisms.
We seriously need funders and publishers to understand the contribution that independent replications make or that post-publication error detection makes, and to provide proper infrastructure for peer review. [00:17:00]
That's where the work is. That's where the money's needed. That's what scientists' workload models need to account for. That's the work they need prizes and awards for.
I honestly think that a more sophisticated understanding of objectivity would lead to a better incentive structure in science. Right now, the relative investment that funding agencies and publishers are making in these collective objectivity measures - well, it's abysmal.
I've already mentioned that only a tiny percentage of articles in psychology and ecology are replication studies, and that's in part because it's so difficult to get those kinds of studies funded and published. There's just no reward for doing them. We know peer review, at the moment, is completely unstructured, undervalued, underdone, and there are clear pathways for how to do it better, but they will be more expensive and more time consuming. [00:18:00]
Increasing numbers of errors make it into the published literature. One study of 20,000 papers by Elisabeth Bik suggested that 4% of the literature has serious errors, and - even after being reported - years later, only a fraction of those get retracted or corrected. And for the people who detect and report those errors, like Elisabeth Bik in microbiology, or James Heathers in Psychology and Physiology, and many, many others - for them - there's no support for their efforts, and worse, they routinely get ridiculed and threatened.
I think understanding the concept of collective objectivity is important for a broad audience because it's the best way to appeal for investment in the mechanisms that maintain that objectivity.
Samara Greenwood: As a final question, I'm also interested to know where you believe Metascience is heading in the future, and in particular, how historians, philosophers, and sociologists of science might contribute to this.
Fiona Fidler: It's a really good question and there are a lot of ways [00:19:00] I could answer it, but I'll stick to the theme of the episode.
I think HPS and STS people are critical to the Metascientific community. And I hope that this kind of framing of objectivity as a collective process is a good example of how concepts from HPS can help Metascience. Understanding and talking about objectivity this way offers an imperative to properly support and fund processes like replication, peer review, error detection, because they are the objectivity making processes. And everyone loves objectivity, even journals and funders. It's my feeling that ‘processes that maintain objectivity’ is an easier sell than ‘processes that fix mistakes’.
I think the next step for HPS and STS people in helping the Metascience community is to talk to us about how all of these practices that I've talked about today relate to ‘Trust in Science’ at a time when obviously the world desperately needs it. [00:20:00]
But I'm sure ‘Trust’ will be another episode...
Samara Greenwood: Absolutely. We'll definitely follow that one up.
Thank you so much, Fiona, for conceiving of the podcast in the first place. It has been a wonderful thing.
Fiona Fidler: Well, that was definitely a joint project. But it has been a privilege to talk to you. Thank you.