Margherita Harris

About

I am a Postdoctoral Fellow at the Center for Philosophy of Science at the University of Pittsburgh. I am also a Visiting Fellow at the Department of Philosophy, Logic and Scientific Method at the London School of Economics, where I completed my PhD. My research interests lie in epistemology and the philosophy of science, with a special focus on modelling under uncertainty, robustness analysis and climate science. I am also interested in foundational issues in the philosophy of statistics.

Contact me here! Curriculum Vitae

Publications and some work in progress

"Climate Models and Robustness Analysis - Part I: Core Concepts and Premises." In Gianfranco Pellegrino & Marcello Di Paola (ed.) Handbook of the Philosophy of Climate Science. Cham: Springer, (2023), (with Roman Frigg). Abstract

Robustness analysis (RA) is the prescription to consider a diverse range of evidence and only regard a hypothesis as well-supported if all the evidence agrees on it. In contexts like climate science, the evidence in support of a hypothesis often comes in the form of model results. This leads to model-based RA (MBRA), whose core notion is that a hypothesis ought to be regarded as well-supported on grounds that a sufficiently diverse set of models agrees on the hypothesis. This chapter, which is the first part of a two-part review of MBRA, begins by providing a detailed statement of the general structure of MBRA. This statement will make visible the various parts of MBRA and will structure our discussion. We explicate the core concepts of independence and agreement, and we discuss what they mean in the context of climate modelling. Our statement shows that MBRA is based on three premises, which concern robust properties, common structures, and so-called robust theorems. We analyse what these involve and what problems they raise in the context of climate science. In the next chapter, which is the second part of the review, we analyse how the conclusions of MBRA can be justified.

"Climate Models and Robustness Analysis - Part II: The Justificatory Challenge." In Gianfranco Pellegrino & Marcello Di Paola (ed.) Handbook of the Philosophy of Climate Science. Cham: Springer, (2023), (with Roman Frigg). Abstract

Robustness analysis (RA) is the prescription to consider a diverse range of evidence and only regard a hypothesis as well-supported if all the evidence agrees on it. In contexts like climate science, the evidence in support of a hypothesis often comes from scientific models. This leads to model-based RA (MBRA), whose core notion is that a hypothesis ought to be regarded as well-supported on grounds that a sufficiently diverse set of models agrees on the hypothesis. This chapter, which is the second part of a two-part review of MBRA, addresses the thorny issue of justifying the inferential steps taking us from the premises to the conclusions. We begin by making explicit what exactly the problem is. We then turn to a discussion of two broad families of justificatory strategies, namely top-down and bottom-up justifications. In the latter group we distinguish between the likelihood approach, independence approaches, and the explanatory approach. This discussion leads us to the sober conclusion that multi-model situations raise issues that are not yet fully understood and that the methods and approaches of MBRA have not yet reached a stage of maturity. Important questions remain open, and these will have to be addressed in future research.

"The Epistemic Value of Independent Lies: False Analogies and Equivocations". Synthese (2021). Abstract

I critically assess an argument put forward by Kuorikoski et al. (2010) for the epistemic import of model-based robustness analysis. I show that this argument is not sound, since the sort of probabilistic independence on which it relies is unfeasible. By revising the notion of probabilistic independence imposed on the models' results, I introduce a prima facie more plausible argument. However, despite this prima facie plausibility, I show that even this new argument is unsound in most if not all cases of model-based robustness analysis. I do this to demonstrate that the epistemic import of model-based robustness analysis cannot be satisfactorily defended on the basis of probabilistic independence.

"Some Conceptual Problems in the IPCC Uncertainty Framework, How They Came About, and Where to Go from Here". Unpubished Manuscript. Abstract

This paper has two related objectives. The first is to offer a thorough diagnosis of some conceptual problems in the IPCC uncertainty framework. Any successful attempt to revise and improve the framework will have to start from a clear understanding of the current conceptual problems, their implications for the IPCC authors’ treatment of uncertainties, and the quality of the information provided in IPCC reports; this paper contributes to that first step. I begin by showing that there is no interpretation of ‘confidence’ and ‘likelihood’ that is compatible with the IPCC uncertainty guide’s recommendations - and thus with the resulting practice of the IPCC authors in their communication of uncertainty. I argue that the lack of a conceptually valid interpretation of ‘likelihood’ and ‘confidence’ has worrying implications for both the practice of the IPCC authors in their treatment of uncertainties and the quality of the information provided in IPCC reports. Finally, I show that an understanding of the reasons behind the decision to include two uncertainty scales in the IPCC uncertainty framework can give us some interesting insights into the nature of this problem. The second objective is to critically reflect on what an adequate IPCC uncertainty framework could look like. I critically assess two strikingly different proposals for a new IPCC uncertainty framework (Winsberg, 2018; Bradley et al., 2017) and argue that in both proposals the interpretation of ‘confidence’ is conceptually problematic. Finally, I offer my own tentative proposal, which meets what I take to be two basic desiderata for an adequate uncertainty framework: conceptual clarity and decision relevance.

"The IPCC Uncertainty Framework: What Some Decision Makers Want (and Why They Shouldn’t)". Manuscript available on request. Abstract

In “Climate Change Assessments: Confidence, Probability and Decision,” Bradley et al. offer some suggestions for clarifying the relationship between “confidence” and “likelihood” in IPCC reports and their role in decision making. This is an issue of critical importance for the IPCC, given that its main role as an institution is that of informing behaviour and policy. However, in this discussion note, I argue that their account of “confidence” and “likelihood” cannot be implemented by the IPCC in a conceptually coherent way. Hence, I conclude that Bradley et al.’s suggestions should not be taken seriously.

"Model-Based Robustness Analysis as Explanatory Reasoning: Arbitrary Conjunctions and Incompatibilities". Manuscript available on request. Abstract

In science, obtaining a “robust” result is often seen as providing further support for a hypothesis. The Bayesian should have something to say about the logic underpinning this method of confirmation. Schupbach’s recent explanatory account (2018) of robustness analysis (RA) is a welcome attempt to do so. Indeed, by having ‘as its central notions explanation and elimination’, this account seems to fit very nicely with many empirically driven cases of RA in science, thereby revealing why these cases are able to lend confirmation to a hypothesis. The subject of this paper, however, is Schupbach’s further claim that his account of RA ‘applies to model-based RAs just as well as it does to empirically driven RAs’, for it is here that he and I decisively part ways. I argue that the application of Schupbach’s account to model-based RAs is considerably more complicated than he and others (such as Winsberg (2018)) suggest and relies on several non-trivial and often implausible assumptions.

"Conceptualizing Uncertainty: The IPCC, Model Robustness and the Weight of Evidence". PhD Thesis (2021). Abstract

The aim of this thesis is to improve our understanding of how to assess and communicate uncertainty in areas of research deeply afflicted by it, where immediate policy implications make that assessment and communication more fraught still. The IPCC is my case study throughout the thesis, which consists of three parts. In Part 1, I offer a thorough diagnosis of conceptual problems faced by the IPCC uncertainty framework. The main problem I discuss is the persistent ambiguity surrounding the concepts of ‘confidence’ and ‘likelihood’; I argue that the lack of a conceptually valid interpretation of these concepts compatible with the IPCC uncertainty guide’s recommendations has worrying implications for both the IPCC authors’ treatment of uncertainties and the interpretability of the information provided in the AR5. Finally, I show that an understanding of the reasons behind the IPCC’s decision to include two uncertainty scales can offer insights into the nature of this problem. In Part 2, I review what philosophers have said about model-based robustness analysis. I assess several arguments that have been offered for its epistemic import and relate this discussion to the context of climate model ensembles. I also discuss various measures of independence in the climate literature, and assess the extent to which these measures can help evaluate the epistemic import of model robustness. In Part 3, I explore the notion of the ‘weight of evidence’ typically associated with Keynes. I argue that the Bayesian (or anyone who believes the role of probability in inductive inference is to quantify the degree of belief to assign to a hypothesis given the evidence) is bound to struggle with this notion, and draw some lessons from this fact. Finally, I critically assess some recent proposals for a new IPCC uncertainty framework that significantly depart from the current one.

Teaching

During my time at the LSE I have been a teaching assistant for the following courses:

"The Big Questions: An Introduction to Philosophy" (2017-19, 2021-23)

"Einstein for Everyone: From time travel to the edge of the universe" (2019/20)

"Genes, Brains and Society" (2021/22)

"Historical and Global Perspectives on Philosophy" (2022)

"Philosophy of Science" (2023)

Before joining LSE I taught Mathematics (A-level and STEP) at a school in Cambridge, and I have worked as a part-time examiner for international A-level mathematics examinations.

I am comfortable teaching introduction to philosophy, philosophy of science, decision theory, formal epistemology, philosophy of statistics, introductory logic and formal methods for philosophers.

Activities

I recently organized (with Deborah Mayo and Roman Frigg) The Statistics Wars and Their Casualties workshop. In case you missed it, you can find all the recorded talks and panel discussions here!

C&R During my time at LSE I have been co-organizing the LSE's Conjectures and Refutations seminar series, which brings together philosophers and scientists with a shared interest in the philosophy of science. If this might be for you, check out this website and join the mailing list!

Selected Presentations

"Some Conceptual Problems in the IPCC Uncertainty Framework, and Where to Go from Here", Center for Philosophy of Science, Pittsburgh, November 2023.
"The Detrimental Impact of Quantifauxcation on Our Understanding of the Evolution of the Climate", European Philosophy of Science Association, Belgrade (online), September 2023.
"The Ubiquity of Quantifauxcation and Why it Must Stop", 17th International Congress on Logic, Methodology and Philosophy of Science and Technology, Buenos Aires, July 2023.
"On Severity, the Weight of Evidence, and the Relationship Between the Two", The Statistics Wars and Their Casualties workshop, LSE, CPNSS (online), December 2022.
"Model Robustness: Schupbach’s Explanatory Account of Robustness Analysis to the Rescue?", Sigma Club, LSE (online), March 2022.
"Some Conceptual Problems in the IPCC Uncertainty Framework, and How They Came About", Conference on Climate Change and Studies of the Future, A Coruña, October 2021.
"What Does the Bayesian Have to Say about Model-Based Robustness Analysis?", Bayesian Epistemology: Perspectives and Challenges, Munich (online), August 2020.
"What Does the Bayesian Have to Say about Model-Based Robustness Analysis?", Choice Group, LSE (online), June 2020.
"The Epistemic Value of Independent Lies: False Analogies and Equivocations", London Graduate Philosophy Conference, London (Online), June 2020.
"On the Relationship between Confidence and Likelihood in the IPCC Uncertainty Framework", European Philosophy of Science Association, Geneva, September 2019.
"On the Relationship between Confidence and Likelihood in the IPCC Uncertainty Framework", British Society for the Philosophy of Science, Durham, July 2019.

Short Bio

Before getting into philosophy I studied mathematics at the University of Warwick (with the intention of continuing to study it, but I clearly changed my mind at some point). One of my regrets is not having realised earlier how interesting statistics can really be.


This website is inspired by Nguyen → Marcoci → Roberts → etc.