Enjoyed the episode? Want to listen later? Subscribe by searching 80,000 Hours wherever you get your podcasts.

You might think, OK, I know that the immediate effects of funding anti-malarial bed nets are positive – I know that I’m going to save lives. But I also know that there are going to be further downstream effects and side-effects of my intervention. For example, effects on the size of future populations. It’s notoriously unclear how to think about the value of future population size, whether it’ll be a good thing to increase population in the short term, or whether that would in the end be a bad thing. There are lots of uncertainties here.

Hilary Greaves

The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush past the people now waiting, or just accept this as a dollar you’re never getting back? According to philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute, which is hiring now – this simple decision will completely change the long-term future by altering the identities of almost all future generations.

How? Because by rushing back to the counter, you slightly change the timing of everything else people in line do during that day — including changing the timing of the interactions they have with everyone else. Eventually these causal links will reach someone who was going to conceive a child.

By causing a child to be conceived a few fractions of a second earlier or later, you change which sperm fertilizes the egg, resulting in a totally different person. So asking for that $1 has now made the difference between all the things that this actual child will do in their life, and all the things that the merely possible child – who never came to exist because of what you did – would have done had you decided not to worry about it.

As that child’s actions ripple out to everyone else who conceives down the generations, ultimately the entire human population will become different, all for the sake of your dollar. Will your choice cause a future Hitler to be born, or not to be born? Probably both!

Some find this concerning: the long-term effects of your decisions are so unpredictable that you look totally clueless about what’s going to lead to the best outcomes. That might lead to decision paralysis — you won’t be able to take any action at all.

Prof Greaves doesn’t share this concern for most real-life decisions. If there’s no reasonable way to assign probabilities to far-future outcomes, then the possibility that you might make things better in completely unpredictable ways is more or less canceled out by the equally plausible possibility that you might make things worse in equally unpredictable ways.

But, if instead we’re talking about a decision that involves highly structured, systematic reasons for thinking there might be a general tendency of your action to make things better or worse — for example if we increase economic growth — Prof Greaves says that we don’t get to just ignore the unforeseeable effects.

When there are complex arguments on both sides, it’s unclear what probabilities you should assign to this or that claim. Yet, given its importance, whether you should take the action in question actually does depend on figuring out these numbers.

So, what do we do?

Today’s episode blends philosophy with an exploration of the mission and research agenda of the Global Priorities Institute: to develop the effective altruism movement within academia. We cover:

  • What’s the long term vision of the Global Priorities Institute?
  • How controversial is the multiverse interpretation of quantum physics?
  • What’s the best argument against academics just doing whatever they’re interested in?
  • How strong is the case for long-termism? What are the best opposing arguments?
  • Are economists getting convinced by philosophers on discount rates?
  • Given moral uncertainty, how should population ethics affect our real life decisions?
  • How should we think about archetypal decision theory problems?
  • The value of exploratory vs. basic research
  • Person-affecting views of population ethics, the fragile identities of future generations, and the non-identity problem
  • Is Derek Parfit’s repugnant conclusion really repugnant? What’s the best vision of a life barely worth living?
  • What are the consequences of cluelessness for those who base their donation decisions on GiveWell-style recommendations?
  • How could reducing global catastrophic risk be a good cause for risk-averse people?
  • What’s the core difficulty in forming proper credences?
  • The value of subjecting EA ideas to academic scrutiny
  • The influence of academia in society
  • The merits of interdisciplinary work
  • The case for why operations is so important in academia
  • The trade-off between working on important problems and advancing your career

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Key points

There are at least three interestingly different types of thing that would count as a life barely worth living, and it might make a big difference to somebody’s intuitions about how bad the repugnant conclusion is which one of these they have in mind. The one that springs most easily to mind is a very drab existence where you live for a normal length of time, maybe say 80 years, but at every point in time you’re enjoying some mild pleasures. There’s nothing special happening. There’s nothing especially bad happening.

Parfit uses the phrase ‘muzak and potatoes’, like you’re listening to some really bad music, and you have a kind of adequate but really boring diet. That’s basically all that’s going on in your life. Maybe you get some small pleasure from eating these potatoes, but it’s not very much. There’s that kind of drab life.

A completely different thing that might count as a life barely worth living is an extremely short life. Suppose you live a life that’s pretty good while it lasts but it only lasts for one second, well, then you haven’t got time to clock up very much goodness in your life, so that’s probably barely worth living.

Alternatively you could live a life of massive ups and downs – lots of absolutely amazing, fantastic things, lots of absolutely terrible, painful, torturous things – and then the balance between these two could work out so that the net sum is just positive. That would also count as a life barely worth living. It’s not clear that the repugnant conclusion is equally repugnant for those three very different ways of thinking about what these barely-worth-living lives actually amount to.

If you think, “I’m risk-averse with respect to the difference that I make, so I really want to be certain that I, in fact, make a difference to how well the world goes,” then it’s going to be a really bad idea by your lights to work on extinction risk mitigation, because either humanity is going to go extinct prematurely or it isn’t. What’s the chance that your contribution to the mitigation effort turns out to tip the balance? Well, it’s minuscule.

If you really want to do something in even the rough ballpark of maximizing the probability that you make some difference, then don’t work on extinction risk mitigation. But that line of reasoning only makes sense if the thing you are risk-averse with respect to is the difference that you make to how well the world goes. What we normally mean when we talk about risk aversion is something different. It’s not risk aversion with respect to the difference I make, it’s risk aversion with respect to something like how much value there is in the universe.

If you’re risk-averse in that sense, then you place more emphasis on avoiding very bad outcomes than somebody who is risk-neutral. It’s not at all counterintuitive, then, I would have thought, to see that you’re going to be more pro extinction risk mitigation.

So the basic prima facie problem is, if you say, “Okay, I’m going to do this quantum measurement and the world is going to split. So possible outcome A is going to happen on one branch of the universe, and possible outcome B is going to happen on the second branch of the universe.” Then of course it looks like you can no longer say, “The probability of outcome A happening is a half,” like you used to want to say, because “Look, he just told me. The probability of outcome A happening is one, just like the probability of outcome B happening.” They’re both going to happen definitely on some branch or other of the universe.

Many of us ended up thinking the right way to think about this is maybe to take a step back and ask what we wanted, or needed, from the notion of probability in quantum mechanics in the first place, and I convinced myself at least that we didn’t in any particularly fundamental sense need the chance of outcome A happening to be a half. What we really needed was for it to be rational to assign weight one half to what would follow from outcome A happening, and for it to be rational to assign weight one half to what would follow from outcome B happening. So if you have some measure over the set of actual future branches of the universe, and in a specifiable sense the outcome A branches total measure one half and the outcome B branches total measure one half, then we ended up arguing you’ve got everything you need from probability. This measure is enough, provided it plugs into decision theory in the right way.

Transcript

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about the world’s most pressing problems and how you can use your career to solve them. I’m Rob Wiblin, Director of Research at 80,000 Hours.

Today’s interview with Hilary Greaves will be a blast for people who like philosophy or global priorities research. It’s especially useful to people who might want to do their own research into effective altruism at some point.

Before that I just wanted to flag a few opportunities at Hilary’s Global Priorities Institute, or GPI, that listeners might want to know about. If that’s not you, you can skip ahead 2 minutes.

GPI aims to conduct and promote world-class, foundational research on how to most effectively do good, in order to create a world in which global priorities are set by using evidence and reason.

To that end, GPI just started looking for a Head of Research Operations who’ll report directly to Hilary and will be responsible for all aspects of GPI’s operations — including strategy, communications, finances and fundraising. They’re looking for someone with an analytic mindset, a demonstrated track record of independently managing complex projects, and the ability to communicate and coordinate with others. It’s a 4-year contract, paying £40,000–£49,000, and people can apply from anywhere in the world. Applications close on the 19th of November, so you’ll want to get on that soon.

They also have a Summer Research Visitor Programme, and are looking for economists working on their PhD, or early in their career, who want to come visit the institute in summer 2019. Applications for that close 30th of November.

Both of these opportunities are advertised on their site, and of course we’ll link to them from the show notes and blog post.

GPI will also soon start advertising a series of postdoctoral fellowships and senior fellowships for both philosophers and economists, to start next September.

In the meantime they’re keen to explore the possibility of research positions with interested and qualified researchers. If you are a researcher in philosophy or economics who either already works on GPI’s research themes or is interested in transitioning into research in these areas, and you might be interested in working at or visiting GPI, please send a cover letter and CV to [email protected]

The above applies to everyone from Masters students through to emeritus professors.

Alright, on with the show – here’s Hilary.

Robert Wiblin: Today I’m speaking with Hilary Greaves. Hilary is a philosophy professor at the University of Oxford, and director of the Global Priorities Institute there. Besides issues in effective altruism and global priorities, her research interests include foundational issues in consequentialism, the debate between consequentialists and contractualists, issues of interpersonal aggregation, moral psychology and selective debunking arguments, the interface between ethics and economics, and formal epistemology. There’s quite a lot to talk about there, so thanks for coming on the podcast Hilary.

Hilary Greaves: Thanks for inviting me, it’s great to be here.

Robert Wiblin: So I hope to talk later about how people can potentially advance their careers in global priorities research and perhaps even work at the Global Priorities Institute itself, but first, what are you working on at the moment and why do you think it’s really important?

Hilary Greaves: Okay. So I’ve got three papers in the pipeline at the moment, all motivated more or less by effective altruist type concerns. One concerns moral uncertainty: the issue of what’s the right structural approach to making decisions when you’re uncertain about which normative claims are correct, or if you’re in any other sense torn between different normative views. And there I’m exploring a nonstandard approach that goes by the name of the Parliamentary Model, which is supposed to be an alternative to the standard expected value kind of way of thinking about how to deal with uncertainty. A second thing I’m doing is trying to make as rigorous and precise as possible the common effective altruist line of thought that claims that, in so far as you’re concerned with the long run impact of your actions rather than just, say, their impact within the next few years, you should be directing your efforts towards extinction risk mitigation, rather than any one of the other countless causes that you could be directing them towards instead.

Hilary Greaves: Then the third thing I’m doing is more a matter of methodology for cost-benefit analysis. Economists routinely use tools from cost-benefit analysis to make public policy recommendations. Typically, they do this by measuring how much a given change to the status quo would matter to each person, measured in monetary terms, and then adding up those amounts of money across people. Philosophers typically think that you shouldn’t just add up monetary amounts, you should first weight those monetary amounts according to how valuable money is to the person, and then sum up the resulting weighted amounts across persons. And there’s a lively debate between philosophers and foundationally minded economists on the one hand, and lots of other economists – I should say, including lots of foundationally minded economists – on the other hand about whether or not one should apply these weights. So there’s a bunch of arguments in that space that I’m currently trying to get a lot clearer on.

Robert Wiblin: Right. So yeah, we’ll come back to a bunch of those issues later, but I noticed when doing some preparatory research for this episode that it seemed like you had a major shift in what you were focusing on for your research. You started out mostly in philosophy of physics, and now you’re almost entirely doing moral philosophy. Yeah, what caused you to make that shift and what were you looking at to begin with?

Hilary Greaves: So yeah, I originally did an undergraduate degree in physics and philosophy, and that’s how I got into philosophy. So I kind of by default ended up getting sucked into the philosophy of physics, because that was at the center of my undergraduate degree, and I did that for a while including my PhD. But I think it was always clear to me that the questions that had got me interested in philosophy more fundamentally, rather than just as part of my degree, were the questions that I faced in everyday practical life.

Hilary Greaves: What were the reasons for acting this way and that? How much sense did this rationale somebody was giving for some policy actually make? And then eventually, after working in research for a few years, I felt that I was just ticking the boxes of having an academic career by carrying on writing the next research article in philosophy of physics that spun off from my previous ones, and that really wasn’t a thing I wanted to do. I felt it was time now to go back to what had originally been my impetus for caring about philosophy, and start thinking about things that were more related to principles of human action.

Robert Wiblin: Yeah. What exactly in philosophy of physics were you doing?

Hilary Greaves: So in philosophy of physics most of my research centered on the interpretation of quantum mechanics, where there are really confusing issues around how to think about what happens when a quantum measurement event occurs. So if you have some electron with some property and you want to measure this property, standard quantum mechanics will say that the system that you’re measuring proceeds according to one set of rules when nobody’s carrying out a measurement. But then when the experimental physicist comes along and makes a measurement, something totally different happens: some new rule of physics kicks in that only applies when measurements occur and doesn’t apply at any other time, and on a conceptual level this, of course, makes precisely no sense, because we know that measurements are just another class of physical interaction between two systems. So they have to obey the same laws of physics as every other physical process.

Hilary Greaves: So people working in the foundations of physics, and philosophers of physics, had for a long time tried to think about what the grand unifying theory could be that described quantum measurements without giving special status to measurements. One of the most prominent so-called interpretations of quantum mechanics is the Many Worlds theory, according to which, when a measurement occurs, the world splits into multiple branches. And for complicated reasons this ends up being a story that makes sense without giving any fundamental status to the notion of measurement. It doesn’t sound like it does the way I put it but, honest, it does. So I got interested in this Many Worlds theory, and then I was working for quite a while on issues about how to make sense of probability within a many worlds theory, because probability is normally thought of as being absolutely central to quantum mechanics, but at first glance it looks like the notion of probability won’t any longer make sense if you go for a Many Worlds version of that theory.

Robert Wiblin: Alright. Yeah, how do you rescue probability? Is it a matter of there’s more worlds of one kind than another?

Hilary Greaves: Yeah, kind of. That’s what it ends up boiling down to, at least according to me. So the basic prima facie problem is, if you say, “Okay, I’m going to do this quantum measurement and the world is going to split. So possible outcome A is going to happen on one branch of the universe, and possible outcome B is going to happen on the second branch of the universe.” Then of course it looks like you can no longer say, “The probability of outcome A happening is a half,” like you used to want to say, because “Look, he just told me. The probability of outcome A happening is one, just like the probability of outcome B happening.” They’re both going to happen definitely on some branch or other of the universe.

Hilary Greaves: So yeah, many of us ended up thinking the right way to think about this is maybe to take a step back and ask what we wanted, or needed, from the notion of probability in quantum mechanics in the first place, and I convinced myself at least that we didn’t in any particularly fundamental sense need the chance of outcome A happening to be a half. What we really needed was for it to be rational to assign weight one half to what would follow from outcome A happening, and for it to be rational to assign weight one half to what would follow from outcome B happening. So if you have some measure over the set of actual future branches of the universe, and in a specifiable sense the outcome A branches total measure one half and the outcome B branches total measure one half, then we ended up arguing you’ve got everything you need from probability. This measure is enough, provided it plugs into decision theory in the right way.
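
To make that idea concrete, here’s a minimal sketch in Python; the branch weights and payoffs are invented for illustration. The point is just that decision theory only ever uses probabilities as weights in expected-value sums, so a measure over branches can fill the same slot:

```python
# Toy illustration with invented numbers: both outcomes actually occur,
# each on branches carrying a measure, and the measure plays the role
# that probability normally plays in an expected-value calculation.
branches = [
    {"outcome": "A", "measure": 0.5, "payoff": 100},  # A-branches: total measure 1/2
    {"outcome": "B", "measure": 0.5, "payoff": 0},    # B-branches: total measure 1/2
]

# Decision theory consumes probabilities only as weights in sums like this,
# so the branch measure can do the same job:
expected_value = sum(b["measure"] * b["payoff"] for b in branches)
print(expected_value)  # 50.0 -- exactly what a 50% chance of outcome A would give
```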

Robert Wiblin: Yeah. How controversial is this multiverse interpretation?

Hilary Greaves: It depends what community your question is relativized to. I think it’s rather uncontroversial amongst physicists who have thought about the question at all, but possibly not for very good reasons. The thing that physicists get from Many Worlds quantum mechanics that they don’t get from any of the other things that are on the menu as options for a foundationally coherent interpretation of quantum mechanics is that you don’t have to change the existing equations of physics if you go for a many worlds interpretation. If you go for the other alternatives – a so-called pilot wave theory, or a dynamical collapse interpretation – you’re actually changing the physics in measurable ways.

Hilary Greaves: So if you’ve been through physics undergrad, and you’ve been through physics grad school, and you’ve built a career based on working with the existing equations of physics, then you’ve got an obvious reason to kind of like it, if you’ve got a coherent foundational story that’s consistent with all that stuff. So there’s that maybe not epistemically very weighty reason for physicists to prefer the many worlds interpretation, and a lot of them are very happy to go along with that. If you are instead asking about philosophers of physics, then it’s much more controversial and very much a minority view.

Robert Wiblin: Alright. What’s the argument for not thinking it’s a multiverse?

Hilary Greaves: Well, one of them is that probability doesn’t make sense.

Robert Wiblin: Okay.

Hilary Greaves: So like we said, [crosstalk 00:08:38].

Robert Wiblin: That also doesn’t seem very virtuous, yeah.

Hilary Greaves: Right. Yeah. I mean you’re asking the wrong person maybe for a sympathetic exposition of why people think this is a bad theory.

Robert Wiblin: Okay yeah, because you don’t hold this view. Does it seem-

Hilary Greaves: Also, I’ve been somewhat out of this field for the last ten years.

Robert Wiblin: Right, yeah.

Hilary Greaves: Back when I was working on this stuff, probability was one of the main bones of contention. I ended up kind of feeling that I and my co-researchers had solved that problem and moved on, but then I stopped listening to the rest of the debate. So I don’t know how many people are now convinced by that stuff.

Robert Wiblin: Yeah. Do you feel like it’s an open and shut case from your point of view? Setting aside the fact that, of course, other smart people disagree, so you’ve got to take their view seriously. If it was just your perspective, would it be pretty clear that it’s multiverse?

Hilary Greaves: So the statement that I am willing to make confidently in this area is I don’t think that considerations of probability generate any difficulty whatsoever for the multiverse theory. There’s some other stuff that I think is more subtle, and actually to my mind more interesting, about exactly what’s the status of the branching structure and how it’s defined in the first place. I ultimately think that’s probably not problematic either, but I think there are a lot of things there that could be helpfully spelled out more clearly than they usually are, and I think the probability stuff is, yes, an open and shut case.

Robert Wiblin: So it sounds like you’ve made a pretty major switch in your research focus. How hard is that, and how uncommon is that in academia?

Hilary Greaves: Good question. Yeah, it’s quite uncommon, and the career incentives quite clearly explain why it would be uncommon, because academia very much rewards lots of publications, lots of high quality publications, and if you’ve already got a research career going in one area, it’s always quite easy to generate another equally high quality paper following on the line of research that you already embarked on. Whereas, if you switch to a totally different area, as I did and as many others have done, there’s a pretty long fallow period where you’re basically reduced to the status of a first-year graduate student again, and when people ask you, “What are you working on?”, your answer is no longer, “Oh, I’ve got three papers that are about to be published on X, Y, and Z.” It’s more, “Uh, I’m not really working on anything as such right now. I’m just kind of looking around, learning some new things.”

Hilary Greaves: So I had a quite embarrassing period of two or three years where I would try and avoid like the plague situations where people would ask me what I was working on, because I felt like the only answer I had available was not appropriate to my level of seniority, since I was already tenured. So in that sense, it’s kind of tricky, but I think if it’s something that you really want to do, and if you’re willing to bear with that fallow period, and if in addition you’re confident, or maybe arrogant enough to have this brazen belief that your success is not localized to one area of academia, you’re just a smart enough person that you’ll be successful at whatever you take on, so this is not a risk, it’s just a matter of time, then it’s definitely something that you can do.

Hilary Greaves: And I’d encourage more people to do it because, at the end of the day, if you’re not working on the things that you’re excited about, or the things that you think are important, then you might as well not be in academia. There are lots of other more valuable things you could do elsewhere.

Robert Wiblin: Yeah. Did you deliberately wait until you had tenure to make the switch? Would you recommend that other people do that?

Hilary Greaves: In my case, honestly, it was not deliberate. And because of that I feel it would be a bit inappropriate for me to try and advise people who don’t yet have tenure whether they should do a similar thing, because it definitely slows down your publication record for a good few years.

Robert Wiblin: Yeah. That just puts you at risk of not being able to stay in.

Hilary Greaves: Yeah. I mean there’s a kind of halfway house you could go for, where you keep up your existing line of research, but you devote some significant proportion of your time to side projects. That’s the model that I’ve seen lots of graduate students successfully pursue. And I think that’s probably good, even from a purely careerist perspective, in that you end up much more well rounded, you end up knowing about more things, having a wider network of contacts and so forth, than if you just had your narrow main research area and nothing else.

Robert Wiblin: Yeah. Do you think you learned any other lessons that would be relevant to people who want to switch into doing global priorities research but aren’t currently in it?

Hilary Greaves: Maybe it depends what other thing they are currently in. I did find that some of the particular areas of research I happened to have worked in previously involved learning stuff that usefully transferred into doing global priorities research. For example, my work in both philosophy of physics and formal epistemology had given me a pretty thorough grounding in decision theory that’s been really useful for working in global priorities research. At a more abstract level, having worked in physics meant that I was coming into global priorities research with a much stronger mathematics background than a philosopher typically might have, and one thing that’s meant is that it’s been much easier for me to dive into interdisciplinary work with economists than some other philosophers might find it. But these reasons are quite idiosyncratic to the particular things I did before. I’d expect that you’d always find particular things from your other area of research that were useful; they’d just be different things obviously, if you had a different previous background area.

Robert Wiblin: So yeah, what is formal epistemology?

Hilary Greaves: So formal epistemology in practice more or less lines up with a Bayesian way of thinking about belief formation. Where, instead of thinking in terms of all-out belief, like, “I believe that it’s raining,” or “I believe that it’s not raining”, you talk about degrees of belief. So this is most natural in the case of things like weather forecasts, where it’s very natural to think in probabilistic terms. You know, if the question is not whether it’s raining now, but whether it will rain tomorrow, weather forecasters typically won’t say, “It will rain tomorrow.” Or, “It won’t rain tomorrow.” They say, “The chance of it raining tomorrow is 30%.” Or whatever. So in Bayesian terms, you would report your degree of belief that it will rain tomorrow as being 0.3 in that kind of context.

Hilary Greaves: So then formal epistemology is concerned with the rules that govern what these numbers should be and how they should evolve over time. Like if you get a new piece of information, how should your degree of belief numbers change over time? So the work I did on formal epistemology was mostly on a bunch of structural questions about how these degrees of belief should be organized, and how you justify the normative principles that most people think are correct about how they should be organized.

Robert Wiblin: So, isn’t it just Bayes’ theorem? I guess there are challenges in choosing priors, but what are the open questions in formal epistemology?

Hilary Greaves: Okay. So Bayes’ theorem itself is just a trivial bit of algebra. The nontrivial thing in the vicinity is the principle of conditionalization, which says: “Here’s the right way to update your degrees of belief when you get a new bit of evidence: you move from whatever your old credence function was to the one that you get by conditionalizing on the new bit of evidence, and we can write down the mathematical formula that says exactly what that means.” So there’s widespread agreement that, at least in normal cases, that is in fact the rational way to update your degrees of belief. There’s much less agreement about precisely why it’s the rational way to update your degrees of belief. So yeah, we all get the sense that if somebody updates in a completely different way, they’re weird, they’re irrational, there’s something wrong with them. But instead of just slinging mud, we’d like to have something concrete to say about why they’re irrational or what’s wrong with them.
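
For concreteness, here’s what that update looks like in a minimal Python sketch; the rain-and-clouds numbers are made up for illustration. Conditionalizing on evidence E moves your credence in H from P(H) to P(H | E) = P(E | H) P(H) / P(E):

```python
# Conditionalization on made-up numbers: on learning "dark clouds tonight",
# your new credence in each hypothesis is your old conditional credence.
prior = {"rain": 0.3, "no_rain": 0.7}       # P(H)
likelihood = {"rain": 0.8, "no_rain": 0.2}  # P(clouds | H)

p_clouds = sum(likelihood[h] * prior[h] for h in prior)  # P(E) = 0.38
posterior = {h: likelihood[h] * prior[h] / p_clouds for h in prior}

print(posterior)  # rain ~ 0.63, no_rain ~ 0.37
```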

Hilary Greaves: So some of the early work I did on formal epistemology was exploring that question, and I tried to provide, well I guess I did provide, a decision-theoretic argument for why conditionalization is the uniquely rational updating rule, based on the idea of expected utility, but importantly expected epistemic utility rather than expected practical utility (we could say a bit more about that difference in a minute). Basically, the idea was, “Okay, what you’re trying to do is end up with degrees of belief that are as accurate as possible, that conform to the facts as closely as possible, but you know you’re proceeding under conditions of uncertainty. So you can’t guarantee that your degrees of belief are going to end up accurate.” If you take a standard decision-theoretic approach to dealing with this uncertainty, where you’re trying to maximize expected value, but here it’s expected value in the sense of expected closeness to the truth, a coauthor and I were able to prove that conditionalization was the updating rule for you. Any other updating rule will perform worse than conditionalization in expected epistemic value terms.
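
The flavour of that argument can be gestured at numerically. What follows is only a toy comparison, not the proof from the paper: it scores candidate updating rules by expected closeness to the truth, using a quadratic (Brier-style) accuracy measure and the same made-up rain numbers as above, and conditionalization comes out ahead of two rival rules:

```python
# Toy comparison of updating rules by expected epistemic utility
# (negative Brier score). All numbers are invented; this illustrates
# the idea rather than reproducing the 2006 paper's general proof.
prior = {"rain": 0.3, "no_rain": 0.7}
likelihood = {"rain": {"clouds": 0.8, "clear": 0.2},
              "no_rain": {"clouds": 0.2, "clear": 0.8}}  # P(E | H)

def conditionalize(evidence):
    """Credence in rain after conditionalizing on the evidence."""
    joint = {h: likelihood[h][evidence] * prior[h] for h in prior}
    return joint["rain"] / sum(joint.values())

def accuracy(credence_in_rain, truth):
    """Negative Brier score: 0 is perfect, -1 is maximally inaccurate."""
    target = 1.0 if truth == "rain" else 0.0
    return -(credence_in_rain - target) ** 2

def expected_epistemic_utility(rule):
    """Expected accuracy of an updating rule, taken from the prior."""
    return sum(prior[h] * likelihood[h][e] * accuracy(rule(e), h)
               for h in prior for e in ("clouds", "clear"))

print(expected_epistemic_utility(conditionalize))                  # ~ -0.143
print(expected_epistemic_utility(lambda e: prior["rain"]))         # ~ -0.210: never updating does worse
print(expected_epistemic_utility(lambda e: float(e == "clouds")))  # ~ -0.200: leaping to certainty does worse
```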

Robert Wiblin: Interesting. Okay, so normally you have decision theory that’s trying to maximize expected value, where you might think about something like moral value, or prudential value, like getting the things that you want. But here you’ve redefined the goal as maximizing some kind of epistemic expected value, which is like having beliefs or credences that correspond with the world as much as possible, or where your errors are minimized.

Hilary Greaves: That’s right, yeah. So just to be clear, the claim is not that, “This should be your goal. You should live your life in such a way as to maximize the fit between your beliefs and the truth.” That would be a crazy principle. The thought was more, “Actually what we have in play here are two different notions of rationality.” There’s something like practical or prudential rationality, which is the way we normally think about maximizing value. You just decide what the stuff is that you care about and you try to maximize the expected quantity of that thing. That’s the normal notion of rationality. But on reflection it seems that we also have a second notion of rationality, which you might call epistemic rationality, which is about having your beliefs respond to the evidence in the intuitively correct kind of ways, and we wanted to work out what are the principles of this epistemic rationality thing, even when doing what’s epistemically rational might in fact conflict with doing what’s practically rational.

Robert Wiblin: Okay. Well, we’ll stick up a link to that paper. It’s epistemic decision theory, right?

Hilary Greaves: No, that paper is the 2006 one, Conditionalization Maximizes Expected Epistemic Utility.

Robert Wiblin: Oh, okay. And then the later paper on epistemic decision theory, is that just more of the same kind of thing, or is that a different argument?

Hilary Greaves: It’s different. It’s related. So there the issue was, when we think about practical decision theory, there are some cute puzzle cases where it’s not obvious precisely what notion of expected utility we ought to be trying to maximize. So for the cognoscenti here, I’m talking about things like the Newcomb problem, where there’s one action that would cause the world to get better, but would provide evidence that the world is already bad, and there people’s intuitions go different ways on whether it’s rational to do this action. Like, do you want to do the thing that gives you evidence that you’re in a good world, or do you want to do the thing that makes the world better, even if it makes it better from a really bad starting point? So this debate had already been reasonably mapped out in the context of practical decision theory, and what I do in the epistemic decision theory paper is explore the analogous issues for the notion of epistemic rationality.

Hilary Greaves: So I’m asking questions like, okay, we know that we can have causal and evidential decision theory in the practical domain. Let’s write down what causal and evidential decision theory would look like in the epistemic domain, and let’s ask the question of which of them, if either, maps on to our intuitive notion of epistemic rationality. And the kind of depressing conclusion that I get to in the paper is that none of the decision theories we’ve developed for the practical domain has an epistemic analog that performs very well.

Hilary Greaves: I say this is a kind of depressing conclusion because it seems like, in order to be thinking about epistemic rationality in terms of trying to get to good epistemic states, like trying to get to closeness to the truth or something like that, you have to have a decision theory corresponding to the notion of epistemic rationality. So if you can’t find any decision theory that seems to correspond to the notion of epistemic rationality, that seems to suggest that our notion of epistemic rationality is not a consequentialist type notion, it’s not about trying to get to good states in any sense of good states, and I at least found that quite an unpalatable conclusion.

Robert Wiblin: Yeah. So just for the people who haven’t really heard about decision theory, could you explain what are the kind of archetypal problems here that make it an interesting philosophical issue?

Hilary Greaves: Sure, okay. So there’s a well-known problem in the field called the Newcomb problem, which pulls apart two kinds of decision theory that in normal decision situations would yield the same predictions as one another about what you should do. So normally you don’t have to choose between these two different things, and you normally don’t realize that there are two different decision theories, maybe at the back of your mind, but here’s the Newcomb problem, and hopefully this will help people to see why there’s a choice to be made.

Hilary Greaves: So, suppose you find yourself in the following, admittedly very unusual, situation; you’re confronted with two boxes on the table in front of you. One of these boxes is made of glass, it’s transparent, you can see what’s in it, and the other one is opaque. So you can see that the transparent box contains a thousand pounds. What you know about the opaque box is that either it’s empty, it’s got no money in it at all, or otherwise it contains a million pounds, and for some reason you’re being offered the following decision. You either take just the opaque box, so you get either nothing or the million pounds in that case and you don’t know which, or you take both boxes. So you get whatever’s in the opaque box, if anything, and in addition the thousand pounds from the transparent box. But there’s a catch, and the catch concerns what you know about how it was decided whether to put anything in the opaque box.

Hilary Greaves: The mechanism for that was as follows; there’s an extremely smart person who’s a very reliable predictor of your decisions, and this person yesterday predicted whether you were going to decide to take both boxes or only the opaque box, and if this predictor predicted you’d take both boxes, then she put nothing in the opaque box. Whereas, if she predicted that you would take only the opaque box, then she put a million pounds in that box. Okay, so knowing that stuff about how the predictor decided what, if anything, to put in the opaque box, now what do we think about whether you should take both boxes or only the opaque one? And on reflection you’re likely to feel yourself pulled in two directions.

Hilary Greaves: The first direction says, “Well, look, this stuff about whether there’s anything in the opaque box or not, that’s already settled, that’s in the past, there’s nothing I can do about it now, so I’m going to get either nothing or the million pounds from that box anyway, and if in addition I take the transparent box, then I’m going to get a thousand pounds extra either way. So clearly I should take both boxes, because whether I’m in the good state or the bad state, I’m going to get a thousand pounds extra if I take both boxes.”

Hilary Greaves: So that’s one intuition, but the other intuition we can’t help having is, “Well, hang on, you told me this predictor was extremely reliable, so if I take both boxes it’s overwhelmingly likely the predictor would have predicted that I’ll take both boxes, so it’s overwhelmingly likely then that the opaque box is empty, and so it’s overwhelmingly likely that I’ll end up with just a thousand pounds. Whereas, if I take only the opaque box then there’s an overwhelming probability the predictor would have predicted that, so there’s an overwhelming probability in that case that I’ll end up with a million pounds. So surely I just do the action that’s overwhelmingly likely to give me a million pounds, not the one that’s overwhelmingly likely to give me a thousand pounds.”

Hilary Greaves: So, corresponding to those two intuitions, we have two different types of decision theory. One that captures the first intuition, and one that captures the second. So the decision theory that says you should take both boxes is called causal decision theory because it’s concerned with what your actions causally bring about. Your actions in this case, if you take both boxes, that causally brings about that you get more money than you otherwise would have. Whereas, if you say you should only take the one box, then you’re following evidential decision theory, because you’re choosing the action that is evidence for the world already being set up in a way that’s more to your favor.
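
As a concrete illustration of the two recommendations, here’s a toy calculation; the million and the thousand come from the story, but the 99% predictor reliability is an assumption added for the example:

```python
# Newcomb's problem in numbers. EDT weights outcomes by P(contents | action);
# CDT treats the contents as already fixed and counts only what you cause.
RELIABILITY = 0.99  # assumed P(the predictor foresaw your actual choice)
MILLION, THOUSAND = 1_000_000, 1_000

def evidential_value(action):
    """EDT: your action is evidence about what's in the opaque box."""
    p_million = RELIABILITY if action == "one_box" else 1 - RELIABILITY
    return p_million * MILLION + (THOUSAND if action == "two_box" else 0)

def causal_value(action, p_million_already_there):
    """CDT: the opaque box is already full or empty; your choice can't change it."""
    return (p_million_already_there * MILLION
            + (THOUSAND if action == "two_box" else 0))

print(evidential_value("one_box"), evidential_value("two_box"))
# about 990000 vs 11000 -> EDT says take only the opaque box.
for p in (0.0, 0.5, 1.0):
    print(causal_value("two_box", p) - causal_value("one_box", p))
# +1000 whatever p is -> CDT says take both boxes, by dominance.
```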

Robert Wiblin: Yeah. So what do you make of this?

Hilary Greaves: Well, I’m a causal decision theorist, and most people who’ve thought about this problem a lot, I think it’s fair to say, are causal decision theorists, but that’s by no means a universal thing. This problem remains controversial.

Robert Wiblin: So, my amateurish attitude to this is, well, causal decision theory seems like the right fundamental decision theory, but in particular circumstances you might want to pre-commit yourself to follow a different decision theory on causal grounds, because you’ll get a higher reward if you follow that kind of process.

Hilary Greaves: Yeah.

Robert Wiblin: Does that sound plausible?

Hilary Greaves: It’s definitely plausible. I mean, pre-committing to follow decision theory X is not the same action as doing the thing that’s recommended by decision theory X at some later time. So it’s completely consistent to say that, like in the Newcomb problem for example, if I knew that tomorrow I was going to face a Newcomb situation, and I could pre-commit now to following evidential decision theory henceforth, and if in addition the predictor in this story is going to make their decision about what to put in the box after now, then definitely, on causal decision theory grounds, it’s rational for me to pre-commit to evidential decision theory. That’s completely consistent.

Robert Wiblin: Yeah. So I find all of this a little bit confusing, because I guess I don’t quite understand what people still find interesting here. But I guess if you’re programming an AI, maybe this comes up a lot, because you’re trying to figure out what should you pre-commit to, what kind of odd situations like this might arise the most such that maybe you should program an agent to deviate from causal decision theory, or seemingly deviate from causal decision theory, in order to get higher rewards.

Hilary Greaves: Yeah, that seems right. I think the benefit that you get from having been through this thought process, if you’re theorizing in the AI space, is that you get these insights like that it’s crucial to distinguish between the act of following a decision theory and the act of pre-committing to follow that decision theory in future. If you’ve got that conceptual toolbox that pulls apart all these things, then you can see what the crucial questions are and what the possible answers to them are. I think that’s the value of having this decision theoretic background.

Robert Wiblin: Yeah. There are a few other cases that I find more amusing, or maybe more compelling, because they don’t seem to involve some kind of reverse causation or backwards causation in time.

Hilary Greaves: Wait, hang on, can I interrupt there?

Robert Wiblin: Oh, go for it, yeah.

Hilary Greaves: There’s no backwards causation in this story. That’s important.

Robert Wiblin: Yeah.

Hilary Greaves: But, you know, I’m quite good at predicting your actions. For example, I know you’re going to drink coffee within the next 10 minutes. There’s nothing sci-fi about being able to predict people’s decisions and, by the way, you’re probably now in a pretty good position to predict that I would two-box if I faced the Newcomb problem tomorrow.

Robert Wiblin: Yeah.

Hilary Greaves: You are very smart, but you didn’t have to be very smart to be in a position to make that prediction.

Robert Wiblin: Yeah.

Hilary Greaves: So there’s nothing … People often feel the Newcomb problem involves something massively science fictional, but it’s really quite mundane actually. It’s unusual, but it doesn’t involve any special powers.

Robert Wiblin: Yeah, okay. So I agree with that. On paper, it doesn’t involve any backwards causation, but I guess I feel like it’s messing with our intuitions, because you have this sense that your choice of which boxes to pick is going to cause somehow, like backwards in time, cause them to have put a different amount of money in the box. So I feel like that’s part of why it seems so difficult, it because it’s like it’s building into it this intuition that you’re causing, that you can effect today what happened yesterday. Do you see what I’m getting at? I guess if you totally disavow that-

Hilary Greaves: I think I see what you’re getting at, I just don’t think that’s a correct description of … I mean, maybe you’re right as a matter of psychology that lots of people feel that’s what’s going on in the Newcomb problem, I just want to insist that it is not in fact [crosstalk 00:27:28].

Robert Wiblin: That’s not how it’s set up. Okay, yeah. But other ones where I feel you don’t get that effect as much are the smoking lesion problem and also the psychopath button. Do you just want to explain those two quickly, if you can?

Hilary Greaves: So the smoking lesion I think is structurally very similar to the Newcomb problem, it just puts different content in the story. So the idea here is there are two genetic types of people. One type of person has the smoking lesion and the other type of person does not have the smoking lesion. What the smoking lesion predisposes you to, if you have it, is two things. Firstly, it makes it more likely that you’ll choose to smoke, and secondly it makes it more likely that you’ll get cancer, and in this story smoking does not cause cancer, and you know all of this stuff. Your decision in that problem is whether or not to smoke. And there you could have the same two intuitions as the ones I described in the Newcomb problem.

Hilary Greaves: You could think, “Well, I should smoke because, look, either I’ve got the smoking lesion or not, and nothing I do is going to change that fact and I happen to enjoy smoking, so it’s just strictly better for me either way to smoke than not.” That’s the causal decision theorist intuition. Or here the evidential decision theorist’s intuition would be, “No, I really don’t want to smoke, because look, if I smoke then probably I’ve got this lesion-

Robert Wiblin: It’s lowering my life expectancy.

Hilary Greaves: And if I’ve got this lesion, then probably I’ll get cancer. So probably if I smoke I’ll get cancer, and that’s bad. So I’d better not smoke.” I think in that problem, to my intuition, the evidential decision theorist’s story sounds less intuitively plausible, but I’m not sure why that’s the case.

Robert Wiblin: Yeah. It’s funny because the idea is that smoking in this case doesn’t in fact lower your life expectancy, but it lowers your expectancy of how long you’re going to live.

Hilary Greaves: According to one notion of expectancy, yeah.

Robert Wiblin: Yeah. So in that case you feel like it’s just more straightforwardly intuitive to do the causal thing?

Hilary Greaves: That’s my gut reaction to that case, yeah. I don’t know how widely shared that is.

Robert Wiblin: Yeah, and the psychopath button?

Hilary Greaves: Alright so, in the psychopath button case, imagine that there’s a button in front of you, and what this button does if you press it is that it causes all psychopaths in the world to die. This may, by the way, include you, if you turn out to be a psychopath, but you’re not sure whether you are a psychopath or not. Your decision is whether or not to press this button and your preferences are as follows: you’d really like to kill all the psychopaths in the world, provided you’re not one of them, but you definitely don’t want to kill all psychopaths in the world if you’re one of them. That’s a price that’s too high to be worth paying by your lights. You currently have very low credence that you’re a psychopath, but the catch is you have very high credence that only a psychopath would press this kind of button.

Hilary Greaves: Okay, so now your decision is whether to press this button or not, and the problem is that you seem to be caught in a loop. So if you press the button then, after updating on the fact that you decided to press the button, you have high credence that you’re a psychopath, so then you have high credence that pressing this button is going to kill you, as well as all the other psychopaths. So you have high credence that this is a really bad idea. After conditionalizing on the fact that you’ve decided to press the button, deciding to press the button looks like a really bad decision. But, similarly, if you conditionalize on your having decided not to press the button then, by the epistemic license that you then have, that also looks like a really bad idea, because if you decided not to press the button then you have very low credence that you’re a psychopath, and so you think pressing the button has much higher expected value than not pressing it.

Hilary Greaves: So it looks like, either decision you make, after you conditionalize on the fact that you’ve made that decision, you think it was a really bad decision.

Robert Wiblin: Yeah. What do you make of that one? Because in that case it feels like you shouldn’t press the button, to me.

Hilary Greaves: Yeah. I think this is a decision problem where it’s much less obvious what the right thing to say is. I’m kind of enamored of some interesting work by people like Frank Arntzenius who’ve argued that, in this case, you have to expand your conception of the available acts beyond just pressing the button and not pressing the button, and admit some kind of mixture, where the equilibrium can be to do something that leads to pressing the button with probability one third, or something like that. So Arntzenius’ work argues that, once you’ve got those mixed acts in your space, you can find stable points where you continue to endorse the mixed act in question, even after having conditionalized on the fact that that’s your plan.
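
A rough numerical sketch may help here. Everything below is invented for illustration (the utilities, and the assumption that your credence in being a psychopath tracks your current inclination to press), so it only shows how a stable mixture can exist, not Arntzenius’s actual model:

```python
# Toy deliberational equilibrium for the psychopath button. You nudge your
# inclination to press toward whichever act currently looks better, while
# your credence that you're a psychopath tracks that inclination.
U_KILL_OTHERS = 10   # invented: value of killing all psychopaths if you're not one
U_DIE = -100         # invented: value if you're a psychopath and you press
CRED_LOW, CRED_HIGH = 0.01, 0.90  # invented: P(psychopath | won't press / will press)

def credence_psychopath(p):
    """Credence that you're a psychopath, given inclination p to press."""
    return CRED_LOW + (CRED_HIGH - CRED_LOW) * p

def eu_press(p):
    """Expected utility of pressing; not pressing is worth 0."""
    c = credence_psychopath(p)
    return c * U_DIE + (1 - c) * U_KILL_OTHERS

p = 0.5
for _ in range(10_000):
    p = min(1.0, max(0.0, p + 0.001 * eu_press(p)))  # drift toward the better act

print(round(p, 3), round(eu_press(p), 6))
# Settles near p = 0.091, where pressing and not pressing look equally good,
# so you keep endorsing the mixture even after conditionalizing on it.
```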

Robert Wiblin: Interesting. So this is like when you pre-commit to some probability of doing it?

Hilary Greaves: Yeah. There are things that are deeply unsatisfying about that route, but I’m not sure the alternatives are very much better. We’re now into the space of interesting open problems in decision theory.

Robert Wiblin: So what is the cutting edge here, and are there other decision theories besides causal decision theory or evidential decision theory that you think have something going for them?

Hilary Greaves: Yeah, there’s a few others. So Ralph Wedgwood, one of my colleagues, well, used to be one of my colleagues at Oxford, developed a decision theory called benchmark decision theory, which is supposed to be a competitor to both causal and evidential decision theory. The paper by Frank Arntzenius I just alluded to, explores what he calls ‘deliberational variants’ on existing decision theories, in response to cases like the psychopath button. These are formalizations of what the decision theory looks like in this richer space of mixed acts.

Hilary Greaves: Then as many of your listeners will know, in the space of AI research, people have been throwing around terms like ‘functional decision theory’ and ‘timeless decision theory’ and ‘updateless decision theory’. I think it’s a lot less clear exactly what these putative alternatives are supposed to be. The literature on those kinds of decision theories hasn’t been written up with the level of precision and rigor that characterizes the discussion of causal and evidential decision theory. So it’s a little bit unclear, at least to my lights, whether there’s genuinely a competitor decision theory on the table there, or just some intriguing ideas that might one day in the future lead to a rigorous alternative.

Robert Wiblin: Okay, cool. Well, hopefully at some point in the future we might do a whole episode just on decision theory, where we can really dive into the pros and cons of each of those. But just to back up, so you were trying to then draw an analogy between these decision theories and epistemic decision theory, and then you found that you couldn’t make it work, is that right?

Hilary Greaves: So I think the following thing is the case. For any decision theory you can write down in the practical case, you can write down a structurally analogous decision theory in the epistemic case. However, when we assess whether such and such a decision theory fits our intuitions about what’s rational, there’s no guarantee that those assessments are going to be the same in the practical and the epistemic case. So it could, for example, be that when we’re thinking about practical rationality, our intuitions scream out that causal decision theory is the right approach, but when we’re thinking about epistemic rationality our intuitions scream out that somebody who updates their beliefs according to causal epistemic decision theory is behaving in a way that’s wildly epistemically irrational. There can be that kind of difference.

Hilary Greaves: So there’s the same menu of decision theory options on the table in both cases, but the plausibility of the decision theories might not match. What I was worried about, and maybe concluded, in the epistemic case is that when you write down the epistemic analogs of all the existing decision theories in the practical domain, none of them does a very good job of matching what seem to be our intuitions about how the intuitive notion of epistemic rationality works. So then I just got very puzzled about what our intuitive notion of epistemic rationality was up to, and why and how we had a notion of rationality that behaved in that way, and that kind of thing.

Robert Wiblin: Is there an intuitive explanation of what goes wrong with just causal epistemic decision theory?

Hilary Greaves: Okay, so here’s a puzzle case that I think shows that a causal epistemic decision theory fails to match the way most people’s intuitions behave about what’s actually epistemically rational versus not. So, suppose you’re going for a walk and you can see clearly in front of you a child playing on the grass. And suppose you know that just around the corner, in let’s say some kind of playhouse, there are 10 further children. Each of these additional children might or might not come out to join the first one on the grass in a minute. Suppose though … This example is science fictional, so you have to bear with that. Suppose these 10 further children that are currently around the corner are able to read your mind, and the way they’re going to decide whether or not to come out and play in a minute depends on what beliefs you yourself decide to form now.

Hilary Greaves: So, one thing you have to decide now about what degrees of belief to form is what’s your degree of belief that there is now a child in front of you playing on the grass. So, recall the setup. The setup specified that there is one there. You can see her. She’s definitely there. So, our intuitive notion of epistemic rationality, I take it, entails that it’s epistemically rational – you’re epistemically required – to have degree of belief one basically, or very close to it, that there’s currently a child in front of you.

Hilary Greaves: But, like in all these cases, there’s going to be a catch. The catch here is that if you form a degree of belief one, as arguably you should or something very close to it, that there’s a child in front of you now, then what each of these 10 additional children are going to do is they’re going to flip a coin. And they’ll come out to play if their coin lands heads, they’ll stay inside and not come out if their coin lands tails. Whereas if you formed a degree of belief zero that there’s a child in front of you now, then each of these additional 10 ones will definitely come out to play.

Hilary Greaves: And suppose that you know all this stuff about the setup. Then it looks as though causal epistemic decision theory is going to tell you the rational thing to do is to have degree of belief zero that there’s a child in front of you now, despite the fact that you can see that there in fact is one, and the reason is that, the way this scenario is set up, the way I stipulated it, if you form degree of belief zero that there’s a child in front of you now, then you know with probability one that there are going to be 10 more children there in a minute. So, you can safely form degree of belief one in all those other children being there in a minute’s time. So, when we assess your epistemic state overall, yeah, you get some negative points in accuracy terms for your belief about the first child, but you’re guaranteed to get full marks regarding your epistemic state about those 10 other children whereas, if you do the intuitively epistemically rational thing and you have degree of belief one that there’s a child in front of you now, then it’s a matter of randomness for each of the other 10 children whether they come out or not.

Hilary Greaves: So, the best you can do is kind of hedge your bets and have degree of belief a half for each of the other 10 children. But you know you’re not gonna get very good epistemic marks for that because, whether they come out to play or not, you’ll only get half marks. So, when we assess your overall epistemic state, it’s gonna be better in the case where you do the intuitively irrational thing than in the case where you do the intuitively rational thing. But causal epistemic decision theory, it seems, is committed to assessing your belief state in this global kind of way, because what you wanted to maximize, I take it, was the total accuracy – the total degree of fit between your beliefs about everything and the truth.

Hilary Greaves: So, I’ve got a kind of mismatch that, at least, I couldn’t see any way of erasing by any remotely minor tweak to causal epistemic decision theory.
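
To make the accuracy bookkeeping concrete, here’s a minimal sketch in code. It assumes a quadratic (Brier-style) scoring rule, which is one standard choice of accuracy measure; the conversation doesn’t commit to any particular rule, so the exact numbers are illustrative only.

```python
# Minimal sketch of the mind-reading-children case, scoring whole belief
# states with a quadratic (Brier-style) inaccuracy penalty. Lower is better.
# The scoring rule and numbers are assumptions for illustration.

def penalty(credence, truth):
    """Squared distance between a credence and the truth (1 if true, 0 if false)."""
    return (credence - truth) ** 2

# Intuitively rational state: credence 1 that the visible child is there (true),
# credence 0.5 in each of the 10 coin-flipping children. A 0.5 credence costs
# 0.25 however the coin lands, so the expected total is fixed.
rational = penalty(1.0, 1) + 10 * 0.25                 # 0 + 2.5 = 2.5

# Intuitively irrational state: credence 0 that the visible child is there
# (false, costing 1.0), but then all 10 children come out for certain, so
# credence 1 in each of them is guaranteed correct.
irrational = penalty(0.0, 1) + 10 * penalty(1.0, 1)    # 1.0 + 0 = 1.0

print(rational, irrational)   # 2.5 1.0 -- the "irrational" state wins on accuracy
```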

Robert Wiblin: So, it boils down to: if you form a false belief now, like a small false belief, then the world will become easier to predict, and so you’ll be able to forecast what’s going on or have more accurate beliefs in the future. So, you pay a small cost now for more accurate beliefs in the future whereas, if you believe the true thing now which in some sense seems to be the rational thing to do, then you’ll do worse later on because the world’s become more chaotic.

Hilary Greaves: It’s kind of like that. There’s an inaccuracy in what you just said that matters for my purposes, though I don’t know if this is getting into nitty-gritty researcher’s pedantry. The thing that for me is importantly inaccurate about what you just said is that it’s not about having inaccurate beliefs now but then getting accurate beliefs in the future, because if that were the only problem, then you could just stipulate, “Look, this decision theory is just optimizing for-”

Robert Wiblin: The present.

Hilary Greaves: ” … accurate beliefs now.” So, actually what you’re having to decide now is both things. You have to decide now what’s your degree of belief now that there’s a child in front of you, and also what’s your degree of belief now about whether there will be further children there in a minute. So, all the questions are about your beliefs now, and that’s why there’s no easy block to making the trade-off that seems intuitively problematic.

Robert Wiblin: Yeah. Are you still working on this epistemic decision theory stuff? Or have you kind of moved on?

Hilary Greaves: No. This is a paper from about five years ago and I’ve since moved much more in the direction of papers that are more directly related to effective altruism, global priorities type concerns.

Robert Wiblin: Okay. Yeah. Is anyone carrying the torch forward? Do you think it matters very much?

Hilary Greaves: I think it matters in the same way that this more abstract theoretical research ever matters, and I just tell the standard boring story about how and why that does matter, like … These abstract domains of inquiry generate insights. Every now and then, those insights turn out to be practically relevant in surprising ways. You can’t forecast them, but experience shows there are lots of them. You can tell that kind of story and I think it applies here as well as it does everywhere else. I don’t think there’s anything unusually practically relevant about this domain compared to other domains of abstract theoretical inquiry. If I did, I might still be working on it.

Robert Wiblin: Yeah. This may be a bit of a diversion, but what do you think of that kind of academic’s plea that, whatever they happen to be looking into, and who can say how useful it will turn out to be, all basic research is valuable, so they should just do whatever they’re most interested in?

Hilary Greaves: I think it is implausible if it’s supposed to be an answer to the question, “What’s the thing you could do with your life that has the highest expected value?” But I think it is right as an answer to a question like-

Robert Wiblin: “Why is this of some value?”

Hilary Greaves: Yeah. Why should the public fund this at all, or that kind of question.

Robert Wiblin: Do you have a view on whether we should be doing kind of more of this basic exploratory research where the value isn’t clear? Or more applied research where the value is very clear, but perhaps the upper tail is more cut off?

Hilary Greaves: Yeah. I mean, it depends partly on who “we” is. One interpretation of your question would be: do I think the effective altruist community should be doing more basic research? There I think the answer is definitely yes, and that’s why the Global Priorities Institute exists. One way of describing the most basic level of what we take our brief to be is: take issues that are interesting and important by effective altruists’ lights, and submit them to the kind of critical scrutiny and intellectual rigor that’s characteristic of academia, rather than just writing them up on effective altruist blogs and never getting them into the academic literature, or never doing the thing where you spend one year perfecting one footnote to work out exactly how it’s meant to go. We tend to do more of that last kind of thing.

Robert Wiblin: Yeah. Okay. Well, let’s push on and talk about the kind of research that you’re doing now. What are the main topics that you’re looking into and that GPI’s looking into? Or at least that you have been looking into over the last few years?

Hilary Greaves: Sure. GPI’s existed for maybe a year. It’s officially existed for a bit less than that, but that’s roughly the time scale over which we’ve had people working more or less full time on GPI issues. The first thing we did was draw up a monstrous research agenda, where we initially tried to write down every topic we could think of where there was scope for somebody to write an academic research article. That article might make rigorous an existing current of thought in the effective altruist community, where effective altruists have kind of decided what they think they believe but the argument hasn’t been made fully rigorous. Or it might tackle an interesting open question where effective altruists really don’t know, and don’t even think they know, what the answer is, but where it seems plausible that the tools of various academic disciplines – and we’re especially interested in philosophy and economics – might be brought to bear to give some guidance on what’s a more versus a less plausible answer to this practically important question.

Hilary Greaves: So, through that lens, we ended up with something like a 50-page document of possible research topics. Clearly we had to prioritize within that. What we’ve been focusing on in the initial six to eight months is what we call the long-termism paradigm, where long-termism is something like the view that the vast majority of the value of our actions, or at least of our highest expected value actions, lies in the far future rather than in the nearer term. So, we’re interested in what it looks like when you make the strongest case you can for that claim, and then also what follows from that claim, and how rigorous we can make the arguments about what follows from it.

Hilary Greaves: So for example, lots of people in the EA community think that, insofar as you accept long-termism, what you should be focusing on is reducing risks of premature human extinction rather than, for example, trying to speed up economic progress. But nobody, to my knowledge, has really sat down and tried to rigorously write down why that’s the case. So, that’s one of the things we’ve been trying to do.

Robert Wiblin: Yeah. How strong do you think the case is for long-termism? It sounds like you’re sympathetic to it, but how likely do you think it might be that you could change your mind?

Hilary Greaves: Okay, yeah. Definitely sympathetic to it. I’m a little bit wary of answering the question about how likely is it that I think I might change my mind, because even trying to predict that can sometimes psychologically have the effect of closing one’s mind as a researcher and reducing one’s ability to just follow an argument where it leads.

Robert Wiblin: I see.

Hilary Greaves: So, I’m much more comfortable in the frame of mind where I think, “Yeah, okay. Roughly speaking, I find this claim very plausible, but I know as a general matter that when I do serious research, when I sit down and try to make things fully rigorous, I end up in some quite surprising places. So it’s very important that, while I’m in that process, I’m in a frame of mind of just following an argument wherever it leads rather than some kind of partisan motivated cognition.”

Robert Wiblin: Yeah.

Hilary Greaves: So, yeah. I think it’s extremely likely that I’ll change my mind on many aspects of how to think about the problem. I don’t really know how to predict what’s the probability I’ll change my mind on the eventual conclusion.

Robert Wiblin: Sure. I guess, what are the main controversies here? What are maybe the weakest points that people push on if they’re wanting to question long-termism?

Hilary Greaves: Okay. One thing that’s controversial is how to think about discounting future welfare. So, it’s very common in economic analyses of policy recommendations, for example, to at least discount future goods, and that’s very clearly also the right thing to do, by the way, because if you think that people are going to be richer in the future, then a marginal unit of concrete material goods has less value in the future than it does today, just because of considerations of diminishing marginal utility. If you think people are going to be poorer in the future, then the reverse is true.

Hilary Greaves: So, you should either positively or negatively discount future goods relative to present ones. That’s pretty uncontroversial. What’s controversial is whether you should treat welfare in the future as having different weight from welfare in the present. Moral philosophers are more or less unanimous in the view that you should not discount future welfare, and that’s an important input into the case for long-termism. If you think you should discount future welfare at, say, an exponential rate going forward in time, then even a very small discount rate is going to dramatically suppress the value of the possible future welfare summed across all generations. So, you don’t get the overwhelming importance of the far future that you get with a zero discount rate on future welfare.
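
To see the scale of the effect, here’s a minimal numerical sketch. The rates and horizons are invented for illustration, and it uses a continuous-time exponential discount factor, which is one standard formulation:

```python
# How an exponential discount rate on welfare suppresses the far future.
# Rates and horizons below are illustrative assumptions.
import math

for rate in (0.001, 0.01, 0.03):
    for years in (100, 1_000, 10_000):
        weight = math.exp(-rate * years)   # continuous-time discount factor e^(-rt)
        print(f"rate {rate:.1%}, {years:>6} years out: weight {weight:.2e}")

# Even at 1% per year, welfare 1,000 years out is weighted e^-10, about 4.5e-05,
# and 10,000 years out about 3.7e-44. At a 0% rate every generation counts
# equally, which is what drives the overwhelming importance of the far future.
```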

Hilary Greaves: So, that would be one of them. Another salient one would be issues in population ethics. If we’re talking about premature human extinction in particular, you get the case for thinking it’s overwhelmingly important to prevent or reduce the chance of premature human extinction if you think of lives that are “lost,” in the sense that they never happen in the first place because of premature extinction, in the same way that you think of lives that are lost in the sense of being cut short, like people dying early. Suppose you think those two things are basically morally on a par: that when very valuable lives that would contain love, and joy, and projects, and all that good stuff fail to happen, that’s just as bad if people fail to be born as it is if they die prematurely. If you’re in that frame of mind, then you’re very likely to conclude that it’s overwhelmingly important we prevent premature human extinction, just because of how long the future could be if we don’t go prematurely extinct.

Hilary Greaves: Whereas suppose you think there’s a morally important sense in which a life that never starts in the first place is not a loss – that this is a victimless situation, because if premature human extinction in fact happens, then, let’s say, this person never gets born, so this person doesn’t exist. There in fact is no person we’re talking about here, no person who experiences this loss. If you’re in that kind of frame of mind, then you’re likely to conclude that it doesn’t really matter, or it doesn’t matter anything like as much, whether we prevent premature extinction or not.

Hilary Greaves: So, those would be two examples of cases where there’s something that’s controversial in moral philosophy that’s gonna have a big impact on what you think about the truth of long-termism.

Robert Wiblin: Yeah. I guess a third argument that I hear made, maybe even more than those two these days, is the question of whether the future’s going to be good on balance. So, is it worth preserving the future, or is it just very unclear whether it’s going to be positive or negative morally, even taking into account the welfare of future people? But maybe that’s less of a philosophical issue and more of a practical one, so not so much under the purview of GPI.

Hilary Greaves: I think it is under the purview of GPI. I mean, it’s an issue that has both philosophical and practical components, and the philosophical components of it would be under the purview of GPI. So, part of the input into that third discussion is going to be, “Well, what exactly is it that’s valuable anyway? What does it take for a life to count as good on balance versus bad on balance?” So, there are some views, for example, that think you can never be in a better situation than never having been born, because the best possible thing is to have none of your preferences frustrated.

Hilary Greaves: Well, if you’re never born, so you never have any preferences, then in particular you never have any frustrated preferences, so you get full marks.

Robert Wiblin: It’s the ideal life.

Hilary Greaves: If you are born, some of the things you wanted are not gonna happen, so you’re always gonna have negative marks. There are those kinds of views out there, and they’re obviously going to be unsympathetic to the claim that preventing human extinction is a good thing. And they tend to be – like that one, in my opinion – pretty wacky views as a matter of philosophy. But there’s definitely a project there of going through and seeing whether there’s any plausible philosophical view that would be likely to generate the negative value claim in practice.

Robert Wiblin: Yeah. Okay. I’m a bit wary of diving into the discount rate issue, ’cause we’ve talked about that before with Toby Ord on the show, and it seems like philosophers just sing with one voice on this topic. My background is in economics, and I feel like economists are just getting confused about this: they’re confusing an instrumental tool that they’ve started putting into their formulas with some fundamental moral issue, insofar as economists even disagree and favor a pure time discount rate.

Robert Wiblin: My impression, at least from my vantage point, is that economists are coming around to this, because this issue’s been raised enough and they’re progressively getting persuaded. Is that your perception, or …

Hilary Greaves: To some extent. I think the more foundationally minded economists tend to broadly agree with the moral philosophers on this. So for example, Christian Gollier has recently written a magisterial book on discounting, and he basically repeats the line that has been repeated by both moral philosophers and historically eminent economists such as Ramsey, Harrod, and so forth, of “Yeah, there’s really just no discussion to be had here. Technically, this thing should be zero.”

Hilary Greaves: I think there are still some interesting discussions one could have. For example, I think the concerns about excessive sacrifice are worth discussing. This is the worry that, if you really take seriously the proposition that the discount rate for future welfare should be zero, then what follows is that you should give basically all of your assets to the future. You should end up with what’s an intuitively absurdly high ratio of investment to consumption. Something needs to be said about that, and I think there are things that can be said about that. But a lot of them usually aren’t said.

Robert Wiblin: Yeah. I’m interested to hear what you have to say about that.

Hilary Greaves: It’s a discussion that I think could do with being had more often than it normally is. Philosophers tend to think about the discounting question in terms of what’s the right theory of the good, as philosophers would say. That is, if you’re just trying to order possible worlds in terms of better and worse from a completely impartial perspective, what’s the mathematical formula that represents the morally correct betterness ordering? That’s one question in the vicinity that you might be asking.

Hilary Greaves: A subtly different question you might be asking is, if you are a morally decent person, meaning you conform to all the requirements of morality but you’re not completely impartially motivated, then what are the constraints on what’s the permissible preference ordering for you to have over acts? So, it’s much more plausible to argue that you could use a formula that has a non-zero discount rate for future welfare if you’re doing the second thing – that’s much more plausible than thinking that you could have a non-zero discount rate for future welfare if you’re doing the betterness thing.

Hilary Greaves: Actually, I beg the question there, because I said you’d be thinking of betterness in terms of completely impartial value, and that closes the question. But remove the word impartial and just talk about betterness overall, and it remains, I think, substantively implausible that you can have a non-zero discount rate for future welfare. Even if you’re asking that second question, though – the one about what’s a rationally permitted preference ordering over acts, not just rationally permitted, but taking into account morality – then I think there are more subtle arguments you can make for why the right response to the excessive sacrifice argument is not to have something that looks like a formula for value but incorporates discounting of future wellbeing. It’s rather to have something that’s more like a conjunction of a completely impartial value function with some constraints on how much morality can require of you, where those constraints are not themselves baked into the value function.

Robert Wiblin: I guess I’ve never found these arguments from demandingness terribly persuasive philosophically, because I just don’t know what reason we have to think that a correct moral theory or a correct moral approach would not be very demanding. If anything, it seems it would be suspicious if we found that the moral theory we thought was right just happened to correspond with our intuitive, evolved sense of how demanding morality ought to be. Do you have anything to say about that?

Hilary Greaves: Yeah. I’m broadly sympathetic to the perspective you’re taking here, maybe unsurprisingly, but trying to play devil’s advocate a little bit. I think what the economists or the people who are concerned about the excessive sacrifice argument are likely to say here is like, “Well, in so far as you’re right about morality being this extremely demanding thing, it looks like we’re going to have reason to talk about a second thing as well, which is maybe like watered down morality or pseudo morality, and that the second thing is going to be something like-”

Robert Wiblin: What we’re actually going to ask of people.

Hilary Greaves: Yeah. The second thing, it’s gonna be something like what we actually plan to act according to or what is reasonable to ask of people, or what we’re going to issue as advice to the government given that they don’t have morally perfect motivations or something like that and then, by the way, the second thing is the thing that’s going to be more directly action guiding in practice. So yeah, you philosophers can go and have your conversation about what this abstract morality abstractly requires, but nobody’s actually going to pay any attention to it when they act. They’re going to pay attention to this other thing. So, by the way, we’ve won the practical argument.

Hilary Greaves: I think there’s something to that line of response. We shouldn’t be dismissive of where they’re coming from here.

Robert Wiblin: I think that does make some sense. Okay. You’ve written a long review article about discount rates that we’ll stick up a link to if people are interested in exploring this more. Let’s talk now about population ethics, which is another topic that you’ve written a long summary article on. I guess you’ve already explained at least one of the debates within population ethics: whether creating more people is good, and whether we should value the lives of people who don’t exist yet equally with those of people who are alive now.

Robert Wiblin: What do you see as the main controversies or main uncertainties in population ethics and where did you end up standing after reviewing that literature?

Hilary Greaves: Sure. Okay. In slightly more rigorous terms than the way I presented it a minute ago, the basic question for me in population ethics is, if you’re trying to compare states of affairs that differ from one another over the number of people who ever get to exist, for example, a scenario where humanity gets to exist for another billion years versus a scenario where it goes extinct in 100 years’ time, what’s the right ordering of those possible worlds relative to one another in terms of better and worse?

Hilary Greaves: So, when you first ask this question, the two maybe most obvious answers people might give are these. The first is so-called total utilitarianism: the thing we’re trying to maximize here is total welfare, summed over all people. So, among other things, other things being equal, more people is better, provided they have positive wellbeing – lives that are worth living.

Hilary Greaves: So, that’s the first option, total utilitarianism in the variable population context. It comes radically apart from a second way of ordering these worlds, which you might call average utilitarianism, where the quantity you’re trying to maximize is not total welfare added up across all people, but average welfare – total welfare divided by the number of people. You can see how these come apart in a scenario where, for example, you have to pay some significant cost now in order to make it the case that humanity survives for longer. In at least some variants of that scenario, the average utilitarian is going to say, “No, this is not a cost worth paying. You drag the average down too much,” whereas the total utilitarian is very likely to say, “Basically any cost you come up with is going to be worth paying if it means we get to extend the future of humanity by far enough.”
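
As a minimal sketch of the two orderings, with invented population sizes and welfare numbers:

```python
# The two most obvious population axiologies, applied to a toy case.
# All population sizes and welfare levels are invented for illustration.

def total(welfares):
    return sum(welfares)

def average(welfares):
    return sum(welfares) / len(welfares)

A = [90] * 100    # humanity ends soon: 100 people at high welfare
B = [60] * 1000   # a costly effort extends humanity: 1,000 people at lower welfare

print(total(A), total(B))       # 9000 vs 60000 -- total utilitarianism prefers B
print(average(A), average(B))   # 90.0 vs 60.0  -- average utilitarianism prefers A
```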

Hilary Greaves: So, that’s the basic question. When you dig into the details of those theories, you come up with scenarios where total utilitarianism gives a betterness verdict that strikes most people as radically counterintuitive, but then you can also come up with other scenarios where average utilitarianism generates a verdict that most people regard as radically counterintuitive. And furthermore, every other alternative theory you try to write down – maybe it’s not total, maybe it’s not average, maybe “I’ve got this other great theory three” – it turns out that theory three is also going to have some radically counterintuitive conclusions. So the history of population ethics over the last 30 years has been roughly this: first, people tried to find a theory that has no counterintuitive conclusions; then they realized that this provably can’t be done. We now have so-called impossibility theorems, where people write down a list of intuitive desiderata, like, “I want my ideal theory to have the following six features,” and then you have a mathematical theorem showing that there is no theory, anywhere in mathematical space, that has all of these features.

Hilary Greaves: So, we now understand that population axiology is a case of choosing your poison. You have to decide of your initial intuitive desiderata which one you’re least unwilling to give up, and then that will guide your choice of theory. I think you asked me which my favorite theory was at the end of the day. It’s total utilitarianism.

Robert Wiblin: Yeah. Me too. But yeah, maybe you could describe the different poison pills that you might consider taking. What are some other semi-plausible theories?

Hilary Greaves: Okay. Those are two different questions, I think. Let me take the first one: the different poison pills. If you accept total utilitarianism, then you’re committed to the so-called repugnant conclusion. That is, of course, a totally question-begging name – to many people this conclusion isn’t repugnant – but whatever. Anyway, here’s the thing you’re committed to. Consider any state of affairs you like and imagine it to be as good as you like, so you can imagine there existing any number of people and you can imagine their lives being arbitrarily good. Call that state of affairs A. Write down the amount of total welfare that you get in state of affairs A.

Hilary Greaves: Now I can easily come up with an alternative, at least metaphysically possible, state of affairs – it usually gets called Z in the literature, so let’s call it Z – which has the following two features. Feature one: nobody has a life that contains more than 0.00001 units of welfare. Everybody has a life that’s barely worth living, as we say. Feature two: state of affairs Z has higher total welfare than state of affairs A. Clearly, I can easily generate a state of affairs that has these two features just by making the population in Z large enough. If Z has 10 trillion trillion trillion, etc. people, then by iterating the trillions I can eventually make the total welfare of Z larger than whatever you said the total welfare of A was.

Hilary Greaves: Okay. So, here we have a situation where you’ve imagined whatever you thought of as an extremely good state of affairs A, and I’ve generated one that, according to total utilitarianism, is better, but in which no individual has a life that’s more than barely worth living. Most people find this conclusion repugnant, so most people take it to be very strong evidence against the truth of total utilitarianism. Not everybody, but for most people that’s their initial reaction.
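
The arithmetic of the construction, with invented numbers:

```python
# Back-of-the-envelope version of the A-versus-Z construction.
# All numbers here are invented for illustration.

pop_A, welfare_A = 10_000_000_000, 100.0   # world A: 10 billion excellent lives
total_A = pop_A * welfare_A                # total welfare of A = 1e12

welfare_Z = 0.00001                        # Z lives are barely worth living
pop_Z_tie = total_A / welfare_Z            # population at which Z ties A
print(f"{pop_Z_tie:.0e} people")           # 1e+17 people

# Any Z-population above that threshold gives Z higher total welfare than A,
# so total utilitarianism ranks Z above A: the repugnant conclusion.
```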

Hilary Greaves: Okay. So, suppose you’re convinced by that. Suppose you decide, “Right. Total utilitarianism can’t be true, then.” Where else might you look? You might try average utilitarianism. But that just commits you to a different poison. Suppose now you have two states of affairs, let’s call them this time A and B, and suppose that there’s some set of people that exist both in A and in B – a common subpopulation, that is to say. And suppose that for these people, life is exactly as good in B as it is in A. There’s nothing at stake for this subpopulation in the decision between A and B. What’s the difference between A and B? Well, the difference is in what you add to this common subpopulation.

Hilary Greaves: In state of affairs A, you add a large number of people who have lives that are worth living – they’re positive welfare lives, but they’re a lot less good than the lives of the common subpopulation. In contrast, in state of affairs B, you add a much smaller number of additional people who live lives of just unmitigated misery, pain, and torture. These are people who would really prefer that they’d never been born. Their lives have negative welfare. What’s the problem here for average utilitarianism? Well, the problem is that, clearly, I take it, state of affairs A is better than B, because to get state of affairs A you’ve added some people who are glad to be alive, and to get state of affairs B you instead added some people who wish they’d never been born. So, A’s got to be better than B.

Hilary Greaves: But because the sizes of the added subpopulations are different, it can easily be the case that the large number of positive welfare people you added to A dragged A’s average down by more than the small number of people with negative welfare that you added to B dragged B’s average down. That is to say, B might have higher average welfare than A, and therefore average utilitarianism would have to prefer B to A, which is intuitively clearly the wrong result. So, that’s one of the standard examples of the poison for average utilitarianism.
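
Numerically, with invented population sizes and welfare levels:

```python
# Sketch of the A-versus-B case against average utilitarianism.
# All numbers are invented for illustration.

common = [100] * 100            # shared subpopulation, unaffected either way

A = common + [10] * 1000        # A: add many lives worth living (welfare +10)
B = common + [-50] * 5          # B: add a few miserable lives (welfare -50)

def average(welfares):
    return sum(welfares) / len(welfares)

print(round(average(A), 1))     # 18.2 -- the many modest lives drag the average down
print(round(average(B), 1))     # 92.9 -- five miserable lives barely dent it

# Average utilitarianism ranks B above A, even though B differs from A only
# by adding people who wish they'd never been born.
```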

Robert Wiblin: It seems like average utilitarianism gets into severe problems whenever it goes into negative territory, because you can imagine a world where everyone is living a very bad life, and then you add some more people who are living unpleasant lives, but not quite as bad as the other people, and that’s pulling up the average, and therefore it’s desirable to add them, which I think-

Hilary Greaves: That’s correct.

Robert Wiblin: … kind of everyone thinks is pretty unappealing.

Hilary Greaves: Okay, good, that’s probably a better example to illustrate the point, ’cause it’s a lot simpler and it makes basically the same point.

Robert Wiblin: Yeah. I guess it also means that whether we should, say, have children or not kind of depends on how many aliens there are elsewhere in the universe and how well their lives are going. That would be very morally relevant.

Hilary Greaves: That’s right. And also it depends on how things went in the distant past.

Robert Wiblin: Yeah. Are there variations on average utilitarianism that try to rescue it from these problems?

Hilary Greaves: Not really, no. People try to write down theories that are some kind of hybrid between total and average utilitarianism that are supposed to have the good features of each theory and the bad features of neither, but it doesn’t really work.

Robert Wiblin: Yeah. I think someone actually forwarded me something called market utilitarianism recently, which was meant to be some mix of total and average, but it just seemed quite odd to me. I didn’t understand the appeal at all.

Hilary Greaves: Okay. I’m not familiar with that term.

Robert Wiblin: I think it was non-population-ethicists working on this. Anyway, okay. So, total is unappealing to quite a lot of people, and average seems to have maybe even more severe problems. So, what other options are there?

Hilary Greaves: It’s a good question. You can try the thing we just mentioned: writing down a theory that’s some kind of mathematical combination of total and average utilitarianism. That doesn’t really work, because you either end up still having all the problems faced by average utilitarianism, at least at sufficiently large populations, or you end up with a theory that’s radically inegalitarian. That is, it manages to avoid the repugnant conclusion and the other problems we discussed for average utilitarianism by being radically pro-inequality between people, so that the very best-off people count as much more morally important than the worst-off people, or something like that.

Hilary Greaves: And that’s … even in the fixed population context, never mind population ethics. That’s a position that nobody considered holding until population ethics came along, and for pretty good reason.

Robert Wiblin: Yeah. Isn’t that perfectionism, this idea that like how good the world is depends on kind of how well the best person’s life went? Or they should get some special consideration?

Hilary Greaves: Yeah, right.

Robert Wiblin: Okay. Yeah, no, I don’t find that appealing, either. It’s interesting. I imagine that is like people whose lives are going very well who advocate for that, but yeah. I thought you might say that there’s attempts to make the person affecting view more palatable-

Hilary Greaves: Yeah, I was gonna say that thing next.

Robert Wiblin: Yeah, okay. Let’s talk about person affecting views.

Hilary Greaves: Okay. All right. So, person affecting views try to make rigorous sense of the idea that bringing an extra person into existence is neither good nor bad for that person and, furthermore, some principle like the following is true. If state of affairs B is better than A, then it must be better than A for at least one person. So, in this kind of view, if A and B just differ by the addition of extra people, so if there’s a common subpopulation and then in B you’ve added some people who don’t exist in A then, even if these extra people have positive welfare, a person affecting theorist will say “Well, B can’t be better than A if the welfare of the common subpopulation is the same in A and B, because these extra people that you’ve added in B don’t count as ever being benefited by being brought into existence. B isn’t better than A for them,” according to somebody who’s thinking in the spirit of person affecting theory.

Hilary Greaves: The problem with this project is just that, when you actually try to write down the ordering of states of affairs in a variable population context that the person affecting theorist is trying to advocate, it’s really hard to do it in a way that doesn’t run into either inconsistency – in the sense of, say, cycles, where you can go round in a circle: A is better than B is better than C is better than A – or commitment to massive incomparabilities. So, some version of person affecting theory will say things like: if A and B have different numbers of people in them, say in A precisely one thousand billion people are ever born and in B precisely one thousand billion and one people are ever born, then A and B are incomparable in terms of betterness. That is, it’s not the case that A’s better than B, it’s not the case that B is better than A, and it’s also not the case that they’re equally good. They just can’t be compared.

Hilary Greaves: So, you can go for a theory that says that, but then that’s of course really implausible if we add that, in A, everybody has lives of bliss and, in B, everybody has lives of torture. Clearly we don’t want that much incomparability. So yeah, there are some interesting-sounding ideas that have the person affecting label attached to them, but it’s very unclear what the theory is there. And whenever somebody actually tries to write down a theory matching person affecting ideas, it turns out to look crazy for one or another reason.

Robert Wiblin: Yeah. Is there any way of making intuitive why these theories either produce incomparability or contradictions, or like very odd results?

Hilary Greaves: Well, we could make some steps in that direction. I mean, one thing you can do is try and think through what it would look like if you tried to make rigorous the claim that if you add an extra person, then you make things not better, not worse. You leave them exactly equally as good as they were before. So, suppose we say that’s the principle we’re gonna try and develop into a theory. Then you end up conflicting with the so-called Pareto principle, because take your status quo state of affairs A, now we’ll augment A in two different ways. First, we’ll create state of affairs B1 and then separately we’ll create state of affairs B2. We create B1 by adding a person who has welfare level say 100, and we create B2 by adding a person who has a lower welfare level, let’s say 50.

Hilary Greaves: But in both B1 and B2, everybody who already exists in A has the same welfare level that they had in A. So in particular, B1 and B2 agree with one another on the welfare of everybody except the additional person. Okay, so now what do we get? Well, when we compare B1 and B2, it’s obvious by the Pareto principle that B1 is better than B2, because you’ve got a bunch of people for whom nothing’s at stake, and then you’ve got one person who’s better off in B1 than in B2. So, B1 has to be better than B2.

Hilary Greaves: But, yet, the principle we were trying to defend said that B1 is exactly as good as A, and it also said that B2 is exactly as good as A, so now by transitivity of equally-as-good-as, B2 is exactly as good as A, is exactly as good as B1, so B1 and B2 have to be equally as good. But, hang on, we just said B1 is better than B2, so we have a contradiction. So, that’s one example of how this maybe initially plausible sounding principle ends up running into structural trouble.
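
Schematically, writing ∼ for “equally as good as” and ≻ for “better than,” the contradiction runs:

```latex
\begin{aligned}
&\text{Mere addition:} && A \sim B_1 \quad\text{and}\quad A \sim B_2 \\
&\text{Transitivity:}  && B_1 \sim A \;\text{and}\; A \sim B_2 \;\Rightarrow\; B_1 \sim B_2 \\
&\text{Pareto:}        && B_1 \succ B_2 \quad\text{(same people; the added person is at 100 in } B_1\text{, 50 in } B_2\text{)} \\
&\text{Contradiction:} && B_1 \sim B_2 \;\text{and}\; B_1 \succ B_2 \;\text{cannot both hold.}
\end{aligned}
```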

Robert Wiblin: How do people who support the person affecting view deal with cases where we expect that there will be people? Like, I’m definitely gonna have a child; they don’t exist yet; and I could do something now that would make this child have a better life or a worse life. But they’re not alive today. It seems like you want to have the intuition that if I can do something today that will make this child, who almost certainly will exist in some form, have a better life, that would be a good thing to do, or it could be a good thing to do. But it seems like, if you’re strongly committed to the person affecting view, only people who are alive right now matter. They would have to say, “No, there’s nothing that you could do now to benefit your future child that would be morally good.”

Hilary Greaves: Okay, good. That question highlights the fact that there’s another dimension on which you’ve got a choice point in how you make the person affecting view precise. You’re interpreting it to mean that only people who are alive today have moral importance and, therefore, there’s nothing I should do motivated by making things better for my future child. That’s the so-called presentist theory. That theory’s actually really implausible quite aside from considerations of population ethics, when we think about it.

Hilary Greaves: So, we were talking about discounting: should we assign the same moral weight to people in the future as we do to people in the present? This is a theory that assigns zero moral weight to people in the future. That’s the most extreme version of discounting you could possibly come up with. That kind of theory is going to generate conclusions that are intuitively completely crazy, because it’s going to say things like: if you can bury toxic waste in one of two ways, one of which will be completely unproblematic in 1,000 years’ time, and one of which will condemn everyone to lives of pain and torture in 1,000 years’ time, then there’s nothing to choose between them. It doesn’t matter. These people have no moral importance. Do whatever you like. That’s, I take it, completely crazy. If the safe option costs us no more, at least, then there’s an extremely strong case for doing the thing that’s safe. So that’s, on a more intuitive level, why the presentist version of person affecting theory is not plausible.

Hilary Greaves: You might try and rescue the theory by dropping the presentism bit of it, so you might say, “Look, I never meant to say that it’s only presently existing people that have moral importance. I meant to say something like: it’s only people who are going to exist regardless of which decision I make that have moral importance.”

Hilary Greaves: If you’re in a decision situation where whether or not this person exists depends on which decision I make now, in that situation you might say the interests of this person don’t plug into the algorithm for how I should make my decision. That way of thinking has a bit more of a subtle relationship to your question about things that might affect your future children’s welfare.

Hilary Greaves: Because some things you might do now make your future child better off without changing the fact that they get born, maybe, but a lot of things you might do now to affect the welfare of your future child will affect not only the welfare of your future child but also which future child you have. Basically anything. Anything that affects which sperm wins the race changes the identity of your future child, and that’s pretty much everything you do.

Robert Wiblin: Because if I’m delayed by a second then it’s a different sperm probably.

Hilary Greaves: Yeah.

Robert Wiblin: That’s the non-identity problem, right?

Hilary Greaves: That’s right.

Robert Wiblin: Which just given the world as it is today, the identity of all future people is incredibly fragile. Even tiny changes to the world now seem to change the identities of almost all future generations.

Hilary Greaves: Yeah.

Robert Wiblin: Is that a fundamental problem with the theory, or is it just that it doesn’t gel very well with how humans reproduce? Could you imagine a future world in which reproduction works differently, where the identities of future agents, comparing across different worlds, are not so fragile, and then the theory doesn’t look so unappealing?

Hilary Greaves: Yeah. You could try that. I mean, you could say, “I’ve got a person-affecting theory that generates very implausible results in worlds like ours, but generates quite plausible results in some subset of possible worlds that doesn’t include ours.” It’s very hard to see why that would be reassuring. There’s a nearby thing that people sometimes do try to advocate. Moral philosophers generally assume that, in order to be acceptable, a moral theory has to generate plausible results across all possible worlds.

Hilary Greaves: Some people want to push back on that and say, “No. No. It’s enough if my theory generates plausible results in the actual world and in worlds that are reasonably practically obtainable by things we could do in the actual world.” I think there’s a sensible discussion to be had about whether that kind of restriction is satisfying. A restriction to a subset of possible worlds that does not include the actual world is, I take it, not really going to help any theory.

Robert Wiblin: Yeah. I guess a non-identity problem does seem to be a deal-breaker, I imagine. It’s not consistent with the intuitions of people who wanted to put forward the person-affecting view to begin with, but I guess I feel slightly dishonest pushing that because I don’t think that is the reason that I would reject it.

Robert Wiblin: Imagine that they tried to rescue it by saying, “Well, it’s your child in either case, so it’s close enough. Even though they will be different people because a different sperm produced them, I’m going to say that in these two situations they still count as the same person, and so you can compare.” Actually, do people try doing that?

Hilary Greaves: Yes. There are more plausible versions of person-affecting view, including some very good recent work to pursue that kind of line. I think the result there is a much more plausible version of the theory.

Robert Wiblin: What do you make of these new attempts to rescue person-affecting views by having a perhaps looser sense of identity?

Hilary Greaves: I think some of the work that’s been done in this space is quite promising. I’m particularly impressed by a paper on something called saturating counterpart relations by Chris Meacham at UMass. He has a sort of quite complicated technical formula for how you’re supposed to line up the people in one state of affairs with the people in another state of affairs when the populations don’t involve exactly the same people but there’s maybe a more or less natural correspondence you could draw between the two populations.

Hilary Greaves: I think the theory he ends up with there is the best theory that I’ve seen in person-affecting spirit. I’m not quite sure what to make of it in terms of overall assessment. I think what he ends up saying at the end of the paper is, “Look, I myself am not really convinced this is a very good theory. I’m just claiming it’s the best person-affecting theory that there is.” I think I’d share that assessment.

Hilary Greaves: It’s definitely worth looking into, but I think having thought through the … The impossibility theorems we talked about earlier still apply here. It remains the case that every mathematically consistent theory of population ethics will have some poison bullet it has to bite, and this is true of the Meacham person-affecting theory no less than it’s true of every other theory. I think my view at the end of the day is the repugnant conclusion is not actually that bad once you’ve seen what the alternatives are.

Hilary Greaves: I don’t think that’s a cut and dried issue. I can see reasonable people going different ways on that.

Robert Wiblin: We’ll stick up a link to that paper so people can check it out. I guess I feel like, with the repugnant conclusion, what’s going on psychologically is an inability to empathize with many different agents simultaneously. We can put ourselves in the shoes of one agent who has a very high level of welfare, but we just can’t imagine being a million agents who collectively have the welfare of that one very happy agent. It’s a trick that’s being played on us, where we can imagine depth of goodness but not width of agency.

Hilary Greaves: I think that’s very plausible as a matter of the psychology. I think maybe another thing that’s going on is when you’re presented with this scenario of life barely worth living, it’s kind of hard to maintain an intuitive grip on the idea that this is a positive thing. Life barely worth living is meant to be worth living. It’s meant to be positive, but your overwhelming intuitive reaction to it is, “That’s really depressing. I hope life will be so much better than that.” I think it’s very hard to screen that off from your thinking about the so-called Z world.

Robert Wiblin: I think another thing that’s going on is people are risk-averse about these things, and having a welfare level that’s so close to zero, that’s so close to easily becoming negative feels pretty unappealing. That you’d want to have a buffer between your welfare level and a life that would be worse than not being alive.

Hilary Greaves: That might well be part of it also.

Robert Wiblin: Is there any way that you’d want to get people to imagine what a life just worth living looks like that is more appealing than, perhaps, what they imagine when they’re feeling like the repugnant conclusion is repugnant?

Hilary Greaves: Well, there’s a bunch of different ways you can do it. In fact there are interestingly different types of thing that would count as a life barely worth living, at least three interestingly different types. It might make a big difference to somebody’s intuitions about how bad the repugnant conclusion is, which one of these they have in mind. The one that springs most easily to mind is a very drab existence where you live for a normal length of time, maybe say 80 years, but at every point in time you’re enjoying some mild pleasures. There’s nothing special happening. There’s nothing especially bad happening.

Hilary Greaves: Parfit uses the phrase muzak and potatoes, like you’re listening to some really bad music, and you have a kind of adequate but really boring diet. That’s basically all that’s going on in your life. Maybe you get some small pleasure from eating these potatoes, but it’s not very much. There’s that kind of drab life.

Hilary Greaves: A completely different thing that might count as a life barely worth living is an extremely short life. Suppose you live a life that’s pretty good while it lasts but it only lasts for one second, well, then you haven’t got time to clock up very much goodness in your life, so that’s probably barely worth living.

Hilary Greaves: Alternatively you could live a life of massive ups and downs, so lots of absolutely amazing, fantastic things, lots of absolutely terrible, painful, torturous things, and then the balance between these two could work out so that the net sum is just positive. That would also count as a life barely worth living. It’s not clear that how repugnant the repugnant conclusion is is the same for those three very different ways of thinking about what these barely worth living lives actually amount to.

Robert Wiblin: Actually, what about just a normal human life, where there are normal ups and downs, but they’re finely balanced, and the positives just barely outweigh the negatives? I would say many people feel like their welfare level overall is fairly close to zero but mildly positive, and their lives have all of the richness that people associate with a normal human life.

Hilary Greaves: Maybe you’re suggesting that actually quite a lot of us are in a predicament that’s pretty much that third one. [crosstalk 01:18:43] I sort of feel those people are maybe not giving the positives their due. I mean, it’s maybe a matter of temperament how you assess that.

Robert Wiblin: I just think once you start talking about torture in your life, then I think it brings in other moral intuitions about how very bad experiences might be very hard to outweigh.

Hilary Greaves: That’s a fair comment.

Robert Wiblin: Then take just a normal human life: I think many people probably have very high welfare, but some people are going to be negative, and some are going to be close to zero, on this kind of consequentialist view. We’ve canvassed the main theories within population ethics that people give credence to. It seems like, given that philosophy is fairly divided across these, we should be somewhat uncertain ourselves, and moral uncertainty is one thing that you’ve looked into a lot.

Robert Wiblin: You were talking earlier about the parliamentary view of how you weigh up different theories, given different credences attached to them. What do you think we ought to do in practice given that we’re uncertain between these different theories and potentially other theories that we haven’t even yet thought of that might be more appealing?

Hilary Greaves: I think there are two sensible things one can do here, not mutually exclusive. One is to find situations where there’s a lot more at stake according to one theory than there is according to another theory. In an extreme case you might find a situation where some theories say A, B, and C are equally good, and the other theories say A is much better than B and C. That seems like a situation where under uncertainty you should do A rather than B or C, even though it might be that there’s nothing to choose.

Hilary Greaves: It might be there’s nothing to choose, but it might be that A was the right thing to do, so let’s go with A. In a more controversial example of that, if you’ve got a situation where some theories say B is a little bit better than A but it doesn’t really matter, and the other theories say A is massively better than B, that also seems like a situation where depending on how the credences pan out and so forth, under uncertainty it’s appropriate to go with A. That’s one kind of thing you can do.
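
One way to formalize this kind of dominance reasoning is expected choice-worthiness: maximizing the credence-weighted value of an option across theories. A minimal sketch, with invented credences and scores, setting aside the real difficulty of comparing units of value across theories:

```python
# Expected choice-worthiness across moral theories. The credences, scores,
# and the assumption that scores are comparable across theories are all
# illustrative; intertheoretic comparisons are themselves controversial.

credences = {"theory_1": 0.5, "theory_2": 0.5}

choiceworthiness = {
    "theory_1": {"A": 10, "B": 10, "C": 10},   # theory 1: A, B, C equally good
    "theory_2": {"A": 100, "B": 5, "C": 5},    # theory 2: A vastly better
}

for option in ("A", "B", "C"):
    ev = sum(credences[t] * choiceworthiness[t][option] for t in credences)
    print(option, ev)   # A 55.0, B 7.5, C 7.5 -- A wins under uncertainty
```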

Hilary Greaves: Another kind of thing you can do is, I mean this depends on the extent to which you get to choose your decision problem, but if you can find decision situations where there’s basically unanimity across moral theories, then those are the easy cases. Even under uncertainty we at least know what to do in those cases.

Hilary Greaves: These are all the easy cases, and I think there are just unavoidably going to be much more problematic cases where what it’s appropriate to do under moral uncertainty is going to depend more sensitively on issues about how to handle moral uncertainty that are themselves controversial. I’ve given you the easy ones, but I’m not denying that there are also harder ones.

Robert Wiblin: I guess to bring it back to long-termism, which is where we started, there’s some theories under which long-termism is exactly right and a very important consideration, and other ones under which the long term isn’t important, but it’s also probably not bad to work on, except in as much as you neglect the present. I guess in that case it sounds like you’re saying, “Well, you would give a lot of weight to the long term, because it’s either positive or neutral.”

Robert Wiblin: I suppose there could be other theories under which long-termism is actively bad, and in that case they’d cancel out against the positive ones, and it’d be a more difficult case of working out what comes out of a moral uncertainty approach.

Hilary Greaves: I think actually the so-called easy case is harder than that gloss gives it credit for maybe, because it’s not enough to establish that under uncertainty the long term is a good thing to work on. As we know very well, we’ve got resource constraints and it’s a competition, so what we actually need to know is whether it’s …

Robert Wiblin: It’s the balance.

Hilary Greaves: … better under uncertainty to work on long-termism than it is to work on, say, global poverty. There I think there’s an interesting and open question of how theory-sensitive the answer to this question is. One of the things we’d like to do at GPI is devote a lot more attention to the question of what your views on long-termism are likely to be if you deviate from the classical constellation of stereotypical utilitarian views in one way or another.

Hilary Greaves: If you think we should discount future welfare, or if you think that the right theory of population ethics is other than total utilitarianism.

Robert Wiblin: Or you care about justice or fairness.

Hilary Greaves: Yeah. For example: how many such deviations do you have to make before, in practice, in the world as we find it, you’re led away from the conclusion that influence on the long term is the dominant factor for morally laudatory decision making?

Robert Wiblin: Another philosophical issue that I know you’ve looked into is the problem of moral cluelessness. I spoke about that with Amanda Askell a couple of episodes back, and she described it as this problem where you know that you’re going to have big, morally important effects on the long-term future, but you don’t have any idea what they are or whether they’re going to be very positive or very negative, and maybe also that it’s very difficult to figure out what they’re going to be and what value they have. Is that a good way of summarizing the problem of cluelessness?

Hilary Greaves: It’s a good first pass. I think the important issues are a bit more subtle than that, because if the problem was just we don’t know what the effects are going to be, then we haven’t said enough to see why it’s not an adequate answer to just say, “Yeah, sure, there’s uncertainty, so do expected-value theory and there’s your answer.”

Hilary Greaves: When I was thinking about cluelessness, I was worrying about things that wouldn’t adequately be answered by saying, “Well, expected-value theory is the way I deal with uncertainty.” Those were cases where it’s not just that you know that there are going to be large effects of what you do now in the far future, and you don’t know what those effects are going to be.

Hilary Greaves: It’s, furthermore, that you know there are going to be large effects in the far future. You think there’s a good reason to think they’re even going to dominate the expected-value calculation but, because it’s unclear what your credences should be, it’s unclear whether the way in which they should dominate your expected-value calculation is to massively favor doing A, or instead to massively favor doing B. Those are cases where I think even once one’s internalized the lessons of expected utility theory for decision making, you can still feel paralyzed in practical decisions.
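
A minimal numerical sketch of that predicament, with every number invented: the far-future term dominates the expected-value calculation, but because it’s unclear what credence to put on the far-future effect being good, the overall verdict flips sign across the range of reasonable credences:

```python
# Why "just use expected value" doesn't settle the hard cluelessness cases.
# All numbers below are invented for illustration.

near_term_benefit = 1.0       # predictable near-term good, in some common unit
long_run_stake = 1_000.0      # size of the far-future effect, good or bad

# Suppose reasonable credences that the far-future effect is positive
# (rather than negative) span this range:
for p_good in (0.45, 0.50, 0.55):
    ev = near_term_benefit + long_run_stake * (p_good - (1 - p_good))
    print(p_good, ev)   # -99.0, 1.0, 101.0 -- the verdict flips across the range
```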

Hilary Greaves: I got interested in this because I think that effective altruists in particular face this predicament quite a lot in deciding, for example, whether to fund malaria nets or deworming, or, indeed, whether to fund either of those things at all.

Robert Wiblin: The problem is not just that we don’t know what they are, but that we don’t have any sensible way as far as we can see to attach probabilities to these different outcomes.

Hilary Greaves: Yeah. Roughly.

Robert Wiblin: It’s more of an epistemic issue.

Hilary Greaves: That’s right.

Robert Wiblin: How is the epistemic situation here any different from what we face just all the time? Why is it that it’s hard to give sensible credences in this situation but not in others? Is it because we don’t get any feedback on what impacts we’re having?

Hilary Greaves: Are you contrasting, say, effective altruist decision making with more personal prudential decision making or what’s the contrast you have in mind?

Robert Wiblin: I guess I’m just trying to figure out why we have the problem of cluelessness with long-termism but not in other cases. I’m trying to get at the core of the issue, the difficulty of forming proper credences.

Hilary Greaves: Maybe I should talk through a bit what seemed to me the relevant contrast between the cases where I think lots of people have worried that there is a problem, but actually I think there’s no problem, on the one hand, and the cases where even I think there’s a problem on the other hand.

Robert Wiblin: Walk us through that.

Hilary Greaves: Sure. Here are some cases where some people have argued there’s a problem, but I ended up thinking there isn’t really. Each of our actions, even our most trivial actions like clicking a finger or deciding whether to cross the road, is going to have significant effects on the far future in ways that we’re completely unable to predict. The case for that claim is most persuasively made, I think, by noting that even our most trivial actions affect the identities of future people.

Hilary Greaves: If I decide to, say, I don’t know, throw a tennis ball across the street in the path of an oncoming car, or help an old lady across the road, versus not do those things, then by a number of mundane causal processes I’m going to affect which people exist in the future. Why is that? Well, it’s because of the extreme sensitivity of who exists in the future to things like the precise timing of conceptions.

Hilary Greaves: If I decide to help somebody across the road, then I slightly change the timing of everything else that person does during their day, including slightly changing the timing of the interactions that that person has with all the other people that they may or may not meet during the rest of their day. Eventually these causal links are going to reach out to people who are destined to conceive a child on the day in question.

Hilary Greaves: If I make it the case that that child is conceived a few fractions of a second earlier or later then I’ve changed the identity of which child gets conceived, and therefore downstream now looking forward to that future child’s life, my choosing to help the person cross the road or not has made the difference between all the things that the actual child does in their life and all the things that the merely possible child – who didn’t in fact exist because of what I decided to do – would have done in their life if they had existed.

Hilary Greaves: That’s the basic case for thinking that, even in the case of the most trivial actions like throwing a tennis ball or helping someone to cross the road, in objective terms the actual effects of my actions consist much more in the completely unpredictable far future effects than they do consist in the predictable, near-term, intended effects of the action.

Hilary Greaves: Some people think that herein lies a problem that leads to something like decision paralysis, because now if I’m trying to decide what to do based on what’s going to have the best consequences, it looks like I’m completely clueless about what’s going to have the best consequences.

Hilary Greaves: My view is that, when the dust settles from that aspect of the debate, we see in the end there’s not really a problem. Because if your credences regarding these completely unpredictable future effects behave sensibly – according to me – then, when you do your expected-value theory, the mere possibility that you might make things better in these completely unpredictable ways is just more or less precisely canceled out by the equally plausible mere possibility that you might make things worse in equally unpredictable ways.

Hilary Greaves: Those are cases where I think there isn’t, in the end, any practical, real-world problem for real decision making. But I think things are different where we’re not talking about the mere possibility that by something like some chaotic mechanism we might turn out to make things better, and the mere possibility that by some equally unpredictable chaotic mechanism we might turn out to make things worse.

Hilary Greaves: If we’re instead talking about a decision setting where there are some highly-structured, systematic reasons for thinking there might be a general tendency of my action to make things better, but there might also for some other reasons be a general tendency to make things worse, then I don’t think you get this precise canceling that gives you a license to just ignore the unforeseeable effects when you do your expected-value calculations.

Hilary Greaves: What kind of decision situations am I talking about here? I’m not talking about the effects of helping somebody to cross the road on the identity of future children. I’m talking about things like the more or less guessable, downstream, knock-on effects of, say, funding anti-malarial bed nets. You might worry about, okay, I know that the immediate effects of funding bed nets are positive.

Hilary Greaves: I know that I’m going to improve lives, I’m going to save lives, but I also know that there are going to be further downstream effects of that, and also side-effects of my intervention – for example, influences on the size of future populations. It’s notoriously unclear how to think about the value of future population size: whether it would be a good thing to increase population in the short to medium term, or whether that would in the end be a bad thing.

Hilary Greaves: There are lots of uncertainties here. It’s not a case where you could just say, “Well, clearly the credences should be 50-50.” It’s rather a case where there are really complex arguments on both sides, and it’s unclear what credence you should assign to this or that claim at the end of the day. That, I think, ends up being a much more complicated situation: things do not cancel out in expectation, and it’s unclear what credences you should have. Yet whether you should take the action in question depends on this somewhat arbitrary decision it feels like we’re forced to make about precisely which credences to have.
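To put the contrast in miniature, here is a toy sketch in Python, with all numbers invented for illustration (nothing here is computed in the conversation). When your credences over an unpredictable effect are symmetric about zero, that effect drops out of the expected value; when the considerations are structured, no symmetry forces the cancellation:

```python
# Toy model of the cancellation argument (all numbers invented).

# Chaotic case: helping someone cross the road might make the far future
# better or worse by some huge amount, but with no reason to favor either
# sign, a sensible credence distribution is symmetric about zero.
chaotic_effects = [+1_000_000.0, -1_000_000.0]
symmetric_credences = [0.5, 0.5]
ev_far_future = sum(p * v for p, v in zip(symmetric_credences, chaotic_effects))
print(ev_far_future)  # 0.0 -- the huge possibilities cancel in expectation,
                      # so the predictable near-term effect decides the choice

# Structured case: systematic arguments (e.g., about population size) point
# in each direction, and nothing pins the credence to the symmetric 0.5.
for p_good in (0.4, 0.5, 0.6):  # all arguably defensible credences
    ev = p_good * 1_000_000.0 + (1 - p_good) * -1_000_000.0
    print(p_good, ev)  # swings from -200000.0 to +200000.0: no canceling
```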

Robert Wiblin: That helps to explain it quite a bit. The reason I wanted to raise it again is I think some people didn’t quite understand why there was a philosophically interesting point here after the conversation with Amanda. They were thinking, “Well, in the second case you can still just maximize the expectation and do the thing that seems best on balance.”

Hilary Greaves: You could do that if you’ve got precise credences, but we’re talking about decision situations where it’s not clear what credences you should have, and yet what you do depends on which credences you choose to have. Or perhaps we’re talking about a situation where it’s inappropriate to have precise credences at all. The epistemically appropriate thing might be to have somewhat imprecise credences, but with a range of imprecision that encompasses credences that would tell you to do A and also credences that would tell you to do B.
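One way to picture imprecise credences concretely – a toy model of our own, not Greaves’s formalism – is as a whole interval of admissible probabilities, where the problem is that different admissible credences recommend different actions:

```python
# Toy model of decision-relevant imprecise credences (illustrative numbers).
# Action A: a known near-term benefit of +10, plus a far-future effect that
# is +1000 with credence p and -1000 otherwise. Action B: do nothing (0).

def expected_value_of_A(p: float) -> float:
    return 10.0 + p * 1000.0 + (1.0 - p) * -1000.0

# Suppose the epistemically appropriate credence is imprecise: anything in
# [0.4, 0.6] is admissible. Check the recommendation at each admissible p.
admissible = [0.40, 0.45, 0.50, 0.55, 0.60]
verdicts = {p: ("do A" if expected_value_of_A(p) > 0 else "do B")
            for p in admissible}
print(verdicts)
# p=0.40 and p=0.45 recommend B; p=0.50 and above recommend A. The range of
# imprecision encompasses credences favoring A and credences favoring B, so
# expected-value theory alone leaves the choice unresolved.
```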

Robert Wiblin: Some people might respond to this cluelessness worry by saying “I could just give to organizations that do good in the short term, something like Against Malaria Foundation that saves lives now. I know that that’s good today. Then in the long term, does that work out positively or negatively? I don’t have to say anything in particular about that. I can just say maybe the good and the bad cancels out, but I don’t have any reason to expect it to be bad.” The fact that it’s good in the short run is very appealing and seems like a less risky option. What do you have to say to that?

Hilary Greaves: I think this kind of picture is very natural, and I think it informs a lot of people’s actual EA donation behavior – in the past it informed my own donation behavior. It’s precisely because I came to the view that this picture is really importantly misleading that I started working on cluelessness.

Hilary Greaves: What I now think is that the standard EA charity evaluations are measuring, quantifying, and researching the short-term, more immediate effects of the intervention. For example, if you look at the evaluations that GiveWell does for, say, the Against Malaria Foundation, those evaluations talk about how many lives are saved in the short term by distributing bed nets.

Hilary Greaves: So far so good, but the lines of thought we’ve been going through in talking about cluelessness have, according to me, convinced us of something: when you take the long-term perspective, and you take seriously the thought that what you’re ultimately interested in is all of the effects your actions will have – not just the ones that are nearer in time or easier to measure – the total effect of your intervention will be massively dominated by the unforeseen effects.

Hilary Greaves: The bit that you’ve actually measured in your impact evaluation, we know, is going to be massively dominated by the things you haven’t included in it. Maybe the eventual story is that the sign stays the same – it’s still a good intervention – but the vast majority of the reason why it’s good lies in the stuff you haven’t measured. Or, the other scenario: the intervention ends up being net bad, because the stuff you haven’t measured, on balance, points massively in the other direction.

Hilary Greaves: Now, we don’t know which of those is the case. The hypothesis you mentioned – which, if true, would justify basing donation behavior very closely on the impact evaluations – is the hypothesis that the stuff we haven’t measured precisely cancels itself out, at least in expectation. But the problem is that’s really quite implausible. There’s precisely no reason for thinking there’s going to be that very convenient canceling out.

Hilary Greaves: Really the situation is there’s one thing that we have measured. We know pretty much its magnitude, we know its sign. There’s a load of stuff we haven’t measured which is going to make a big difference to the overall equation and really could go either way. I think really your properly considered EA donation behavior should be almost entirely driven by what your best guess is about that stuff that we haven’t measured.

Hilary Greaves: If that’s so, then there doesn’t really seem to be any place in the picture for the impact evaluations that we have, unless you think something like, “Well, the vast majority of the expected value at least goes by causal chains that run via this life-saving behavior, so we can use that as a proxy.” You might try to rescue the importance of the impact evaluations we’ve got in that kind of way. I don’t think that’s completely unreasonable, but I also don’t think it’s a foregone conclusion that the vast majority of the impact goes causally via this route. And even if it does, there are still a lot of open questions about what the eventual sign is.

Robert Wiblin: I think a lot of people would have this intuition that, although I’ve got my best guess about the long-term effects and my best guess about the short-term effects, I know so much more about the short-term effects that that’s just what I want to give weight to. My best guess about the long-term effects I should just ignore, because there’s no signal there. You’re saying the importance of that question is so much larger that, even relative to your ignorance, it’s still a very important part of the decision-making process.

Hilary Greaves: Yeah. Right. I’m saying something like that. Normally when we’re being rational, the way we think that we should deal with uncertainty is by following expected value, so there’s all this stuff that we don’t know about. What do we do about that? Well, we figure out what our credences ought to be about the stuff that we don’t know about, and then we implicitly or explicitly do the expected-value calculation.

Hilary Greaves: In this case, the problem we have is that it’s really unclear what our credences ought to be about all the important “unforeseeable” stuff – the stuff we don’t measure in the impact evaluations. But what we should do – how we should prioritize our donation behavior, how we should weigh one intervention against others, or even which interventions we count as good versus bad in expectation – depends potentially quite sensitively on those thorny questions about what the rational credences are in this space.

Robert Wiblin: There’s a whole lot of people who are basing their charitable giving now on the measurable near-term effects of their actions. What kind of advice do you have to give to them, if any, on the basis of this kind of philosophy?

Hilary Greaves: That’s a good question. Maybe I should say upfront I’m also one of those people. I do in fact base my own decisions on GiveWell’s recommendations. Clearly I don’t think it’s completely crazy behavior. What do I think’s the best way of defending that behavior?

Hilary Greaves: I think for that kind of behavior to make sense once you’ve taken onboard worries about cluelessness, you have to be making a judgment call that, in the long run, the net effect of this intervention – most of the important effects of this intervention in expectation – go via a causal chain that at least proceeds via the thing that the GiveWell impact evaluation is focusing on. In this case it proceeds via lives saved in the short term, rather than being some kind of side effect that doesn’t even go through that route.

Hilary Greaves: You have to think that in order to think that this calculation quantifying how many lives get saved is any kind of important guide to what’s going on with the overall grand calculation that we’re implicitly trying to do. Secondly, you have to think that the sign of the value of this intervention will be preserved if you did the grand calculation rather than just the very near-term effects.

Hilary Greaves: You have to be deciding against hypotheses that say things like, “Saving lives is positive value in the short term, but in the longer term my view is on balance that saving lives is, say, bad because it increases population and the world’s overpopulated anyway. On balance, things that increase population are net negative.” If you have a view like that, then I don’t think you should take GiveWell’s calculations as a direct guide to your donation behavior.

Hilary Greaves: If anything, you’d want to do something like pick the intervention that saves the fewest lives, like-

Robert Wiblin: That should be more straightforward, I guess.

Hilary Greaves: I won’t go further down that road, but you can see where this is going. I think there are defensible views that have a central place in them for using the GiveWell recommendations and calculations but that we should have our eyes open to the fact that we need to make or are implicitly making those not completely obvious judgment calls when we’re doing that.

Hilary Greaves: The other thing that’s changed for me through worrying a lot about this cluelessness stuff is that I have much less confidence now than I used to in how well placed I am to make cross-cause comparisons on the basis of calculations like GiveWell’s. I used to think: suppose I’m comparing Against Malaria Foundation to some other charity that reduces animal suffering by a quantified amount.

Hilary Greaves: I used to think I could just make my judgment calls about how important I thought saving one human life was versus reducing animal suffering by one cow-year, and then do the tradeoff between these two charities appropriately. I no longer think I’m in any position to do that, or at least to do it by those means. If I’m thinking of the number of lives saved in an AMF calculation as just a proxy for the amount of value that’s being generated in the long run by funding AMF, then knowing how to trade off one human life saved against one animal-year of suffering doesn’t help me make the comparison between these two charities.

Hilary Greaves: I would also have to have some hypothesis in place about the correlation between one human life saved in the short term and how many units of value there are, in expected terms, in the grand AMF calculation – the big one that takes into account all the unforeseen effects and averages over the credences – before I’d be in any position to do the cross-cause prioritization. So it’s made me a lot more modest, maybe, about the value of these impact evaluations in the cross-cause arena.
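To see in miniature why the proxy move makes cross-cause comparisons fragile, here is a toy sketch with invented figures (the cost-effectiveness numbers and the “long-run multiplier” are hypothetical, not from GiveWell or anyone else):

```python
# Toy sketch of cross-cause comparison under cluelessness (numbers invented).
# Suppose you've settled your moral exchange rate between causes.
LIFE_IN_COW_YEARS = 500.0            # 1 human life ~ 500 cow-years averted

lives_saved_per_dollar = 1 / 3000.0  # hypothetical AMF-style figure
cow_years_per_dollar = 1 / 2.0       # hypothetical animal-charity figure

# If the measured short-term effects were the whole story, the comparison
# would be easy: ~0.17 vs 0.5 cow-year-equivalents per dollar.
print(lives_saved_per_dollar * LIFE_IN_COW_YEARS, cow_years_per_dollar)

# But if measured impact is only a proxy for total long-run value, each
# figure needs scaling by its own unknown multiplier (expected long-run
# value per unit of measured impact). Holding the animal charity's
# multiplier at 1 for simplicity, vary AMF's: nothing pins it down.
for amf_multiplier in (0.1, 1.0, 10.0):
    amf_total = lives_saved_per_dollar * LIFE_IN_COW_YEARS * amf_multiplier
    winner = "AMF" if amf_total > cow_years_per_dollar else "animal charity"
    print(amf_multiplier, winner)
# The ranking flips with the multiplier, so the exchange rate alone no
# longer settles the prioritization.
```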

Robert Wiblin: What’s the cutting edge on this problem? Are we any closer to a solution than when we conceived of it?

Hilary Greaves: Not as far as I know. I wrote my paper on this maybe a year or so ago, so it was quite recent for me. There hasn’t been very much time on academic timescales for massive subsequent progress since then. I think Amanda Askell’s work on this is very interesting. I know she’s been thinking a lot about the value of information and how that will plug into this debate. I guess you already talked about that stuff with her.

Robert Wiblin: Yeah. At the end of the day do you think we’re probably just going to have to muddle through and make our best guesses about what the effects of our actions will be?

Hilary Greaves: Basically, yes. I think we’re just in a very uncomfortable situation where what you should do in expected-value terms depends with annoying sensitivity, or with unfortunate sensitivity, on what feels like a pretty arbitrary, unguided choice about how you choose to settle your credences in these regions where rigorous argument doesn’t give you very much guidance.

Hilary Greaves: Maybe in the end the discussion just highlights the value and the unavoidableness of slightly more flying-by-the-seat-of-your-pants reasoning where you have to make a practical decision, and the scientists haven’t told you exactly what the answer is. Sorry, you have to rely on sensible judgment and just hope that it’s sensible rather than not sensible.

Robert Wiblin: On this topic of having to go with your gut a bit, there’s quite a lot of people over the years who’ve cited risk aversion about their impact as an argument against focusing on the very long-term future. I’ve heard that a bunch of you at GPI have been looking into that. When you try to formalize that, it seems like you can make the argument that risk aversion is a reason to be more favorable to work on the long term. Can you discuss the problem there and how you reached that somewhat counterintuitive conclusion?

Hilary Greaves: Yeah. I hope it’s not a counterintuitive conclusion once you’ve thought through the reasoning a bit more carefully. Whether the idea that risk aversion is a reason not to work on the long-run future actually follows is going to depend on what you’re risk-averse with respect to.

Hilary Greaves: If you think, “I’m risk-averse with respect to the difference that I make, so I really want to be certain that I, in fact, make a difference to how well the world goes,” then it’s going to be a really bad idea by your lights to work on extinction risk mitigation, because either humanity is going to go extinct prematurely or it isn’t. What’s the chance that your contribution to the mitigation effort turns out to tip the balance? Well, it’s minuscule.

Hilary Greaves: If you really want to do something in even the rough ballpark of maximize the probability that you make some difference, then don’t work on extinction risk mitigation. But that line of reasoning only makes sense if the thing you are risk-averse with respect to was the difference that you make to how well the world goes. What we normally mean when we talk about risk aversion is something different. It’s not risk aversion with respect to the difference I make, it’s risk aversion with respect to something like how much value there is in the universe.

Hilary Greaves: If you’re risk-averse in that sense, then you place more emphasis on avoiding very bad outcomes than somebody who is risk-neutral. It’s not at all counterintuitive, then, I would have thought, to see that you’re going to be more pro extinction risk mitigation.

Robert Wiblin: Because extinction would be such a big shift in the value, that preventing that is like reducing the variability of the potential outcomes.

Hilary Greaves: The argument doesn’t even require noting that there’s a big difference in value between premature extinction and no premature extinction. It just requires noting that premature extinction is the worst of the two outcomes. Risk-aversion is going to tip you in the direction of putting more effort into avoiding that worse outcome.
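As a toy illustration of the distinction (our own construction, with invented numbers): make the agent risk-averse over total outcome value via a concave utility function, and compare a tiny reduction in extinction probability against a safe near-term improvement:

```python
import math

# Toy setup (invented numbers): world value is 100 if humanity survives,
# 0 if it goes prematurely extinct. Baseline extinction probability: 10%.
V_EXTINCT, V_SURVIVE = 0.0, 100.0
P_EXTINCT = 0.10

def eu(p_extinct, bonus_if_survive, utility):
    """Expected utility given an extinction probability and extra value
    added to the surviving world (e.g., by near-term do-gooding)."""
    return (p_extinct * utility(V_EXTINCT)
            + (1 - p_extinct) * utility(V_SURVIVE + bonus_if_survive))

baseline = lambda u: eu(P_EXTINCT, 0.0, u)
mitigate = lambda u: eu(P_EXTINCT - 0.001, 0.0, u)   # shave off 0.1 points
near_term = lambda u: eu(P_EXTINCT, 1.0, u)          # safe +1 if we survive

risk_neutral = lambda v: v
risk_averse = lambda v: math.log(v + 1.0)            # concave over outcomes

for name, u in [("risk-neutral", risk_neutral), ("risk-averse", risk_averse)]:
    ratio = (mitigate(u) - baseline(u)) / (near_term(u) - baseline(u))
    print(f"{name}: mitigation gain / near-term gain = {ratio:.2f}")
# risk-neutral: ~0.11; risk-averse: ~0.52. Concavity over *outcomes* makes
# avoiding the worst outcome relatively more attractive -- the opposite of
# being risk-averse about the difference you personally make.
```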

Robert Wiblin: I almost want to push back and say even if you are focused on, yeah, you want to reduce the variance on the impact that you have, morally speaking, that it’s not clear that working on extinction risk isn’t the same or less risky than working on, say, economic development. It feels like people want to say, “Well, saving lives through bed nets is less risky, because I know that I’m going to bank the initial impact of saving someone’s life, which is quite measurable.”

Robert Wiblin: Then of course their actual impact in the long term is still extremely variable, because we have to think about what the long-term effect of that action is going to be. In a sense that’s no more predictable – maybe it’s even less predictable – than the impact of trying to reduce the risk of extinction.

Hilary Greaves: Yeah, that makes sense.

Robert Wiblin: The reason I raise this is that the people I’ve met who are worried about risk aversion do seem to be worried about reducing the variance – the uncertainty about their personal impact.

Hilary Greaves: Okay. Now I’m making things up as I go along. But what’s the picture in the minds of these people? Here’s a way I can imagine the line of thought going, though I don’t think it’s the one you’re referring to: “I’m risk-averse with respect to the difference I make. I want to reduce the variance in the possible differences I make across states of nature, and so I favor working on extinction risk mitigation, because I can be pretty sure I’m going to have no impact.”

Robert Wiblin: Right, right.

Hilary Greaves: I mean there’s a small chance I’ll have this massive impact, but overwhelmingly likely-

Robert Wiblin: I see, yeah.

Hilary Greaves: I’ll have no impact. In some sense that’s constancy across states of nature, so a risk-averse person should really like that.

Robert Wiblin: Yeah

Hilary Greaves: You see where I’m going with that.

Robert Wiblin: Yeah, no.

Hilary Greaves: I take it, that’s not supposed to be the line of thought.

Robert Wiblin: No, that’s not quite it. I think it’s that they want to have a positive impact, but they think there are declining returns to having more positive impact. They’d prefer a high level of confidence of having a small positive impact over a tiny probability of having a much larger impact. I think that’s the intuition.

Hilary Greaves: So what do these people think about funding bed nets again? They think that it’s good by risk-aversion lights? Or bad by risk-aversion lights?

Robert Wiblin: I think the argument is typically made that this is a reason in favor of doing things that have positive near-term effects, like saving lives, because you have a high level of confidence that, at least in the short run, they will have a positive effect.

Robert Wiblin: And then the long-term effects, well, those just all kind of cancel out. We’re going to ignore them and say that, because we have a high degree of confidence in this positive short-run impact, it looks good from a risk-aversion perspective.

Hilary Greaves: Okay. Since you know I work on cluelessness, you won’t be surprised to hear that that sounds like a dubious line of reasoning to me. We can’t just ignore the long-run effects…

Robert Wiblin: Yeah

Hilary Greaves: …in the malaria net case. You need to actually be committed to a view that the long run stuff is going to cancel out to run that line of argument. But I don’t think there’s any reason-

Robert Wiblin: To think that it will.

Hilary Greaves: To think the long run stuff is going to cancel out. I mean it’s very unclear what it is going to do, but I’m pretty sure that it’s not going to cancel out.

Robert Wiblin: Yeah, okay. I think that’s right. If you consider the long-term effects, both of these – working on extinction risk versus working on saving lives – have huge and uncertain long-term effects, and the difference in the riskiness of the two actions is pretty small. It’s only a drop in the bucket.

Hilary Greaves: No, that’s a good point. In simple models of this, we tend to pretend that the impact of working on, say, global poverty is known, but as you rightly point out, it isn’t actually.

Robert Wiblin: Yeah.

Hilary Greaves: So a more realistic model would have to take that into account.

Robert Wiblin: In talking about moral uncertainty earlier with population ethics, you mentioned an approach to dealing with it known as the parliamentary approach, where you imagine politicians elected to represent the different moral perspectives, with the number of seats in the parliament in some sense proportional to the credences you give to the different theories. And then they debate and trade among themselves. Could you explain what’s appealing about that approach, as opposed to maximizing expected value, where you have credences across the different values that the different moral theories place on different outcomes?

Hilary Greaves: The standard way people in the literature have tended to think about the problem of appropriate action under moral uncertainty is basically to help themselves, off the shelf, to the standard way of thinking about action under uncertainty in general: that is, maximize expected value. In the case of moral uncertainty, this means you have each of your candidate moral theories – the ones you have non-zero credence in – assign moral values to the various outcomes: how morally good or bad they are. Then you write down your credences in the various theories – maybe you have credence one half in one theory, credence nought point three in a second, and credence nought point two in a third. And then you just maximize the expectation value of moral value, according to your credences in these theories. That’s the standard approach.
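Concretely, the standard approach amounts to a weighted sum. Here is a minimal sketch with invented theories and values (the numbers already presuppose the inter-theoretic comparisons that Greaves questions next):

```python
# Maximize expected moral value: a minimal sketch (invented numbers).
credences = {"theory_1": 0.5, "theory_2": 0.3, "theory_3": 0.2}

# Moral value of each option by each theory's lights. Putting these on one
# numerical scale presupposes inter-theoretic comparability.
moral_value = {
    "theory_1": {"A": 10.0, "B": 4.0},
    "theory_2": {"A": -5.0, "B": 8.0},
    "theory_3": {"A": 2.0,  "B": 1.0},
}

def expected_moral_value(option):
    return sum(credences[t] * moral_value[t][option] for t in credences)

for option in ("A", "B"):
    print(option, expected_moral_value(option))
# A: 0.5*10 + 0.3*(-5) + 0.2*2 = 3.9;  B: 0.5*4 + 0.3*8 + 0.2*1 = 4.6
# So you'd choose B, the option with the higher expected moral value.
```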

Hilary Greaves: Then there are various things people – or at least some people – don’t like about that approach. One is that it doesn’t give you well-defined answers unless you have a well-defined notion of inter-theoretic comparison of value. There has to be a fact of the matter about how important moral theory 1 says the difference between A and B is, compared to how important theory 2 says that difference is. And some people have been skeptical that this kind of inter-theoretic comparison of value differences can make any sense, because, look, theory 1 isn’t going to tell you anything about how important it thinks this difference is compared to what theory 2 thinks, and neither is theory 2. So whatever the true moral theory is, it’s not going to give you inter-theoretic comparisons, and many people have worried that there’s no other place for them to come from.

Hilary Greaves: So that’s one reason why people don’t like maximizing expected moral value as an approach to moral uncertainty.

Hilary Greaves: A second reason is that some moral theories are structurally not amenable to treatment by this approach. In order to maximize expected value, each of the moral theories you have non-zero credence in basically has to have a numerical representation – it has to be mathematically well behaved. It can’t do things like say that there are cycles of betterness: A is better than B, B is better than C, and C is better than A. Most people have very low credence in theories that are structurally awkward in that sense, but there are cogent philosophical defenses of them out there in the moral philosophy literature, and some people are sympathetic to these theories despite having thought about it a lot. So it seems that, if you’re even slightly epistemically humble, you should have at least non-zero credence in these maybe-a-little-bit-wacky, structurally awkward theories.

Hilary Greaves: And then the problem is, you just can’t handle that within a maximize-expected-moral-value approach to moral uncertainty. As soon as you’ve got non-zero credence in some theory, you need it to be mathematically well defined, in the sense of numerically representable. Otherwise the maximization-of-expected-moral-value formula just can’t deal with it.
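The structural point can be made very simply: a betterness cycle admits no numerical representation. A quick brute-force check (our illustration, not from the talk):

```python
# Why a betterness cycle defeats expected moral value: a theory saying
# A > B, B > C, C > A would need a value function u with
# u(A) > u(B) > u(C) > u(A), i.e. u(A) > u(A) -- impossible.
from itertools import permutations

better_than = [("A", "B"), ("B", "C"), ("C", "A")]  # the cycle

found = False
for values in permutations((1, 2, 3)):  # every ordering of distinct scores
    u = dict(zip(("A", "B", "C"), values))
    if all(u[x] > u[y] for x, y in better_than):
        found = True
print(found)  # False: no assignment respects the cycle, so there's nothing
              # for the expected-moral-value formula to plug in
```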

Hilary Greaves: So there’s been this desire for an approach to moral uncertainty that has a broader tent of theories that it can handle.

Hilary Greaves: And then, thirdly, there are some decision situations where, under moral uncertainty, it intuitively feels like you want to hedge your bets and choose an option that’s maybe halfway between the thing recommended by one theory and the thing recommended by another. Again, there are structural reasons why you can’t get that intuitive verdict from a maximization-of-expected-moral-value approach.

Hilary Greaves: So these have all been held up at various times as problems that people have hoped a parliamentary-style model might be able to solve. But the parliamentary model, as far as I know, has not to date been made very precise. It’s often left at the level of this metaphor of the moral theories as participants in a discussion.

Hilary Greaves: So what I was doing in my talk on this was fleshing out one plausible-looking way you might try to make the parliamentary model precise, using tools from bargaining theory, and then investigating whether, at least under that way of making it precise, it really does behave better than the standard approach on the dimensions people had hoped it would.

Robert Wiblin: And does it?

Hilary Greaves: No. It’s difficult to know what lesson to take from that because, of course, I don’t have any proof that the particular way of making the vague idea precise that I worked with is the best one.

Robert Wiblin: I see.

Hilary Greaves: It seemed the most natural one. It was the easiest one to formulate by taking existing tools off the shelf, like reading a bargaining theory textbook and applying standard ideas.

Hilary Greaves: But it may be that there’s some other way of making the basic intuitive idea of the parliamentary model precise that would lead to a theory that performs better. So that’s where the research has got to at the moment. We’ve got one way of making it precise that looks like it leads to a not-very-promising theory, and I think that throws down the gauntlet to try to find some better way of making it precise, or otherwise to abandon the idea and accept that we have to live with the problems of the maximize-expected-value approach.

Robert Wiblin: Thinking about the parliamentary approach: it seems like what it’s going to do is produce something like a Pareto optimum across the different theories. It will produce something where, if one theory can be made better off and no other theory is made worse off, you definitely get those trades being made. And, in as much as the theories themselves give guidance as to how to trade off different outcomes, you might get some trades that are not Pareto improvements but are improvements by the lights of those different theories.

Robert Wiblin: Although, oddly, it seems like which of those trades you get depends on which decisions are being considered at any point in time, because you can only trade between the different choices and actions under consideration in a given situation. It becomes very circumstantial, what you prefer. Does that-

Hilary Greaves: Yeah, I’m not even sure that you get the Pareto dominance principle you want, actually, on the particular way of making it precise that I was dealing with. I was investigating what happens if you say, okay, we’re going to pick the Nash bargaining solution – that jargon will only make sense to people who know bargaining theory. But anyway, on that approach, you might have one or more theories that, say, are indifferent between the options. To put the point in more intuitive terms: that theory has no incentive to agree to the Pareto-dominating option, so it can just choose to be contrarian and say, “No, I’m not going to agree; I’m going to plump for the Pareto-inferior one.”

Robert Wiblin: I see

Hilary Greaves: So that might be an intuitive reason why you might not even get Pareto dominance. If every theory thinks it’s better, then of course you’ll get unanimous agreement.

Robert Wiblin: Yeah

Hilary Greaves: But if it’s just that some theories think it’s better, and no theory thinks it’s worse-

Robert Wiblin: I see

Hilary Greaves: Then it’s a little harder to get the result you want from the bargaining-theory approach than it is from a decision-theoretic approach.
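For readers without the bargaining-theory background, here is a rough sketch of the phenomenon being described – our own toy construction of a Nash-style bargaining rule over moral theories, with invented numbers, not necessarily the exact formalism from the talk:

```python
# Toy Nash bargaining over moral theories (invented numbers). The Nash
# solution maximizes the product of each theory's gain over a
# "disagreement point" -- here, each theory's worst available option.
options = ["A", "B"]
utilities = {
    "theory_1": {"A": 5.0, "B": 3.0},  # strictly prefers A
    "theory_2": {"A": 2.0, "B": 2.0},  # indifferent between A and B
}

disagreement = {t: min(u.values()) for t, u in utilities.items()}

def nash_product(option):
    product = 1.0
    for t, u in utilities.items():
        product *= u[option] - disagreement[t]  # this theory's gain
    return product

print({opt: nash_product(opt) for opt in options})
# A Pareto-dominates B (theory_1 strictly prefers A; theory_2 doesn't
# object), but the indifferent theory contributes a zero gain to every
# product, so both options score 0.0 and the bargaining rule has no grip:
# the Pareto-better option isn't forced.
```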

Robert Wiblin: Okay, yeah. Are you optimistic about other people looking into this and trying to improve it? Do you think this can be made a theoretically really appealing approach? Or is it always going to be a hacky, practical solution, given that we haven’t found something that works better in principle?

Hilary Greaves: I don’t think it’s theoretically any messier than maximize expected moral value. Maximize expected moral value is already quite messy when you go into the details. If you actually want to apply it to a practical decision situation, you have to settle the inter-theoretic comparisons, and the formulas you have to use to settle inter-theoretic comparisons are pretty messy in a practical context. The bargaining-theory approach I was looking into also ends up being messy, in very similar ways. As far as messiness, hackiness, and theoretical niceness go, the two pretty much tell the same story: they look relatively well defined and neat and pretty at the abstract level, and then they get messy when you try to apply them in practice.

Hilary Greaves: In both cases you end up having to do things like write down exactly what you mean by the set of all options – precisely how broad is that supposed to be? And in addition, you have to write down a privileged measure on that set of all options. Well, where did you get that measure from?

Robert Wiblin: Yes

Hilary Greaves: There’s no mathematically clean formula telling me which is the right measure, but it’s going to make a difference to the eventual recommendation.

Hilary Greaves: So those are the kinds of messy issues I was alluding to that arise when you try to apply it in practice, but you face them on both kinds of approach. So I think they’re basically on a par, as messiness goes.

Robert Wiblin: I know that it’s super early days – GPI has only existed for a year – but are there any other areas where your research has caused you to change your mind?

Hilary Greaves: It’s just super early days. Remember the timescales academics tend to work on.

Robert Wiblin: Yeah

Hilary Greaves: You spend a few months fleshing out the vague idea that might in the future constitute a research program. And then you spend a few months working out what’s going to go in the paper. That’s about where we’ve got to, because we’re super early days.

Hilary Greaves: Hopefully, ask us in a year or two’s time and I’ll be able to say, yeah, I’ve changed my mind on the following practically important issues because of research we’ve done here. But it takes longer than eight months.

Robert Wiblin: Okay. Let’s move on to talking about the Global Priorities Institute itself. And then how people can potentially end up working at GPI or other related organizations. How is effective altruism and global priorities research seen by other academic philosophers and economists these days?

Hilary Greaves: That’s a good question. I think the answer in philosophy is probably quite different from the answer in economics.

Hilary Greaves: In philosophy, the picture is already reasonably positive. A lot of very good moral philosophers are at least sympathetic to the basic ideas of effective altruism. And quite a lot of them, especially younger ones, are either already devoting significant amounts of their research time to questions motivated by effective altruist-type concerns, or they’re sympathetic in principle to doing so and only need relatively little encouragement.

Hilary Greaves: In economics, we’ve found it much harder to find people – certainly people who are already doing that, or even people who sound like they’re willing to do it any time soon. Perhaps that’s partly because economics seems to be particularly cut-throat in terms of pressure to publish, and because of the norms within the discipline about what kinds of things the journals are willing to publish. So I think in economics, people need a much stronger motivation coming from their effective altruist commitment, and a corresponding willingness to go against the career incentives, in order to work in this space than their counterparts in philosophy would.

Robert Wiblin: Intellectually are they sympathetic? Or do people think that it’s wrong-headed in some way?

Hilary Greaves: I think, in both disciplines, you have the whole spectrum of views. In moral philosophy, you have a lot of people who are extremely pro. And a lot of people who are extremely anti- in some sense. I say anti- in some sense because if you are just saying this would be a good thing to do, then they say that it’s so trivially true that it’s not even worth discussing.

Hilary Greaves: But if you start saying there is some sort of moral obligation to behave in this way, then that puts a lot of backs up.

Hilary Greaves: And similarly in economics, there are quite a few people perhaps mostly the younger crowd, who are very sympathetic. And there are also quite a lot of people who think this is all terribly naive. So, for whatever reason, in both disciplines, it seems to be quite a polarizing topic. If you start mentioning effective altruism you can get people going to each side of the room quite quickly.

Robert Wiblin: Are there really any common misconceptions or objections that you find yourself having to explain again and again?

Hilary Greaves: Yeah, what’s on the list of those? On the moral philosophy side, which is obviously the side I’m personally more familiar with, I think there are a lot of misconceptions about the connection between effective altruism and utilitarianism specifically, or between effective altruism and consequentialism more generally. The misconception is that effective altruism is only something you would be interested in or motivated to pursue if you were a utilitarian or a consequentialist.

Hilary Greaves: In reality, the situation is that if you do assume consequentialism in general, or utilitarianism in particular, then there’s a particularly simple story you can tell about the status of the effective altruist project. But the view that it only has anything to be said for it against the background of utilitarian philosophy is completely unfounded. All you really need is some component of your moral philosophy that acknowledges that making the world a better place is a worthwhile thing to do. That’s all you need to get the effective altruist project off the ground.

Hilary Greaves: Maybe relatedly, another misconception I think some people in moral philosophy have is that effective altruists are committed to very strong claims about the status of effective altruist activities as a moral obligation. Whereas I think many people who would self-identify as part of the effective altruist movement aren’t really thinking in terms of moral obligation at all. They’re just thinking: clearly this is a worthwhile thing to do; this is a thing I want to do; I choose to make this project a central part of my life. Now let’s get on with doing it, and not start talking about whether there are moral laws saying I have to or not.

Hilary Greaves: So those would be a couple of examples.

Robert Wiblin: What has GPI been doing to steer people towards more global-priorities-relevant topics, and how much success have you had so far?

Hilary Greaves: It’s very early days to try to assess how much success we’re having in steering other people towards them. I can say a little about what our strategy has been and what kinds of things we’ve been working on.

Hilary Greaves: One thing we’d like to do is steer the discussion in moral philosophy away from questions which, to our mind, are somewhat over-discussed – about, well, whether there’s really a moral obligation to try to maximize the good, or whether there’s for some other reason a moral obligation to do effective altruist types of things.

Hilary Greaves: And to focus more on what you might think of as the internal effective altruist questions, which are more of the character of: “Okay, I buy into the effective altruist worldview. I think this is a worthwhile thing to be trying to do. I’m interested in devoting a significant part of my resources – whether those resources are time or intellectual energy or money or whatever – to trying to make the world a better place. What non-trivial questions do I then face that tools from philosophy might be able to help with? And what are the answers to those questions?”

Hilary Greaves: So what we’ve been doing so far is engaging in research in that kind of space – assuming that effective altruism is an interesting thing to do, and thinking through how it actually goes – and not really talking about whether it’s the thing that everybody should feel morally obliged to do.

Robert Wiblin: Yeah

Hilary Greaves: So we’re trying to open up a new field of research by working on the questions that seem to us under-researched and important, and then hoping that in due course other people will join in.

Robert Wiblin: It seems like part of GPI’s theory of change is that there will be a lot of value generated if we can take ideas that we think we kind of know and understand, but that have never really been written up properly, and try to formalize them, write them up in proper papers, publish them, and see if they can withstand all the critiques that people throw at them. How confident are you that it’s worth going through that process? I know some people are somewhat skeptical about the value of publishing ideas we’ve already had, because the process takes so long and slows you down from having new ideas.

Hilary Greaves: Sure, yeah, I think that’s a good concern to have. Maybe I should say upfront that papers of that character are only one half of what GPI is trying to do – not necessarily one half in quantitative terms; I don’t know what proportion of our papers will end up having that character versus the other. But I’d like to flag the other character anyway: we also think there are a lot of questions where we’re genuinely, quite radically unsure what the answer is, and we want to try to find out the answer via this process of careful discussion.

Hilary Greaves: But anyway, going back to your question about the first type of paper: what’s the value of that type of paper? One thing is that, at the moment, a lot of university students, including extremely smart ones, come across effective altruist ideas one way or another. But it’s interesting that they mostly come across them via things like websites, or discussions in the pub, or student societies they happen to join. They generally don’t come across them via their academic studies.

Hilary Greaves: Why is that? Well, part of the reason is that when university lecturers design academic syllabuses, they do so on the basis of what there’s a decent academic literature on. And at the moment, there really isn’t an academic literature properly writing up and investigating these questions that, by the lights of effective altruism, are extremely important.

Hilary Greaves: Now, you might think that doesn’t really matter if the smartest students are coming across the ideas anyway, by some other means. But I think the reality is, firstly, that only a relatively small proportion of students are coming across them, compared to what could be achieved if these things were taught in undergraduate courses. And secondly, of those who do come across the ideas, a significant portion reject them. Since we think the ideas are good ones, we think that if there were a more rigorous treatment of them, more people would be won over to what, according to us, is the correct way of thinking about things.

Hilary Greaves: So it’s a strategy for getting effective altruist ideas taken more seriously and acted upon more widely – for example, by bringing it about that the next generation of world leaders comes across these ideas during their university studies, in a way that makes them more likely to take them seriously.

Robert Wiblin: What’s your sense of how influential academia is in general? I know that quite a lot of people think that, at least in the UK, academics at some of the top universities have a really large influence over government decision making. Like a surprising amount of influence. Does that fit in with your experience?

Hilary Greaves: Yeah, I think that’s particularly true in economics. More so in economics than in philosophy, which is one of the reasons why GPI is particularly interested in building up the economic side of the literature on EA-relevant ideas.

Hilary Greaves: In general, there are two routes for academics to have influence. One is the direct route: if the government, or some other entity, is considering doing something, one key part of the process is going to be consulting the experts. “The experts” can mean a lot of things, but one thing it often means is: let’s ask the academics at the leading universities who are working on things related to this, both what their own views are and what the literature says.

Hilary Greaves: And then there’s the indirect route I mentioned earlier – influencing the influencers. On a longer timescale, academics have influence because they have a captive audience in the brightest young minds of the next generation.

Robert Wiblin: What’s the longer term vision for global priorities research in academia? If things go really well, over the next 10, 20, or 30 years, how do you think they will look?

Hilary Greaves: I think there’ll be significantly more attention paid in the academic research space to questions that are extremely important by effective altruist lights – for instance, questions relevant to cause prioritization decisions – but on which, at the moment, there’s really no academic literature. We’d like to see many more journal articles focusing on these things, and a much higher proportion of the brainpower that exists within academia focused on these extremely important topics, rather than on the perhaps intellectually cleaner, but less practically important, topics that it focuses on at the moment. And then, correspondingly, we’d like to see topics of central importance for large-scale, real-world decision making much better represented on undergraduate and graduate university syllabuses than they are at the moment.

Robert Wiblin: A number of people who’ve worked at GPI said that it’s quite different from other academic institutes. Especially in terms of philosophy. It seems like you all work very closely together. Why are you doing things a little bit differently in that respect, and is it paying off?

Hilary Greaves: Yes, I think it is paying off. Why are we doing things a little differently? I think there’s a somewhat unhelpful culture in philosophy, and probably in academia more generally, where – to caricature it somewhat – what people are centrally trying to do is look really smart and impressive. That’s what the career incentives drive them towards.

Hilary Greaves: If you’re in that mindset, you’re particularly concerned, for example, that this great new idea ends up being your paper. Whereas in the mindset we’re trying to develop and conform to here, what we centrally want is that these good ideas and important questions get investigated, written up, and published in the appropriate academic literature by somebody. Each of us is much less concerned that the person who writes any particular paper happens to be us.

Hilary Greaves: That mindset lends itself quite naturally to a much more collaborative model of working. What we’ve been doing, roughly, is first having some quite extensive initial group brainstorming sessions where – over the last few months this has been a group of four or so of us – we together brainstorm a topic and think about where the literature currently is, which existing effective altruist lines of thought seem either compelling or interesting and are conspicuously absent from the academic literature, and where there might be potentially high-value papers that one might try to write.

Hilary Greaves: Then, once we’ve got a list of potential papers of that type, we identify which member of our team is, for whatever reason, best placed to write an article on each thing – where “best placed” is some function of: who has the most relevant existing expertise? Who has the most available time? Who has the most personal interest and excitement about writing on this thing? Because that’s an important part of being in a position to do successful research.

Hilary Greaves: And then we make some kind of team decision about who’s going to work on what over the next few months.

Hilary Greaves: And then, in the process of writing the paper, there will be a lot more team discussion, maybe involving a lot of input from someone who isn’t necessarily going to get their name on the paper – and nobody really cares; nobody on our team is really thinking about that.

Hilary Greaves: So I think that all of this is quite atypical and contrasts quite radically with at least what I’ve experienced working in academia outside of GPI.

Robert Wiblin: Part of the reason is you’re trying to be more interdisciplinary, or share ideas between different areas, right?

Hilary Greaves: I don’t think that’s the reason for the collaborative model.

Robert Wiblin: Okay

Hilary Greaves: I think that’s also true, but I’ve done interdisciplinary work in the past. I used to work in philosophy and physics in the past, as it happens. So that’s interdisciplinary, philosophy and physics. I talked to physicists a lot. It was still very much like the ‘lone scholar’, ‘everybody’s trying to look clever’-type model.

Hilary Greaves: It’s more about having a bunch of people that are bought into the EA mindset, and are thinking: the point is not for me to change the world; the point is for the world to get changed. And that’s been really helpful in generating a more productive style.

Robert Wiblin: On the interdisciplinary style, there’s this odd phenomenon where everyone says how great interdisciplinary stuff is, and yet it seems to be very hard to get people to work in that way. It makes me wonder whether it’s something that sounds really good but in practice isn’t that great – and that’s why it’s so hard to get people to produce interdisciplinary work. Do you have anything to say about that?

Hilary Greaves: I’m very positive myself about interdisciplinary work. I think it’s easy to do it badly. You do it badly if a person working in discipline A tries to write a paper that’s interdisciplinary between disciplines A and B, but they don’t really understand discipline B, and they don’t talk to anyone who does understand discipline B. Then you get a lot of rubbish being written, unsurprisingly. But if you’re willing to genuinely engage with the other discipline – by which I mean really get under the skin of how people in the other discipline think and why – then there’s some really intellectually interesting low-hanging fruit to be found there, just because so few people have done this before.

Hilary Greaves: And it’s understandable that few people have done this before. Firstly, it’s really quite hard. You have to do that thing of getting under the skin of the other discipline. That takes a lot of time, and it involves developing a whole load of new skills, which itself takes a lot of time – and which you didn’t come equipped with from your graduate studies or your undergraduate program, or whatever.

Hilary Greaves: And secondly, for whatever reason, it tends to be a bit less prestigious career-wise. Say I’m employed by a philosophy department and I start doing some interdisciplinary work between philosophy and economics. How is this going to be viewed by my philosophy peers? Well, often it’s viewed as: “This isn’t real philosophy,” or “We don’t understand what you’re saying, and therefore we’re not interested and we’re not going to invite you to give seminars, because we wouldn’t understand what you said anyway.” Or, “We’re not in a position to judge how good this stuff is, because we don’t understand economics, so we don’t understand your papers; and therefore we’re not going to give you points for having produced good stuff, because we just don’t know whether you’ve done that or not.”

Hilary Greaves: There are all these reasons why it’s often less prestigious. You have to be more intrinsically motivated to do this, rather than just chasing the career incentives – willing to put in the quite considerable amount of time it takes to do a good job of it.

Robert Wiblin: In terms of being able to recruit people who already have research going on: you decided to brand yourselves Global Priorities rather than Effective Altruism. Do you think it might be better to have more separation from effective altruism? Effective altruism is a broad community, it’s made up of potentially quite young people, and it’s also very applied, very engaged in the world – and I wonder whether that’s the most appealing thing to academics. I imagine that’s part of why it’s the Global Priorities Institute rather than the Effective Altruism Institute.

Robert Wiblin: But perhaps it would be useful to just try to rebrand all of this as global prioritization as a more palatable name and concept for people in academia.

Hilary Greaves: I was thinking we already had, and the only reason I’ve been using the term effective altruism so much in this conversation is because I’m talking to you.

Robert Wiblin: Right, okay. Makes sense.

Hilary Greaves: Well, I was thinking that your listeners would be thinking in those terms. But yes, when we’re writing our research agenda or engaging with academics, we don’t use the term effective altruism, for some of the reasons you mentioned.

Robert Wiblin: Yeah, yeah, yeah, makes sense.

Robert Wiblin: All right, let’s talk now about what specific roles you guys are hiring for at the moment. And what people might do to prepare themselves for other similar roles in the future?

Robert Wiblin: So what positions do you have open? And who are you looking for?

Hilary Greaves: If I’m literally answering the question of what roles we have open at the moment, or will have open very soon, those are more on the operations and administrative side. We’re currently in the process of replacing our operations director, who has unfortunately recently resigned and moved on to bigger and better things – 80k.

Robert Wiblin: Just to explain that, that’s Michelle Hutchinson, who recently started as a coach at 80,000 Hours. We’ve poached her, but hopefully we help to replace her as well.

Hilary Greaves: Appreciate it. So that’s the central role that we’re hiring for over the next few months.

Hilary Greaves: Other things that we’re interested in hiring for, but don’t literally have adverts out for at the moment, are research positions – potentially in philosophy, if we find the right person, but particularly in economics, because our current situation is that we have about four full-time researchers in philosophy and a much smaller and part-time team on the economics side. So what we’d really like to do over the next few years is achieve symmetry within the Global Priorities Institute between philosophy and economics. It’s only for historical reasons – the institute having been founded by a bunch of philosophers – that we’ve got this initially skewed situation. We want to be both genuinely interdisciplinary and genuinely to give at least as much weight to economics as we do to philosophy.

Hilary Greaves: So, both on the philosophy and economics side, but especially on the economics side: if there are people out there who either are already researching the kind of topics we’re interested in, or would be interested in moving in that direction, then we are extremely interested in engaging with those people – either as remote collaborators or with a view to possible hires in the future.

Robert Wiblin: Tell me more about the operations role you’re hiring for. What would that person be working on, and what sort of person would be suited to doing it well?

Hilary Greaves: Operations roles tend to be a bit of a catch-all. One thing that follows from that is that it’s quite hard to answer your question. Another thing that follows is that one thing people would need to be happy with, if this role is going to appeal to them, is dealing with complexity. You often have quite a few complex balls in the air at any given moment, so you need to be quite good at managing your own time, keeping track of quite a complex to-do list, prioritizing appropriately, and being happy doing a wide variety of things. There can’t be just one particular type of thing that you want to be doing day in, day out, week in, week out, because the job’s going to involve quite a wide variety of stuff.

Hilary Greaves: That’s one comment. Another would be: the job’s very well suited to people who understand the research – they’ve got a strong background in the academic stuff – but who have decided, for whatever reason, that they’re not going to carry on aiming for a career in research, though they maybe like the research environment. They like working in that kind of atmosphere, and they’re keen to support research activity. Often the people who are strongest and happiest in ops roles fit roughly that kind of profile.

Hilary Greaves: The work often involves things like communicating the content of the research to other parties, especially non-academic parties. If it’s an academic conference, the person giving the presentation will be the person who actually did the research. But if it’s something like a meeting with potential donors, or an interview with the media, or a talk at EA Global, then it would often be an ops person giving that talk. So, again, having that profile of not necessarily wanting to be the person doing the research, but enjoying the process of understanding and communicating the research – that’s a very positive trait for somebody in an ops role. There are also miscellaneous things you might find yourself doing: organizing the office, communicating with the administrative support staff who deal with budgets and that sort of thing, having the strategic overview of the institute’s finances, thinking about whether we’re going to need more funding from somewhere soon and, if so, where we might look for it.

Hilary Greaves: Keeping tabs on office culture, noticing if people are unhappy, thinking about what we might do about that, that kind of thing. So this is a kind of spectrum of stuff that you might find yourself dealing with in an ops role.

Robert Wiblin: What do you think are the most challenging aspects of the role?

Hilary Greaves: At the risk of repeating myself, I think one thing is the complexity of the enormous number of sometimes small tasks that you find yourself dealing with – having to constantly think about how to prioritize among them. As in any job, but maybe particularly so in ops roles, there are always more useful things you could potentially do than you could possibly do all of. You need to keep the big picture in mind of which ones are more time-urgent, which ones are more important to get done at all, and prioritize accordingly.

Robert Wiblin: Do you want to make the pitch for why operations seems to be so important for academic research institutes? And what particular challenges do you face operations-wise?

Hilary Greaves: Sure. Ops roles are absolutely crucial to the success of academic research institutes, because without them the people who are nominally hired to do the research end up having just about all of their time sucked up by other stuff. This is what happens to academics who aren’t in research institutes, and I can speak from a position of authority on this, having previously been in that position. The typical situation for academics is that research is the thing you really want to do, the thing you’re good at, and the thing you were mostly nominally hired to do, but it’s really hard to actually reserve any time for it. There’s so much other stuff that you’re also meant to be doing, and often that other stuff is stuff that you, the academic, are not actually particularly good at, so you’re quite inefficient at doing it. Therefore, it takes up far more of your time than it should do.

Hilary Greaves: Whereas if you’ve got a division of labor – between the people who want to do the academic research and specialize in that on the one hand, and the people who want to do ops work and specialize in that on the other – then you can have a lot more research happening. The research institute can be vastly more productive in terms of actually generating the stuff it was set up to produce.

Robert Wiblin: You’re also running this summer research visitor program, right?

Hilary Greaves: That’s correct, yes. Current graduate students can apply to GPI to come and visit during roughly the months of May and June – Oxford’s summer term. We’ve been having roughly 12 graduate students per summer come pretty much all at the same time, so we end up with a cast of thousands and quite a vibrant atmosphere, particularly for those two months. We lay on a few structured seminar series and so forth to help this cohort of graduate students develop their ideas about which GPI-relevant research topics they might be excited to work on in the future, and to get them started on those research projects under the supervision of the permanent or semi-permanent GPI academic staff.

Robert Wiblin: When can people apply to that? Can they apply for it now?

Hilary Greaves: So, for the summer 2019 program, we’ve had two rounds of applications. The early round has already passed; that filled all of the philosophy slots, but only half of the economics slots. So for economics graduate students there’s a second round of applications – if I remember correctly, in November – that they can apply to for summer 2019. For philosophy graduate students, it’s maybe worth noting that we plan to continue this visitor program annually, so it’s not just for 2019. If they’re interested in some future round, they can apply then, or send us an expression of interest in the meantime, and we’ll make sure we keep them on our radar.

Robert Wiblin: Okay, great. I expect this will come out in October, so we’ll stick up a link to the applications for economics people who are interested in visiting GPI next year. You also mentioned a bunch of new scholarships that you’re starting next year – what are they about, and who should apply for them?

Hilary Greaves: We’re offering both scholarships and prizes. The prizes are for students who are already enrolled in a DPhil program at Oxford; they provide top-up funding in association with the student engaging with GPI, perhaps spending summers doing some GPI-relevant research. The scholarships are for people who will be applying for DPhils at Oxford in the future. Those will provide top-up funding to extend the duration of the student’s funding package. For example, in philosophy, a situation DPhil students often find themselves in is that they’re only given two years of DPhil funding by, say, the Arts and Humanities Research Council, or whoever it is that funds their DPhil, and that’s really too short for the vast majority of people to produce a PhD.

Hilary Greaves: Rather than these people being in a situation of thinking, “Well, you know, I’d kind of like to go to Oxford in particular, maybe I’d like to be able to engage with GPI, but Oxford’s only offering me two years of funding and I’ve got an offer of five years of funding from such-and-such an American university,” the Global Priorities Institute is seeking to top up the Oxford funding package in a way that extends its duration. You can come here with the same financial security you’d have if you were going to a typical American program.

Robert Wiblin: Do you worry that with those scholarships you might end up funding a whole lot of people who would’ve done this kind of thing anyway? Have you tried to figure out where you’ll get the biggest bang for your buck when using money to draw people into these research questions?

Hilary Greaves: Of course – we’re roughly thinking along EA lines, so we always think about where we can get the most bang for our buck. We have experienced this as a genuine practical concern in the context of several extremely promising DPhil students who have been interested in engaging with GPI and who, since they’re sensible people, have been genuinely concerned by this asymmetry of funding. Maybe they’d like to be at Oxford, because they’d like to work with GPI and the other organizations in the EA space that are largely localized in Oxford, but they feel that just wouldn’t be sensible in financial or strategic terms when they’re being offered so much more money, and so much more security, from – let’s say for the sake of argument – Princeton. We’ve been developing this scholarship program precisely in response to these concerns, which we’ve come across in practice in the context of more than one particular individual.

Hilary Greaves: We think this is extremely high value, because if we can get these very smart students more engaged in global-priorities-type research at this early stage of their career, there’s a much higher chance that they’ll either develop a longer-term research career with global priorities questions at its centre, or end up doing something else in the effective altruist space, than if they go overseas to some institution that has a much weaker EA presence.

Hilary Greaves: Nothing in this space is precisely measurable – we don’t have an impact evaluation or an RCT or anything like that. But anecdotally and commonsensically, if they go elsewhere it’s much more likely that they’ll end up getting sucked into, or becoming interested in, whatever is fashionable at the institution they do go to. So we really want to encourage the best people who are currently interested in engaging with GPI to come here and do it.

Robert Wiblin: For the thousands of listeners, no doubt, who are planning to start PhDs at Oxford, we’ll stick up links to those scholarships, and perhaps I’ll add a pitch at the end about all of the things that are on offer by the time this episode goes out.

Hilary Greaves: Please do also stick up the links to the visitor program. That’s one of our central mechanisms, and we realize that only a very small proportion of the extremely smart graduate students out there are at Oxford – we’re just as interested in engaging with those who aren’t. So please include the links to the visitor program and the other mechanisms by which people in those categories can come and engage with us.

Robert Wiblin: For people who are yet to do a PhD but are thinking of doing one in future, there’s the option of working in potentially quite a lot of different fields, or specializing in different fields. Which do you think are the most promising? And are there any particular subfields within those that have particularly interesting research questions relevant to GPI?

Hilary Greaves: I think philosophy and economics are the most obvious candidates – economics, I think, especially so. If somebody’s got the right skill set and the right kind of intellectual bent both to want to go into economics and to succeed at it, and they’re interested in choosing their research topics in a way that’s driven by EA-type concerns, that’s an extremely high value thing to do, and there are nowhere near enough people doing it. It would be unwise, just because I’m a philosopher, for me to answer the question of which subfield within economics is going to be the highest value, so I’ll try to avoid talking rubbish by not saying anything about that. I can, however, answer the same question within philosophy. The central subfields there are moral philosophy and decision theory, and relatedly formal epistemology – those seem to be the ones that have been key to the discussions we’ve been having so far.

Robert Wiblin: If someone doesn’t have a PhD, is there any value in them applying already, or should they go and get a PhD in economics or philosophy and then come back to it in a few years?

Hilary Greaves: For full-time hires, we are looking at people who already have PhDs, but we do also have engagement mechanisms for people who are current or future graduate students, particularly in either philosophy or economics. One thing we have advertised for existing graduate students, whether they’re at Oxford or elsewhere, is a sort of support package, where the Global Priorities Institute provides a top-up to the student’s existing funding package in return for that student engaging meaningfully with GPI while they pursue their doctoral research.

Hilary Greaves: Another is for people who are considering applying to graduate school in philosophy or economics in the near future. We have some scholarship programs where, if the institution they’re applying to would provide, say, three years of funding, they can apply to the Global Priorities Institute for a top-up package that would extend that to four or five years of funding – again, in return for engaging with GPI by, for example, visiting us in the summers during their doctoral studies.

Robert Wiblin: Are there any other appropriate ways in? Would you possibly be interested in hiring physicists or engineers or political scientists? Are there any other areas that people might have a chance in?

Hilary Greaves: Yeah, I’m sure there are. I don’t have a very good strategic picture at the moment of how you might attempt a rank ordering of disciplines in terms of which are good to go into if you want to have a positive impact via something like a GPI-like strategy. I know a lot of people have been thinking along similar strategic lines in psychology, so that’s another obvious one, where some kind of proof of concept already exists. In the longer term, GPI does seek to broaden out to include other disciplines, and to work out which bits of other fields are particularly relevant here, in the same way we’re currently trying to work out which bits of, say, economics are relevant.

Robert Wiblin: This is a question that I’ve asked a lot of people on the show at various points, but it seems to divide people, and just asking one person isn’t really enough. When you’re deciding which PhD supervisor to take, or which PhD topic, or what to work on as a postdoc – before you’ve got tenure or job security within academia – how do you think people should trade off working on something they think is really valuable and important, versus something that’s going to advance their career and make them more likely to get a permanent position?

Hilary Greaves: I think it depends on their longer-term game plan. If you’re dead set on the academic career path, then during the PhD it’s probably better to largely prioritize career prestige – while maybe keeping your fingers in some more directly impact-motivated research pies at the same time. But I would advise those people against, say, doing something very applied that’s going to be seen by the academic discipline as not intellectually very demanding, that kind of thing. If their long-term strategy is to have impact via first getting a respectable position in academia, then I think that would be my advice.

Hilary Greaves: There’s a completely different category of people who also have a lot of value to contribute: people whose next step is to do a PhD, but whose long-term plan is not to stay in academia. For those people, I think the picture is very different.

Robert Wiblin: How does it look for them?

Hilary Greaves: I think there’s a much stronger case for those people to just disregard the academic career incentives, because if you’re not aiming for an academic career, those incentives are irrelevant anyway. Do stuff that interfaces much more with, say, industry, or politics, or GiveWell’s concerns, or whatever.

Robert Wiblin: When you’re hiring, how important is it that people are intrinsically motivated to answer these questions because they think they’re morally important? What I’m wondering is: could you pay people a whole lot of money to work on your research agenda even if they don’t think it’s especially important?

Hilary Greaves: Definitely. In fact – I won’t go around naming names – but I think we have at least one researcher who approximately fits the profile you just mentioned. What’s critical to us is that people are genuinely willing to make their decisions about what to work on based on importance – importance by the lights of this very impartial, perhaps long-termist world view – rather than slavishly following career incentives, or having a bee in their bonnet about some other thing and just wanting any old academic position so they can carry on doing that other thing. We don’t want to spend our money employing people who are not actually going to guide their research by GPI’s strategic lights.

Hilary Greaves: But as long as, for whatever reason – perhaps merely that we’re paying them – they’re willing to genuinely guide their research by GPI’s lights, and we’re confident they’re going to do that, then we don’t really care what the motivations are. In practice, unsurprisingly, it does tend to be intrinsically motivated people who fit this profile. But as I said, there are already exceptions that we’ve come across, and that we’re quite happy with.

Robert Wiblin: So a lot of people are drawn to careers in global priorities research – I think people find it quite an appealing career prospect. That suggests quite a few of them aren’t going to make it, because there just aren’t that many positions. Maybe the number will increase in future, but I don’t think it will increase in proportion to the amount of interest. How might people figure out whether they’re likely to get positions, and whether they’re really cut out for a career in the field? Are there any kinds of people who should say, “Although this is interesting to me, I should do something else, because I’m just not likely to get a job” – either in academia or outside of it? Are there any signals that you should decide, “I’m not going to become a researcher; I should do something else”?

Hilary Greaves: Yes, on various timescales, I think there are signals along those lines. I know of several people who have decided against a research career, maybe partway through their PhD, because they find for one reason or another that they don’t enjoy that kind of work. Projecting that forwards, they think they’d find it extremely difficult to stick at it to the extent required to succeed – and that even if they could make themselves do it, they just wouldn’t be happy doing that kind of work.

Hilary Greaves: For example, one trait that often leads people to fit this profile is finding that what motivates and satisfies them is a relatively quick turnover of projects and objectives achieved. If you want to be able to say “I’ve achieved my objective” on a timescale of weeks rather than multiple months, then research is probably a bad field to be in, because the timescale of significant progress in research tends to be more like a year than a week. Think of how long it takes to go from the genesis of an idea for a paper to getting anywhere near submitting that paper for publication – even in philosophy, where you don’t have to do any experiments, so the process is maybe relatively quick compared to some other disciplines.

Hilary Greaves: At least in my experience, it’s typically on the order of a year, and some people can find that very demoralizing. Another thing graduate students are very often concerned about – quite aside from issues of motivation and how much they enjoy this kind of work – is whether they’ll be capable of producing research of high enough quality to succeed in the field. I think that’s a much harder one to judge at an early stage. Sometimes there’s really no substitute for trying it and seeing what happens. A pretty bad idea is to consult your own feelings about how good you are at your subject. I say that’s a bad idea because such feelings often track things like temperament and confidence more than actual ability. You could be somebody who’s temperamentally inclined to judge yourself quite negatively, but actually, by the lights of everybody else, you’re really good in your field – or indeed vice versa. There’s a lot of noise in that kind of signal.

Hilary Greaves: You can do things like solicit feedback early on from a significant number of mentors – say, the people teaching your graduate seminars – on how promising you seem to them as a potential future scholar. Or, better still, stick it out for the duration of a graduate program and try to get some things published in good journals during that time, to see whether you manage it. By that point you’ve got a pretty reliable signal of your chances of success in the field. At least in philosophy, the academically best students have typically succeeded in either publishing, or being within reaching distance of publishing, something in one of the best handful of journals by the end of their PhD. So on that kind of timescale you can get some useful feedback about whether this looks likely to work out in the longer term.

Robert Wiblin: Let’s move on to talking about money. Does GPI need further donations? What might you do if someone donated a couple of million pounds to you?

Hilary Greaves: I think the key thing is that over the next few years we would be in a much stronger position to build up the economics side. We’ve got funds in our budget to cover the hires we’ve already made, but we don’t have spare funds at the moment to make additional hires. And we currently have this very skewed situation: a respectable-sized seed research team in philosophy, but nothing comparable in economics. The challenges in hiring for economics are twofold… well, maybe threefold. Part 1: finding the right people to hire. Part 2: convincing them that they want to come and work with us. But then we also need part 3: having the money in the bank to pay them once we’ve done 1 and 2.

Hilary Greaves: As we hopefully get more and more successful with 1 and 2, then we will crucially need 3, otherwise it won’t happen.

Robert Wiblin: As I recall, Oxford requires you to have enough money to pay someone through the end of their contract in order to be able to hire them in the first place. Is that right?

Hilary Greaves: I think that is correct.

Robert Wiblin: Which means that you have to stockpile potentially quite a lot of reserves to get moving.

Hilary Greaves: Yeah. You have to have the money in place before the signature goes on the dotted line.

Robert Wiblin: From an EA-focused donor’s perspective, what do you think is the strongest argument for giving to GPI and perhaps the biggest reservation that they might have?

Hilary Greaves: I think the strongest argument for giving to GPI is the one around the long-term vision for GPI that we discussed. If you’re sold on our view – that it’s going to be extremely valuable in the long run if EA ideas become better represented within academia, and a higher proportion of the enormous amount of brainpower that already exists within academia gets harnessed onto these extremely practically important and currently somewhat under-researched questions – then there’s a clear case for funding it.

Hilary Greaves: The other mindset you might have, which would incline you against funding GPI, might be something like: “We already pretty much know how these things go. There are already lots of very smart people in the effective altruist community; we’ve worked out what the big picture is. Now the remaining interesting questions are just the messy empirical ones, like which potential or actual new technologies pose the biggest extinction threats, and what can we in practice do to mitigate them?” Those kinds of object-level questions are not ones you’re going to get additional traction on via the relatively intellectually tidy academic research that GPI specifically is engaged in. So if you think the big strategic picture is settled – we should focus on the long term, we should think about extinction threats, and we just want to work out which object-level risks are biggest and how to mitigate them – then GPI is not the right institution for you to fund. I could tell you which one is, but it’s not us.

Robert Wiblin: Which one would it be?

Hilary Greaves: The Future of Humanity Institute. I would say that, because they’re just across the hall from us, and they’re the institute that I have the greatest familiarity with. But they’re much more about bypassing the traditional academic route. They’re not really trying to place research articles in academically prestigious journals, and they’re not trying to write out the background theory for why you should care about this stuff. They’re just getting on with figuring out, via whatever perhaps messy means are necessary, what in practice we should do, and then reaching out directly to policy people, governments, corporations and so forth, rather than going via the academic literature. I think that’s also definitely worth doing – it’s just that we’ve got this division of labor, so if you think one of them and not the other is valuable, then fund the one you think is valuable. Personally, I think they’re both extremely valuable.

Robert Wiblin: Yeah, that makes sense. If someone was potentially interested in donating to Global Priorities Institute, who can they drop an email to?

Hilary Greaves: There is an email address on our website for that. I think it’s [email protected] (it’s also possible to donate here).

Robert Wiblin: Okay, great. Let’s finish with one last question on careers. I imagine there are quite a lot of people interested in moving towards global priorities questions with a PhD, or whatever study they might do, but who for whatever reason aren’t able to come to Oxford, which would be the natural choice if they were really keen to work at GPI. Is there anywhere a list of academics or relevant research institutes that might not be the perfect place to study, but that would take people in a good direction, towards related questions? And is that perhaps something 80,000 Hours should put together?

Hilary Greaves: I don’t know of such a list existing already. We are very interested in supporting GPI-type research done at other institutions. If somebody’s in that position I definitely encourage them to get in touch and we could see if we’re in any position to help.

Robert Wiblin: Are there enough people doing related research at other universities that it would be worth putting together a list of potential supervisors that people could work with? Or is it just too niche?

Hilary Greaves: That’s a good question, and one of the things we’re trying to figure out: to what extent are other people already doing stuff that’s relevant to – for the sake of concreteness, say – GPI’s research agenda, versus to what extent is this going to be a project of attracting more people to an academic subfield that doesn’t yet exist? I think the model we’ve been working on is the second one. There are things already being done that are somewhat relevant, but there’s not really anything like an existing body of people directly focusing on research topics guided by the kind of vision we’re working with.

Robert Wiblin: So you’ve got the scholarships, the summer visitor program, and this Head of Research Operations role that people can apply to. When do you think you’ll next advertise an academic position?

Hilary Greaves: That is a good question. It depends on a number of unknowns. One is fundraising; another is that we try to advertise positions when we feel reasonably confident the advert would attract people who’d be a good fit. Maybe the thing I should say – not directly answering your question, but relevant – is that if somebody is asking that question, what they should do is contact us and register their interest. Because if we knew there was somebody out there who would apply to a position if we advertised it, and we’d be interested in hiring that person, then we’re much more likely to go to our funders and say, “Look, we think there’s a really good chance we could hire someone who’s great, and a really good fit, if only we had the funds. Can you help us out here?” Then it’s much more likely that positions will start opening.

Robert Wiblin: If someone was considering working at GPI, what would you guess might be the second or third best options for them? What would be nearby alternatives where they could have a really large impact?

Hilary Greaves: One very good nearby option is to get an academic job anywhere: once you’ve got tenure, you’re free to work on whatever you like, including GPI themes. You do have to go through the process of getting tenure, and maybe jump through the career-incentive hoops for a few years to get there, but if you’re playing the long game, that’s definitely a thing you can do. The other obvious option is doing very similar kinds of research at EA-type orgs that are not part of academia – you can go and work for OpenPhil, for example.

Robert Wiblin: What do you find most frustrating about academia and perhaps also what do you find most appealing about it?

Hilary Greaves: I’ll answer that with respect to the previous version of my job, when I had a relatively standard academic position. Definitely the thing that was most frustrating was having a lot of my time taken up by meetings about things I wasn’t interested in, and by bureaucratic structures that didn’t seem very valuable. You can feel like there’s this burning thing you want to do, but then 90% of your time is being sucked up by things that seem like rubbish. I’m now in a very privileged position where I don’t have that, but it definitely is part of the standard academic profile – maybe more so in the UK than in the US. It depends somewhat on where you’re planning on going.

Hilary Greaves: The positives about working in academia fall into two main categories. One is that the job is just really, really intellectually interesting. I’ve done a lot of jobs over the last 24 years, since I was 16 and started scanning things in the supermarket, and a lot of jobs are, to a greater or lesser extent, pretty mind-numbing. You just don’t get that in academia. Academics often moan about having to spend more of their time doing admin than they’d like, but this is really griping about the minor things that spoil the otherwise very perfect balloon. You get to think about extremely interesting stuff: you actually get paid to spend time reading things written by the world’s best minds on your subject, talking to these people, and thinking about it yourself. If I had a job scanning things in the supermarket, this would be the kind of thing I’d want to do in my evenings, but I’d be too tired to do it. I’m in the extremely privileged position of actually getting paid to do it, and that’s just amazing.

Hilary Greaves: The other thing I find very positive about working in academia is the flexibility, both in terms of work-life balance and in terms of being your own boss. I really feel I get to be very self-directed, to choose my own projects based on what I’m actually motivated to do, whereas in the vast majority of other jobs there’d be a lot more frustration, because somebody or something else is telling you what to do, and it’s not the thing you want to do, or you don’t approve of the way you’re being told to do it. You escape a lot of those very normal job frustrations to a very high degree by being in academia. Also, if people have a family, or are thinking of having one, I think it’s much easier to juggle that with academia than with the vast majority of other jobs, because you really do get to choose your own hours – both how many hours you work and which hours you work.

Hilary Greaves: I’m now a parent of four children, and I’ve found that extremely valuable for being able to do all the things in my life that I want to do.

Robert Wiblin: Fantastic. My guest today has been Hilary Greaves. Thanks for coming on the show, Hilary.

Hilary Greaves: Thanks for having me.

Robert Wiblin: If you were into this episode, you might want to listen to Hilary’s colleagues in episode 17 – Prof Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster, and episode 16 – Dr Hutchinson on global priorities research & shaping the ideas of intellectuals.

Also just a reminder that GPI is hiring for a Head of Research Operations and looking for academic visitors, postdoctoral students and summer research visitors. If your work is close to theirs, or you want to move it in that direction, don’t be shy about going to their opportunities page or getting in touch by emailing them directly.

The 80,000 Hours Podcast is produced by Keiran Harris.

Thanks for joining, talk to you in a week or two.
