Enjoyed the episode? Want to listen later? Subscribe by searching 80,000 Hours wherever you get your podcasts.

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He also thought that women have no place in civil society, that illegitimate children should receive fewer legal protections, and that there was a ranking in the moral worth of different races.

Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?

If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics:

  • How would we go about a ‘long reflection’ to fix our moral errors?
  • Will’s forthcoming book on how to reason and act when you don’t know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’?
  • If we basically solve existential risks, what does humanity do next?
  • What are some of Will’s most unusual philosophical positions?
  • What are the best arguments for and against utilitarianism?
  • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?
  • What are some of the biases we should be aware of within academia?
  • What are some of the downsides of becoming a professor?
  • What are the merits of becoming a philosopher?
  • How does the media image of EA differ from the actual goals of the community?
  • What kinds of things would you like to see the EA community do differently?
  • How much should we explore potentially controversial ideas?
  • How focused should we be on diversity?
  • What are the best arguments against effective altruism?

Keiran Harris helped produce today’s episode.

Key points

We make decisions under empirical uncertainty all the time, and there have been decades of research on how you ought to make those decisions. The standard view is to use expected utility reasoning or expected value reasoning: you look at the probability of different outcomes and the value that would obtain given those outcomes, all dependent on which action you choose, then you take the sum product and choose the action with the highest expected value. That sounds abstract and mathematical, but the core idea is very simple. Suppose I give you a beer, and you think it’s 99% likely that the beer is going to be delicious and give you a little bit of happiness, but there’s a 1 in 100 chance that it will kill you because I’ve poisoned it. Then it would seem irrational for you to drink the beer, because even though there’s a 99% chance of a slightly good outcome, there’s a 1 in 100 chance of an extremely bad outcome. In fact, that outcome is 100 times worse than the pleasure of the beer is good. At least.

In which case the action with greater expected value is to not drink the beer. We think about this under empirical uncertainty all the time. We look at both the probability of different outcomes and how good or bad those outcomes would be. But then when you look at people’s moral reasoning, it seems like very often people reason in a very different way.
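
A minimal worked sketch of that calculation, using only the numbers from the excerpt above (treating “a little bit of happiness” as 1 unit and the poisoned outcome as 100 times worse; the exact units are illustrative assumptions, not anything more precise from the episode):

```python
# Expected value of drinking vs. declining the beer, in illustrative units:
# pleasure of a delicious beer = 1, being poisoned = at least 100x worse.
p_delicious, p_poisoned = 0.99, 0.01
value_delicious, value_poisoned = 1.0, -100.0

ev_drink = p_delicious * value_delicious + p_poisoned * value_poisoned
ev_decline = 0.0  # nothing good or bad happens if you decline

print(ev_drink, ev_decline)  # about -0.01 vs 0.0: declining has the higher expected value
```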

When you look at scientific theories, you decide whether they’re good or not [largely] by the predictions they make. We’ve got a much smaller sample size, but you can do it to some extent with moral theories as well. For example, we can look at the predictions, the bold claims that went against common sense at the time, that Bentham and Mill made, and compare them to the predictions, the bold moral claims, that Kant made.

When you look at Bentham and Mill, they were extremely progressive. They campaigned and argued for women’s right to vote and the importance of women getting a good education. They had very liberal attitudes towards sexuality. In fact, some of Bentham’s writings on the topic were so controversial that they weren’t published until some 200 years later.

Different people have different sets of values. They might have very different views about what an optimal future looks like. What we really want, ideally, is a convergent goal between different sorts of values, so that we can all say, “Look, this is the thing that we’re all getting behind that we’re trying to ensure that humanity…” Kind of like this is the purpose of civilization. The issue, if you think about the purpose of civilization, is that there’s just so much disagreement. But maybe there’s something we can aim for that all sorts of different value systems will agree is good. Then that means we can really get coordination in aiming for that.

I think there is an answer. I call it the long reflection: you get to a state where existential risks or extinction risks have been reduced to basically zero. It’s also a position of far greater technological power than we have now, such that we have basically vast intelligence compared to what we have now, amazing empirical understanding of the world, and then tens of thousands of years to not really do anything with respect to moving to the stars or really trying to build civilization in one particular way, but instead just to engage in this research project of what actually is of value. What actually is the meaning of life? And have, maybe it’s 10 billion people, debating and working on these issues for 10,000 years, because the importance is just so great. Humanity, or post-humanity, may be around for billions of years. In which case spending a mere 10,000 years is actually absolutely nothing.

Transcript

Rob Wiblin: Hey listeners, this is the 80,000 Hours podcast, the show about the world’s most pressing problems and how you can use your career to solve them. I’m Rob Wiblin, Director of Research at 80,000 Hours.

We’re back from Christmas and New Year’s break with Prof Will MacAskill. Inasmuch as effective altruism has a founder, Will’s the guy. You may well have seen interviews with him already, but we found almost entirely new things to talk about here.

If you’d like to learn more about pursuing careers in effective altruist organisations, academia, philosophy, or global priorities research, we’ve got links to our guides in the show notes.

And if you enjoy the episode, why not recommend the show to a friend – that’s how most people discover new podcasts and I assume they’ll be forever grateful.

Without further ado, I bring you Will MacAskill.

Robert Wiblin: Today, I’m speaking with Professor Will MacAskill. Will will be well known to many people as a co-founder of the effective altruism community. He co-founded Giving What We Can, 80,000 Hours, and The Center for Effective Altruism, and remains a trustee of those organizations today. He did his undergrad in philosophy at Cambridge, and his Ph.D. in moral philosophy at Oxford. He then became the youngest associate professor of philosophy in the world at 28, again at Oxford University. He’s also the author of “Doing Good Better: How Effective Altruism Can Help You Make A Difference.” Thanks for coming on the podcast, Will.

Will MacAskill: Thanks so much for having me. I’m a really big fan of the 80,000 Hours podcast, and that’s why I’m really excited to have this conversation.

Robert Wiblin: We’re going to dive into your philosophical views, how you’d like to see effective altruism change, life as an academic, and what you’re researching now. First, how did effective altruism get started in the first place?

Will MacAskill: Effective altruism as a community is really the confluence of 3 different movements. One was GiveWell, co-founded by Elie Hassenfeld and Holden Karnofsky. The second was LessWrong, primarily based in the Bay Area. The third was the co-founding of Giving What We Can by myself and Toby Ord, where Giving What We Can was encouraging people to give at least 10% of their income to whatever charities were most effective. Back then we also had a set of recommended charities, which were Toby’s and my best guesses about which organizations could have the biggest possible impact with a given amount of money. My path into it was really by being inspired by Peter Singer and finding compelling his arguments that we in rich countries have a duty to give most of our income, if we can, to those organizations that will do the most good.

Since then, effective altruism has really taken off. I was very surprised: I was thinking of Giving What We Can as a side project by 2 not terribly organizationally competent philosophers, yet these ideas really seemed to resonate with people. That meant that local groups were set up all around the world. There are now about 100 local groups dedicated to the ideas of effective altruism. We started to apply the idea of how do you do the most good to other areas, such as 80,000 Hours, which was launched in early 2011. We started to think about different cause areas as well. Not just global poverty, but how can you do the most good in general? Maybe that’s reducing existential risks. Maybe that’s improving the lives of farm animals. Many different areas. It’s been wonderful to see the confluence between these different groups, where now there’s this vibrant community of thousands of people all around the world who have answered, and are answering, the question of how they can do as much good as possible with at least a significant part of their resources.

Robert Wiblin: There’s a lot more we could say about the story of effective altruism over the last 5 or 6 years. I think you’ve given some presentations about that at EA Global recently. We’ll stick up a link to those rather than just going over that again. Is that right?

Will MacAskill: I think many people will be bored of hearing about me and Toby in a graveyard in 2009 at this point.

Robert Wiblin: All right. We’ll try to say something new. To bring us up to the present day: until a couple of months ago you were CEO of the Center for Effective Altruism. Is that right? About a month ago you stepped away from that role.

Will MacAskill: That’s exactly right. I became CEO of the Center for Effective Altruism. Even though I was a co-founder of it many years ago, I came back as CEO a year and a half ago. That was always intended to be an interim thing while CEA was undergoing some changes. It was consolidating many different projects and trying to update its mission in accordance with the amazing developments we’ve seen in the growth of the effective altruism community.

It was pretty clear that being CEO of this organization was not my comparative advantage. It was an absolute pleasure to be able to hand over that position to Tara MacAulay, who is absolutely fantastic in this position. She’s one of the most productive, competent, sensible, smart people that I’ve ever met. I’m extremely confident that CEA is going to thrive under her leadership.

Robert Wiblin: Running CEA wasn’t your comparative advantage. What is? You’re being an actual academic now?

Will MacAskill: Hopefully that’s my comparative advantage. Certainly it seems like I’m best positioned to focus on EA as an idea, whereas Tara and the Center for Effective Altruism are focused on EA as a community. With respect to EA as an idea, my big projects at the moment are: one, I’m finishing up a book on moral uncertainty that I’m co-authoring with Toby Ord and Krister Bykvist. Second, I’m helping to set up the Global Priorities Institute, which is the first academic institute devoted to addressing theoretical questions that are raised by effective altruism. That’s based in Oxford and is led by Hilary Greaves and Michelle Hutchinson.

Then, I’m also doing a little bit of work on academic research into the fundamental questions related to effective altruism. I’m also doing a little bit of teaching which is part of my role as an academic. In particular I’m giving a course on utilitarianism as part of the Oxford undergraduate course.

Robert Wiblin: I’ve got another episode coming out with Michelle Hutchinson about the Global Priorities Institute, so maybe let’s pass on that one. Tell me more about this book on moral uncertainty. I’ve been waiting for this for a while, because I actually need to know how I’m going to make decisions given that I’m not sure which moral theory is correct. What kind of theory are you putting forward in this book?

Will MacAskill: Terrific. So, the core idea is that we make decisions under empirical uncertainty all the time. There’s been decades of research on how you ought to make those decisions. The standard view is to use expected utility reasoning or expected value reasoning, which is where you look at the probability of different outcomes and the value that would obtain given those outcomes, all dependent on which action you choose. Then you take the sum product and you choose the action with the highest expected value. That sounds all kind of abstract and mathematical, but the core idea is very simple. If I give you a beer, you think it’s 99% likely that the beer is going to be delicious and give you a little bit of happiness. But there’s a 1 in 100 chance that it will kill you because I’ve poisoned it. Then it would seem like it’s irrational for you to drink the beer. Even though there’s a 99% chance of a slightly good outcome, there’s a 1 in 100 chance of an extremely bad outcome. In fact, that outcome’s 100 times worse than the pleasure of the beer is good.

Robert Wiblin: Probably more than 100 times. At least.

Will MacAskill: At least, yeah. That’s all you need. In which case the action with greater expected value is to not drink the beer. We think about this under empirical uncertainty all the time. We look at both the probability of different outcomes and how good or bad those outcomes would be. Then, when you look at people’s moral reasoning, it seems like very often people reason in a very different way. I call this the football fan model of decision making under moral uncertainty. People say, “I’m a libertarian, or I’m a utilitarian, or I’m a contractualist.” At least, moral philosophers speak this way. Then they just say, “Well, given that, this is what I think I ought to do.” They’re no longer thinking about uncertainty about what matters morally. Instead they’re just picking their favorite view and then acting on that assumption. That seems irrational given all we’ve learned about how to make decisions under empirical uncertainty. The question I address is: supposing we really do want to take moral uncertainty into account, how should we do that?

In particular, it seems like given the obvious analogy with decision making under empirical uncertainty, we should do something like expected value reasoning, where we look at the probability that we assign to all sorts of different moral views, and then we look at how good or bad this action would be under all of those different moral views. Then we take the best compromise among them, which seems to be given by the expected value under those different moral views.
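
A minimal sketch of that kind of expected choice-worthiness calculation. All of the credences and numbers below are made-up illustrations, and they quietly assume a common scale across theories, which is exactly the hard problem of inter-theoretic comparisons discussed later in the conversation:

```python
# Hypothetical credences in two moral theories (they sum to 1).
credences = {"utilitarianism": 0.5, "non_consequentialism": 0.5}

# Hypothetical choice-worthiness of each action under each theory, on an
# assumed common scale. None of these numbers come from the episode.
choice_worthiness = {
    "kill_one_to_save_five": {"utilitarianism": 4.0, "non_consequentialism": -10.0},
    "do_nothing": {"utilitarianism": -4.0, "non_consequentialism": 0.0},
}

def expected_choice_worthiness(action: str) -> float:
    """Credence-weighted sum of an action's choice-worthiness across theories."""
    return sum(credences[t] * cw for t, cw in choice_worthiness[action].items())

for action in choice_worthiness:
    print(action, expected_choice_worthiness(action))
print("best:", max(choice_worthiness, key=expected_choice_worthiness))
```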

Robert Wiblin: Why do you think people haven’t always been taking this approach? How can this be a new idea, really?

Will MacAskill: I think it’s a surprising fact that this is such a new idea in moral philosophy. There is a tradition of Catholic theologians working on this topic. This was actually a big issue for them, in particular when you had different moral authorities. That is different priests who disagreed. There were actually a variety of views staked out about how you ought to resolve that disagreement. In general, it is remarkably neglected. I think there’s potentially a couple of reasons for this.

One is, we are actually just not very good at reasoning probabilistically in general. There’s a lot of psychological literature on that topic. Secondly, we’re particularly bad at reasoning probabilistically if we don’t get feedback. And because moral truths, if they exist, are necessarily true a priori, you’re not getting any empirical feedback telling you whether you’re acting in the right way. It’s just in general very easy, I think, to make deep moral mistakes, because the only way you can find out if you’re right or wrong is by moral reflection.

Robert Wiblin: I’m curious to know what the Catholics did. I guess they had an issue where one priest thought it was wrong to eat fish on a Friday, and another one said actually it was fine. I suppose they could say, “Well, you should just be extremely cautious and never do anything that might be wrong.” That would be one approach.

Will MacAskill: Yeah. There were actually a number of different approaches. One said that if it’s probably the case that this action is permissible, then you can act on it; it’s permissible to do so. Another view was, well, if it might be the case that it’s impermissible, then you shouldn’t. Another view is actually quite similar to the expected value view, where you want to take into account both the disagreement and the severity of the sin that you would commit. Interestingly, they actually seemed to abandon that view for reasons that are really troubling moral philosophers today who are engaging with the issue, which is this problem of inter-theoretic value comparisons.

Robert Wiblin: I think I said it was prohibited to eat fish on a Friday. I think I might have gotten that the wrong way around. Maybe you can tell I’m not a Catholic.

Will MacAskill: Yeah.

Robert Wiblin: It sounds like, if you take the expected value approach, then things might be quite straightforward. You give different probabilities to different moral theories. You look at how good or bad different actions are given those moral theories. Is there even that much more to say about this? It seems kind of easy.

Will MacAskill: If only that were true. I think there’s not just one book’s worth of content in terms of addressing some of the problems that this raises; it’s probably many volumes. First, there’s the question of how you even assign credences to different moral views. That’s already an incredibly difficult question. It’s not the question I address, because that’s just the question of moral philosophy, really.

Secondly, supposing you do have a set of credences across different moral views, how do you act? Here are a few problems that you start to face. The first is: how do you make comparisons across different moral viewpoints? Take again the hypothetical thought experiment where you can kill 1 person to save 5 people. The consequentialist view would say, “Yes, you ought to do that. It’s very wrong if you don’t do that, because 4 lives on net would be lost.” The non-consequentialist view would say, “No, it’s extremely wrong to do that, because you’re killing someone, and it’s extremely bad to kill someone, even if that would produce better consequences.” Now the question is, for whom is there more at stake? Is the consequentialist saying there’s more at stake here? Is the non-consequentialist saying there’s more at stake here? How do we address that question?

That’s the first set of issues. Second, and even deeper, maybe some moral views don’t even give you magnitudes of wrongness, or magnitudes of value. Maybe a non-consequentialist says, “Yes, it’s morally right to not kill the 1 person to save 5, and it’s wrong to kill 1 person to save 5, but it’s not meaningful to say that it’s much more wrong to do that than it is to do some other thing.” Maybe they just give you a ranking of options… in terms of choice worthiness.

A third problem is the fanaticism problem, which is the worry that perhaps under moral uncertainty, what you ought to do is determined by really small credences in infinite amounts of value or huge amounts of value. Where perhaps absolutist views say that it’s absolutely wrong to tell a lie, no matter how great the consequences. Perhaps the way you’d want to represent that is by saying that telling a lie is of infinite wrongness, whereas saving lives or something isn’t. Then, you’ve got this decision, “I can tell a lie, but I’ll save 100 lives by doing so.” Let’s suppose you have a 1 in a million credence that the absolutist view is correct. Well, one in a million multiplied by infinite wrongness is still infinitely wrong, so telling the lie would still have lower expected choice worthiness than not telling the lie.

That just seems crazy. That seems like what we ought to do is just dominated by these fringe fanatical seeming views.

Robert Wiblin: I guess it could get even worse than that because you could have one view that says something is absolutely prohibited, and another one that says that the same thing is absolutely mandatory. Then you’ve got a completely undefined value for it.

Will MacAskill: Absolutely. That’s the correct way of thinking about this. The problem of having infinite amounts of wrongness or infinite amounts of rightness doesn’t act as an argument in favor of one moral view over another. It just breaks expected utility theory. If you’ve got some probability of infinitely positive value and some probability of infinitely negative value, and you try to take the sum product over that, you end up with an undefined expected value.
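
A minimal sketch of both problems, using Python’s floating-point infinities to stand in for “infinite wrongness”; the credences and payoffs are arbitrary illustrations:

```python
import math

# Fanaticism: a one-in-a-million credence in a view on which lying is
# infinitely wrong still swamps a near-certain large benefit from lying.
ev_tell_lie = (1 - 1e-6) * 100.0 + 1e-6 * float("-inf")
ev_dont_lie = 0.0
print(ev_tell_lie < ev_dont_lie)  # True: the fringe absolutist view dominates

# Breakdown: one view says the act is infinitely wrong, another that the very
# same act is infinitely good, so the sum product is undefined (NaN).
ev_mixed = 0.5 * float("-inf") + 0.5 * float("inf")
print(math.isnan(ev_mixed))  # True: expected value theory gives no answer
```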

There are other ways, I think, you can get a similar problem, which I call a class of infectiousness problems, where you’ve got small probabilities in really wacky moral views. The issue is that the wackiness of that moral view infects the whole expected utility calculation.

Robert Wiblin: What’s an example of that?

Will MacAskill: Another example would be the problem of infectious incomparability where some moral views say that different sets of values are just absolutely incomparable. If you’re choosing between improving the lives of people and preserving the natural environment, there’s at least some views that say, “Both of these are of value, but they’re absolutely incomparable.” Any time you make a trade off between improving people’s lives at the expense of preserving the environment, there’s just no fact of the matter about whether that’s good or bad. Because you’re trading off 2 radically different sorts of values. The issue is if you have even a small probability in that being the case – of these two things being undefined in value, essentially – and then again you try to use expected value theory, even if you’ve got a very small probability in…

Robert Wiblin: Incomparability

Will MacAskill: …in incomparability, the expected value is also incomparable. It’s also undefined which action has the highest expected value because of the tiny chance that they’re radically incomparable.

Robert Wiblin: I guess one way of thinking about it is almost every moral theory has some kind of strange outcomes. Some of them fit oddly with expected value theory. If all of them have some non-zero probability of being true, when you try to put them all together into some moral uncertainty framework you end up with the bugs of all of them in your theory all at once.

Will MacAskill: Yeah, exactly.

Robert Wiblin: Interesting. Could you end up with the problem of having 2 basically identical theories of morality but one of them says that everything is more important than the other? So, you’ve got like classical utilitarianism version 1 which gives the same rank ordering as classical utilitarianism version 2, but version 2 says everything is 100 times more important than version 1?

Will MacAskill: Great. I think this is actually a key issue in the debate of moral uncertainty. There’s 2 ways that you can go. I call this the distinction between structuralists and non-structuralists. Giving a bit of context, going back to this question of inter-theoretic comparisons of value. What the structuralists want to do is say, “Okay, we look at all of these different moral theories, and then we just look at some structural features of the different theories’ value functions.”

One naïve way of doing it might be you look at what’s the best option and the worst option across all different moral views, and then you say, “Okay, I’m going to let all of those be equal so that every option’s best option and worst option is equally good. That’s how I make comparisons of value or choice worthiness across different theories…” That obviously doesn’t work for theories that are unbounded, that have no best or worst. It’s not a very good proposal.

A better one was suggested by Owen Cotton-Barratt. He actually proved some interesting results suggesting that the best structural account is to normalize different moral views at the variance of their value functions: saying that for every different moral view, the mean value or choice worthiness of options is the same, and one standard deviation of goodness or choice worthiness is the same across all moral views.

Note, if you have that way of making comparisons of choice worthiness across different moral views, you can’t have this idea that different moral theories can be identical in the structure of their ranking of options, but that one is just much more important than another.

Robert Wiblin: Because it just gets down weighted, basically.

Will MacAskill: Because it would just get down weighted. Because this utilitarianism 1 and utilitarianism 2, where utilitarianism 2 allegedly thinks that everything is 10 times more important, if you’re normalizing at the variance then their mean and standard deviation would be the same. It would mean that actually it’s not really 10 times as important. Everything would be equally as important.
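
A minimal sketch of what variance normalization might look like, and of the collapse just described: each theory’s choice-worthiness numbers are rescaled to a common mean and spread before being combined, so a theory on which “everything is 10 times more important” normalizes to exactly the same values. The three-option setup and numbers are illustrative assumptions, not Cotton-Barratt’s own presentation:

```python
import statistics

def variance_normalize(values: dict) -> dict:
    """Rescale a theory's choice-worthiness so its options have mean 0 and stdev 1."""
    mean = statistics.mean(values.values())
    sd = statistics.pstdev(values.values())
    return {option: (v - mean) / sd for option, v in values.items()}

utilitarianism_1 = {"A": 1.0, "B": 2.0, "C": 6.0}
# "Utilitarianism 2": allegedly everything is 10 times more important.
utilitarianism_2 = {option: 10 * v for option, v in utilitarianism_1.items()}

print(variance_normalize(utilitarianism_1))
print(variance_normalize(utilitarianism_2))
# The two normalized dictionaries are identical, so under variance
# normalization the "amplified" theory collapses back into the original,
# which is exactly the consequence questioned next.
```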

However, I actually think that is a coherent possibility. I think you can have…

Robert Wiblin: It’s very convenient just to wish that away.

Will MacAskill: Yeah, but I think actually there are good arguments for thinking that you can have one moral view that’s identical to another in what’s called its cardinal structure: it has exactly the same ranking of options in terms of how good or bad or wrong they are, but it just thinks that everything is way more important than the other does. I think there are a few ways of seeing this, but the key thing is to appreciate that different moral views can differ not just in how they rank options, but also in what philosophers call their right-makers, which are the metaphysical grounds for thinking that certain entities have value or not.

Robert Wiblin: It’s the thing that makes it true.

Will MacAskill: The thing that makes it true. Exactly. Imagine someone starts off with a partialist consequentialism: “I’m a utilitarian, or consequentialist, insofar as I just want to maximize the good, but I think that my friends and my family count 100 times as much as the welfare of distant strangers.” Suppose that person revises their view such that everyone is equally weighted. It seems like there are 2 ways of revising that view. One is to think, “Oh, I had these special obligations with respect to my friends and family. Actually, that just doesn’t make any sense. All that matters is improving the amount of welfare, in the same way as I would think about strangers.” That person down weights how much they care about their friends and family. Let’s call that Benthamite utilitarianism.

The second way, though, that they could change their view is by thinking, “Oh, no, actually I’m kind of like a brother to all. It’s arbitrary that I have these close relationships with my friends and family and not with strangers. If only I could get to know them, I would have the same sort of relationships.” That means that if I fail to benefit a distant stranger, I’m actually kind of doing 2 things wrong. One is that I’m failing to promote their welfare in this impartial sense. Also, I’m violating this special bond that I have to each and every person. Call that kinship utilitarianism.

Robert Wiblin: So it’s double wrong.

Will MacAskill: It’s double wrong. That’s right. There’s 2 things making the action wrong. It’s just that those 2 things always go in step. I think there’s other ways in which you can see…

That suggests that we can distinguish these 2 moral views, because they appeal to 2 different facts. They have 2 different features, 2 different right-makers. Unless we’re already committed to this structuralist view, it seems like there’s no reason why we would think that they have to equal each other in terms of strength of rightness or wrongness.

You can think of other things, like you might have different credences in these 2 different moral views, for example. You think, “Wow, it’s totally absurd to think that special obligations exist, let alone to think special obligations exist to everyone.”

Robert Wiblin: Yeah.

Will MacAskill: You might just think that’s so implausible. It also seems like what attitudes might be fitting can vary between these two views as well. Consequentialists don’t care much about fitting attitudes, but many other philosophers do. Supposing you start off with this partialist consequentialism, and then you move to the Benthamite view. It might be fitting to be kind of sad. It might be fitting to think, “Wow, there’s just way less value in the universe than I thought.”

Robert Wiblin: Because I don’t have special obligations anymore.

Will MacAskill: That’s right. This special relationship I thought I had…

Robert Wiblin: Special obligations to everyone, but special obligations, nonetheless.

Will MacAskill: Yeah. That’s right. If you have this view that… In the first view you might just think, “Wow, it’s kind of sad that my morality is so thin. It’s just about promoting welfare, but kind of impartially considered.” Whereas in the second view you might think, “Wow, oh my God.” It actually might be kind of inspiring; it might be fitting to feel that way, where suddenly you feel much more connected to everyone else. Finally, I think unless you’re question begging, it seems to make a difference under moral uncertainty too.

I think there are independent arguments against structuralist views which, I think, give further support to the idea that you might have what I call amplified theories.

Robert Wiblin: Is this like the 2 envelope paradox? It’s really reminding me of that.

Will MacAskill: You do get 2 envelope style problems under moral uncertainty, but that would determine more what credences you give to different views rather than whether this is possible. Here’s a case: suppose I think that humans are super valuable and non-human animals are not very valuable. Then I move to a view where I say they’re equally valuable. Then you might say, “Whoa, so animals are just way more important than I thought.” It could also be the case that you think, “No, humans are just less important.”

Which way you go makes a big difference to how you handle uncertainty between those 2 different moral views. It’s only if you’re boosting up the value of animals do you…

Robert Wiblin: That that will then start to dominate…

Will MacAskill: That starts to dominate. If you don’t, then it doesn’t. What credence you assign to those 2 different views has at least a 2 envelope-y sort of feel to it.

Robert Wiblin: If you haven’t heard of the 2 envelope paradox it’s a real treat. I’ll stick up a link to that, and you should definitely check it out. It’s one of my favorite paradoxes.
Why do you call it the structuralist view? Where does the structure come into it?

Will MacAskill: The idea is that it’s only looking at structural features of a moral theory, of what I call its choice worthiness function.

Robert Wiblin: What’s a structural feature?

Will MacAskill: A structural feature like the mean, or the range, or one standard deviation. The idea is that all I need to give you is options and numbers, basically. Then you have everything you need to know.

Robert Wiblin: Okay. Like moral philosopher statistician, basically.

Will MacAskill: Yeah, basically, yeah. Perhaps they’re not the best terms, but…

Robert Wiblin: Okay. To begin with, this all sounded very promising. We’re just going to do the math. Now it sounds like we’re in a swamp here. We’ve got lots of different problems. We’ve got fanaticism. We don’t know how to compare between them. I guess there’s also the question of how you compare between different meta-ethical theories. What if there’s a theory that says nothing matters? What do you do with that?

Will MacAskill: Yeah. There’s meta-ethical uncertainty as well. Uncertainty about the nature of morality.

Robert Wiblin: Meta-meta-ethical uncertainty too? Does it just keep going up?

Will MacAskill: Well, you can definitely have uncertainty about how you make decisions under moral uncertainty.

Robert Wiblin: Okay.

Will MacAskill: There’s 2 ways to go there. One is to say, moral uncertainty is what you rationally ought to do given uncertainty about what you morally ought to do. That makes sense because you’re talking about 2 different types of oughts. There’s rationality and there’s morality.

Then, when you go up a higher level, you’re saying what rationally ought you to do given you’re uncertain about what you rationally ought to do. You might go one of 2 ways. You might say, “Well, that’s just getting into nonsense.” You can’t have a recursive function like that. But, that feels kind of arbitrary. You might instead want to say, no, you just keep going higher and higher up this level until you get some sort of convergence. Or, maybe you don’t get convergence, and there’s just no fact of the matter about what you ought to do. I’m most sympathetic to that latter view, and also wrote a paper about the relevance of uncertainty about rationality, or decision theoretic uncertainty in the context of Newcomb’s problem.

Robert Wiblin: Yeah, I guess in defense of that perspective, that seems to show up again and again in philosophy. You get a justification that creates a need for another justification, on and on indefinitely.

Will MacAskill: That’s right. What philosophers have converged on is the idea that there’s no fully internal justification. At some point you always need to have a principle that is justified whether or not you believe it to be justified. Suppose you take what I suggested, which is that you’ve got all of these higher levels. If they converge on a particular answer, then go with that. If they don’t, then maybe there’s no fact of the matter.

What if someone doesn’t believe that principle? The whole principle I just gave. What should they do then? Maybe you say, “Okay, well, you’ve got to take that into account, and take that into account.” In some even more elaborate way. Every time I’m giving a certain account I’m suggesting this is what you rationally ought to do whether or not you believe it’s what you rationally ought to do.

I think what’s true is at some point you have to have what are called external norms, which are things that you ought to do whatever you think about them.

Robert Wiblin: Huh. We’ll come back to moral uncertainty, specifically in a minute. My understanding with these kind of infinite regress cases was you could either just bite the bullet and say, yes, you have an infinite regress. Do these things just keep justifying one another forever? Or you can have a circle. A justifies B, which justifies C, which justifies A. Or, you have some bedrock principle that absolutely is true and doesn’t require justification. Are you saying philosophers have tended toward that last view? They don’t want to accept infinite regress cases as acceptable, that’s the way the world is?

Will MacAskill: Yeah. At least, there are 2 understandings of that latter view. One of which is, this principle is just self evidently justified. That’s what Descartes thought about the existence of God, which then was a bedrock for his whole epistemology: that you can reflect on the idea of God and see that he must exist. No one thinks this is a very good argument. But that was his kind of hope.

Robert Wiblin: I’m sure some people do.

Will MacAskill: It’s kind of like the proposition itself is self justifying. The much more common view is to be an externalist, which is to say, “I can be justified in believing, for example, that I have a hand, even if I’m not justified…” I need to be careful to get this statement right. “I can be justified in believing that I have a hand, even if I’m not justified in believing that I’m justified in believing I have a hand.” The idea is that being in a certain causal connection with my hand gives me justification. That has nothing to do with my beliefs about justification. In a sense, that again stops the regress from the very beginning. Some external factor stops the regress rather than some internal factor.

Robert Wiblin: Okay. Let’s go back to moral uncertainty decisions.

Will MacAskill: I’d be happy to talk about skepticism.

Robert Wiblin: That was starting to get above my head right there. I think we’ll come back to the simple stuff that I need to know to actually make decisions. We have this problem of inter-theoretic comparison, of not being able to do that. Then we had a seeming solution where we’re going to normalize so no one theory can dominate. Then we’re going to get a weighting over the functions in proportion to how likely they are.

Will MacAskill: Yeah.

Robert Wiblin: But, you don’t buy that. You’re not going to accept that, actually. So, where do we stand? What is your conclusion in your book?

Will MacAskill: Ultimately, for practical purposes, I think the way we choose how… Supposing we do accept the amplified theories, there’s not really a question of inter-theoretic value comparisons anymore. The question, instead, is how do you distribute your credences among all of these different possible varieties of utilitarianism. We started off with this: the utilitarian says kill 1 person to save 5, the non-consequentialist says don’t do that. The way I presented it was as if we have 2 theories and you’re deciding how to compare the 2. Instead, you’ve got an infinite family of theories: one which says it’s very important, then one on which it’s even more important, then even more important, or less and less important, on both sides. The question of how to make inter-theoretic comparisons actually boils down to a question that’s much more like normal moral philosophy, which is: of these different moral views, what credences ought you to have in them?

I think at that point you can start to offer tips, heuristics, on what credences to have. For example, suppose I have 2 moral views and they differ just on the extension of the bearers of value. Let’s say one view is utilitarianism, which says only humans are of value. The second is utilitarianism-star, which says humans and animals are of value. It would be really weird to have credences in those 2 views on which one gives humans a thousand times the weight of the other, because it seems like all they’re doing is differing on how many things are of value, on what types of things are of value. There’s also a principle of epistemic conservatism such that, if you’re going to modify the view, you should modify it in as few ways as possible. That’s the version that should have the highest credence.

There are actually 2 questions about inter-theoretic comparisons. One is the formal question of how you make sense of this: what makes it the case that these comparisons are meaningful? That’s a very philosophical question. The second, more practical question is, “I’ve got these two theories, how do I actually compare them?” I’m saying that looks a lot more like first order moral philosophy than people have normally suggested it to be.

Just to clarify a little bit more, there’s an analogy between this question and the question of interpersonal comparisons of well being, where again I’ve got these 2 questions. Say I pinch you and punch someone else. We have an intuition that it’s worse for this other person to get punched than it is for you to get pinched. But then there are 2 questions. One is, is that judgment true? Secondly, what makes it the case that it’s true? Economists and philosophers have worked on that question for quite a while.

Robert Wiblin: Okay. I imagine some listeners’ heads are kind of spinning at all this moral philosophy. What are the practical takeaways that people should use in actually making decisions about what’s moral, given what we know now about…

Will MacAskill: I think there are still lots of open questions about what the practical implications are, but broadly I would sum it up as follows. I think it means… I’m in particular going to talk to people who come from more of a consequentialist-sympathetic, utilitarian perspective. Firstly, I think it means being very careful about violating rights. There are 2 big ways in which consequentialism differs from non-consequentialism. One is that consequentialism says that the ends always justify the means; there are no side constraints on action. The second is that there’s no realm of the permissible; there’s no area where it’s okay for you to do whatever, kind of, you want.

Robert Wiblin: Everything that’s not prohibited is obligatory, basically.

Will MacAskill: Basically.

Robert Wiblin: Something is either prohibited or obligatory.

Will MacAskill: Basically, that’s right. For consequentialists, buying yourself a nice meal because it’s nice for you, even though you could use that money to do something else that would do more good, would be wrong on consequentialism. I think between non-consequentialism and consequentialism, they each win one of those fights, as it were. In the case of violating side constraints, in those kill 1 to save 5 cases and so on, in general you ought to go with the non-consequentialist view and not violate that side constraint, because the non-consequentialist thinks that’s really high stakes in a way that the consequentialist view doesn’t. Especially when we think about real life cases… it’s actually quite rare to think of cases where the consequentialist is really keen on violating a side constraint, because there are normally good practical reasons not to do that, too.

I think that the consequentialist wins the battle of the permissible, because if I’m saying, “Well, I can either spend this money on myself or give it to somewhere where it’ll do more good,” the non-consequentialist will almost always say, “It’s permissible to do either.” Whereas the consequentialist says, “No, it’s impermissible to spend it on yourself. It’s obligatory to spend it on others.” In which case the consequentialist is thinking it’s much more important than the non-consequentialist does. So, you end up with, broadly, this consequentialism plus rights view. You ought to do as much good as you can, constrained by not violating anyone’s rights.

Then, the second aspect is with respect to what you’re trying to maximize on that consequentialist part. I think that means you should give yourself a very broad understanding of what’s valuable. Views on population ethics differ: maybe it’s only people that will definitely exist that you want to benefit, or that it’s morally important to benefit or to not harm. Many other views of population ethics say, “Well, it’s actually good if you can bring more people with really positive lives into existence. That’s a morally important thing to do. You’re making the world better.”

Combine that with the premise that there are just so many potential people in the future if the human race doesn’t go extinct: trillions upon trillions of people. That means that that kind of long term future view becomes very important. In effect, you might diminish the value of benefiting future people or bringing future people into existence, compared to benefiting present people now. You give present people a little bit more weight. But because the stakes are so high, actually in general you should focus on the very long run future. There’s just so much potential value at stake, even if you don’t find population ethics views like the total view, or other views that think that it’s good to bring happy people into existence and bad to bring very unhappy people into existence, that plausible. Because the stakes are so high, you should really focus on that.

It also means… a very similar argument, I think, goes for killing non-human animals as well. Even if you don’t find it very plausible, there’s just so much you can do to benefit them, given the suffering they have in the… Though I think when you do the numbers it’s still very small compared to the number of future creatures, humans and non-human animals.

Then, the third aspect, I think, is taking what I call the value of moral information very seriously. If you really appreciate moral uncertainty, and especially if you look back through the history of human progress, we have just believed so many morally abominable things and been, in fact, very confident in them. If you just look at…

Robert Wiblin: Slavery is a positive good. It’s the natural order of things. It has to be that way.

Will MacAskill: That’s right. Even for people who really dedicated their lives to trying to work out the moral truths. Aristotle, for example, was incredibly morally committed, incredibly smart, way ahead of his time on many issues, but just thought that slavery was a pre-condition for some people having good things in life. Therefore, it was justified on those grounds. A view that we’d now think of as completely abominable.

That makes us think that, wow, we probably have mistakes similar to that. Really deep mistakes that future generations will look back on and think, “This is just a moral travesty that people believed it.” That means, I think, we should place a lot of weight on moral option value and gaining moral information. That means just doing further work in terms of figuring out what’s morally the case. Doing research in moral philosophy, and so on. Studying it for yourself.

Secondly, into the future, ensuring that we keep our options open. I think this provides one additional argument for ensuring that the human race doesn’t go extinct for the next few centuries. It also provides an argument for the sort of instrumental state that we should be trying to get to as a society, which I call the long reflection. We can talk about that more.

Robert Wiblin: Humanity should thrive and grow, and then just turn over entire planets to academic philosophy. Is that the view? I think I’m being charitable there.

Will MacAskill: Yeah, obviously the conclusion of a moral philosopher saying, “Moral philosophy is incredibly important” might seem very self-serving, but I think it is straightforwardly the implication you get if you at least endorse the premises of taking moral uncertainty very seriously, and so on. If you think we can at least make some progress on moral philosophy. If you reject that view you have to kind of reject one of the underlying premises.

Robert Wiblin: I had a bunch of follow ups there. It sounds like there are 2 kind of strong dominance arguments that you’d be willing to accept here. One is the deontological argument against murder. You have to say, “Don’t murder except in very unusual circumstances, even if it looks good on other theories.”

Will MacAskill: Yeah.

Robert Wiblin: We want to keep humanity around and potentially increase the number of beings that exist, because that’s plausibly valuable under some utilitarian views.

Will MacAskill: Mm-hmm (affirmative).

Robert Wiblin: What about odd conclusions that you might get from virtue ethics or subjectivist theories or contractualism, or all of that? Are there other things that we’re ignoring here?

Will MacAskill: Yeah. I’ve given a very broad brush strokes kind of categorization of the landscape. The broad brush strokes division is between the non-consequentialist views with side constraints and the consequentialist views that reject them. So, as for various forms of virtue ethics and what I call, somewhat pejoratively, no-theory deontologists, which is probably the most common view among moral philosophers (they don’t have any overarching moral theory, but lots of pieces of non-consequentialist moral theory)…

There aren’t so many things that those views endorse that are radically at odds with the consequentialist view beyond that…

Robert Wiblin: There’s no case where virtue ethics says, “You absolutely must be courageous in this situation, no matter what the consequences.”?

Will MacAskill: It’s not as extreme in that way. Some virtue ethicists say, “You have an obligation to perfect yourself, to work on your own skills.” But that’s not a case where there’s going to be like a really big…

Robert Wiblin: Tension.

Will MacAskill: Firstly, if you don’t do that, it’s not like they’re saying, “This is the worst thing ever.” It’s also not the case that there’s a really big tension between that and what you might want to do anyway. Those are the cases that are most interesting from the moral uncertainty perspective.

Robert Wiblin: Maybe you haven’t looked into this yet, but what about anything from continental philosophy? Do they have any really strong arguments that might feature in the equation, even if you think that they’re fairly unlikely to…

Will MacAskill: There are things related to the idea of exploitation that could be relevant. This is a view that I find very hard to understand or sympathize with: if you and I engage in a trade, let’s say, but I’m much richer than you are. The background is unjust because I have more resources than I ought to have. But you’re perfectly rational, and so on, and you engage in it voluntarily… So, that’s making everyone better off by their own lights. A lot of moral views, especially those informed by Marxist theory, might say that’s actually bad. Even though the situation has improved for both parties, including the worst off…

Robert Wiblin: And it was all consensual.

Will MacAskill: And it was all consensual, that’s still wrong because it’s kind of exploitative.

Robert Wiblin: So, a consequentialist might say in that case, “Well, this isn’t the best thing that could happen because maybe the rich person should give the poor person all of their money.”

Will MacAskill: Yeah.

Robert Wiblin: But, on some of the consensual theories they might say, “It’s actually worse for them to interact and do this trade than to do nothing at all.”

Will MacAskill: Yeah. So, there’s a variety of things you can say to kind of make more sense of this. The consequentialist might say, “Maybe focusing on how good this is means that you’re not paying attention to the real thing, which is the unjust background condition. Or it makes the unjust background condition even more salient.” But what the more Marxist-inspired theorist might say is, “That’s not really consensual. You having this choice in such a constrained circumstance, where you’re so badly off, you just can’t make a consensual choice, because your options are already so limited.” That’s why it’s wrong.

What I want to do is put that kind of thing into the side constraints bucket as well. Suppose I was a corporation, let’s say, and I’m earning to give as a coffee producer. Starbucks or something. Even in the case where I’m planning to donate all of my profits, and therefore I calculate that I could do more good by paying you the minimum that I can get away with and then donating the excess, I think under moral uncertainty I shouldn’t do that. I should pay you more, something closer to a fair wage, reducing the amount of good I can do but ensuring that I avoid violating the side constraint against exploitation. That’s one case.

Other times a lot of continental philosophers are engaged in a very different project.

Robert Wiblin: And it’s not clear how you would bring them into some comparable…

Will MacAskill: That’s right, yeah. I think a lot of continental philosophers would just think, “As soon as I’ve started to model morality using the analytical tools I’m borrowing from economics and so on, I’m already completely on the wrong path. Morality’s not an algorithm. It’s about using your judgment in particular cases. Trying to impose formal structure is just automatically going to lead you in the wrong direction.”

This, again, comes from this meta-moral uncertainty of maybe this whole formal framework is totally the wrong way to go. Again, it’s just not really clear what you’d do under that circumstance.

Robert Wiblin: Speaking of which, a thing that I hear from a lot of people is that they just don’t believe that anything is right or wrong. Does the moral uncertainty stuff apply to them as well?

Will MacAskill: Yeah, I think it does. There’s an interesting argument in this area, which is: suppose you just have this view. Even suppose you’re pretty confident in it, 90% confident, that nothing is right and nothing is wrong. You’re a nihilist, or error theorist as philosophers would say. Well, now suppose you’re actually making decisions. Should you give some of this money to a cost effective charity? Well, 90% of your credence says it doesn’t matter either way. You don’t do anything wrong if you do, but you don’t do anything wrong if you don’t either. That doesn’t really weigh in the reasons about what you should…

But you’ve got this 10% credence that no, there are facts of the matter about what’s right and what’s wrong. It says, let’s suppose, that it’s really important to give the money to the effective charity rather than keep it for yourself. Then there’s nothing to be lost by doing so. There’s nothing to be lost by acting as if it is the case, or taking it as a practical assumption that it is the case, that some things are right and wrong.

Robert Wiblin: Because on that view nothing is wrong and nothing is right.

Will MacAskill: Yeah, because from the nihilist perspective you’re not doing anything wrong. There’s just no reason to do anything. From the moral realist perspective, or understood broadly as just saying there are some things that are right and some things that are wrong, it’s very important not to do that. There’s a kind of dominance argument. I actually think – you know, I’ve written about this topic – I actually think that argument that I’ve given you is much harder to make out formally. It sounds very simple. I think it’s actually very complex. I have this belief that that argument does work.

Robert Wiblin: That you’ll find a way.

Will MacAskill: Yeah. The difficulty of making it work is just a matter of ironing out some bugs in the philosophy. I do think, actually, the argument isn’t simple.

Robert Wiblin: That’s that paper, The Infectiousness of Nihilism, right?

Will MacAskill: That’s right. Yeah.

Robert Wiblin: Yeah, I’ll stick up a link to that. Before, you mentioned that if humanity doesn’t go extinct in the future, there might be a lot of time and a lot of people, and very educated people, who might be able to do a lot more research on this topic and figure out what’s valuable. That was the long reflection. What do you think that would actually look like in practice, ideally?

Will MacAskill: Yeah. The key idea is just, different people have different sets of values. They might have very different views about what an optimal future looks like. What we really want ideally is a convergent goal between different sorts of values, so that we can all say, “Look, this is the thing that we’re all getting behind that we’re trying to ensure that humanity…” Kind of like this is the purpose of civilization. The issue, if you think about the purpose of civilization, is that there’s just so much disagreement. But maybe there’s something we can aim for that all sorts of different value systems will agree is good. Then, that means we can really get coordination in aiming for that.

I think there is an answer. I call it the long reflection, which is you get to a state where existential risks or extinction risks have been reduced to basically zero. It’s also a position of far greater technological power than we have now, such that we have basically vast intelligence compared to what we have now, amazing empirical understanding of the world, and then tens of thousands of years to not really do anything with respect to moving to the stars or really trying to actually build civilization in one particular way, but instead just to engage in this research project of what actually is of value. What actually is the meaning of life? And have, maybe it’s 10 billion people, debating and working on these issues for 10,000 years, because the importance is just so great. Humanity, or post-humanity, may be around for billions of years. In which case spending a mere 10,000 years is actually absolutely nothing.

In just the same way as, if you think as an individual, how much time should you reflect on your own values before choosing your career and committing to one particular path?

Robert Wiblin: Probably at least a few minutes. At least .1% of the whole time.

Will MacAskill: At least a few minutes. Exactly. When you’re thinking about the vastness of the potential future of civilization, the equivalent of just a few minutes is tens of thousands of years.

Then there are questions about how exactly you structure that. I think it would be great if there was more work done really fleshing that out. Perhaps that's something you'll have time to do in the near future. One thing you want to do is have as little locked in as possible. So, you want to be very open both on… You don't want to commit to one particular moral methodology. You just want to commit to things that seem extremely good on basically whatever moral view you might think ends up as correct, or whatever moral epistemology might be correct.

Just people having a higher IQ, everything else being equal, that just seems strictly good. People having greater empirical understanding just seems strictly good. People having a better ability to empathize. That all seems extremely good. People having more time. Having cooperation seems extremely good. Then I think, yeah, like you say, many different people can get behind this one vision for what we want humanity to actually do. That's potentially exciting because we can coordinate.

It might be that one of the conclusions we come to takes moral uncertainty into account. We might say, actually, there are some fundamental things that we just can't ultimately resolve, and so we want to do a compromise between them. Maybe that means that part of civilization is devoted to common sense, thick values like the pursuit of art, and flourishing, and so on, whereas large parts of the rest of civilization are devoted to other values like pure bliss, blissful states. You can imagine compromise scenarios there. It's just large amounts of civilization… The universe is a big place.

Robert Wiblin: So if we can’t figure out exactly the one thing that’s definitely valuable then we could do a mixture of different things.

Will MacAskill: That’s right.

Robert Wiblin: Alright. That was quite a while on moral uncertainty, but we’ve kind of only scratched the surface of what I imagine is in the book. When’s the book coming out?

Will MacAskill: Probably still a while.

Robert Wiblin: Okay.

Will MacAskill: Well, I need to finish the thing and submit it, and then it will go through peer review. So maybe, I think, a year to a year and a half is when the book will actually come out. It’s an academic book, so if your mind was reeling from some of the more theoretical moral philosophy that we were talking about, there’s a lot more of that in the book.

Robert Wiblin: We have a very smart audience, Will.

Will MacAskill: I’m sure.

Robert Wiblin: I’m sure they’ll all go out and buy it and fill up the auditoriums on your tour.

Will MacAskill: I bet.

Robert Wiblin: What’s the book going to be called? Do you know yet?

Will MacAskill: Moral Uncertainty.

Robert Wiblin: Moral Uncertainty. Okay. You’re really staking out the territory. Okay.
Taking moral uncertainty seriously is slightly unusual in philosophy. At least taking it quite this seriously. What are some of the most unusual philosophical positions that you hold, relative to your profession anyway?

Will MacAskill: In general, I form beliefs that take disagreement among my peers very seriously. That means that in general my views are actually moderate with respect to other philosophers. My credence in consequentialism versus non-consequentialism is actually about 50/50, even if I find the arguments for consequentialist positions more compelling in general. Loads of people disagree with me, so I take that into account.

But then, when I'm in the seminar room it's a little bit different. I will be more inclined to stake out my position, because the question in the seminar room isn't what exactly the correct views are after updating on disagreement. It's about trying to work out the merits of different views, taking that as a first-order question. There I'm probably most distinctive in thinking that people aren't taking seriously enough the arguments for classical utilitarianism. That's the combination of the utilitarian theory of the good, which says you just add up well-being across different people; hedonism as a theory of welfare, so the only things that are good or bad are conscious states; and the total view of population ethics, which says that the goodness or badness of a state of affairs is given by the total well-being, where you can increase the amount of well-being in the world by adding people as well as by improving the lives of people who are already there.

I think there’s a number of arguments in favor of this. Moral philosophy at the moment, I think, is either often not fully aware of how strong the arguments are, or dismissive of utilitarianism for weak reasons.

Robert Wiblin: Okay. It seems like through history the popularity of utilitarianism has kind of waxed and waned.

Will MacAskill: Mm-hmm (affirmative).

Robert Wiblin: There’s a period in the ancient Greek world where it was quite a popular view. It seems like when Bentham was writing it was quite a fashionable view. It’s currently not super fashionable. It hasn’t been so much lately.

Will MacAskill: That’s right. In terms of at least recent history, there’s Bentham and Mill, and then Sidgwick in the late 19th and early 20th centuries. In the early 20th century it was extremely common. It was basically the dominant view among moral philosophers.

Then, there was a dry patch with respect to moral philosophy because a meta-ethical view called emotivism or prescriptivism had become very dominant, which just said that when we do moralizing we’re just expressing our attitudes towards… We’re just saying “yuck” to murder and “hurray” to giving to charity.

Robert Wiblin: Then there's not a lot of substance there.

Will MacAskill: Then there’s just not much to do because it’s just that we’re cheering for different football teams. There’s not really a subject matter of moral philosophy.

The big change really happened in the early 70s with John Rawls publishing A Theory of Justice. That's not the only change, but he really re-invigorated the idea that you could do moral philosophy. That partly came from a different meta-ethical view, which was a kind of constructivism. The thought is that moral theorizing is like linguistics. In linguistics you take all of these intuitions that we have about what things are appropriate to say and what are not, grammatically speaking, and then you build a set of rules to make sense of that. He thought we could do the same for morality. Even though there's no ultimate fact about what is morally correct or not, you can still come to a better understanding of what my, or at least our, moral intuitions and moral framework are. In the same way as there genuinely is a discipline of linguistics, and the results of that are non-obvious.

That was the first thing he did. Secondly, he had a bunch of trenchant arguments against utilitarianism. And finally, he suggested this methodology of reflective equilibrium, where the way to make progress in moral philosophy and political philosophy is to go back and forth between intuitions that you have about particular cases, like don't kill 1 person to save 5, and more theoretical judgments that also seem plausible. For example: if you can make some people better off, and no one worse off, that's a good thing to do. That's a theoretical judgment, not about a particular case, that seems very compelling.

Since Rawls, the dominant paradigm has been what we might call a no-theory deontology. Non-consequentialism has become much more widely accepted. Now the ratio of philosophers is maybe 2/3 non-consequentialist to 1/3 consequentialist. My view is that philosophers' understanding of meta-ethics has changed quite a bit since Rawls, and now moral realism, even quite staunch moral realism, is much more widely endorsed. What hasn't happened is that philosophers haven't seen the link between meta-ethics and moral methodology, and from there the move to which first order normative theories you should endorse. I think if they did see that more clearly, it would result in greater support for views that are more at odds with common sense, like classical utilitarianism.

Robert Wiblin: Alright, straight out. What are the arguments for classical utilitarianism?

Will MacAskill: I think there’s at least half a dozen that are very strong.

Robert Wiblin: They don’t all have to work then.

Will MacAskill: True, yeah.

Robert Wiblin: Got a bunch of options.

Will MacAskill: One that doesn't often get talked about, but that I think is actually very compelling, is the track record. When you look at scientific theories, how do you decide whether they're good or not? Well, in significant part by the predictions they made. We can do that to some extent with moral theories as well, though with a much smaller sample size. For example, we can look at the bold moral claims, the ones that went against common sense at the time, that Bentham and Mill made, and compare them to the bold moral claims that Kant made.

When you look at Bentham and Mill, they were extremely progressive. They campaigned and argued for women's right to vote and the importance of women getting a good education. They had very liberal attitudes towards sexuality. In fact, some of Bentham's writings on the topic were so controversial that they weren't published until about 200 years later.

Robert Wiblin: I think Bentham thought that homosexuality was fine. At the time he was basically the only person who thought this.

Will MacAskill: Yeah. Absolutely. Yeah. He’s far ahead of his time on that.

Also, with respect to animal welfare. Progressive even with respect to now. Both Bentham and Mill greatly emphasized the importance of treating animals well… They weren't perfect, though. Mill and Bentham's views on colonialism were completely distasteful. Completely distasteful from today's perspective.

Robert Wiblin: But they were against slavery, right?

Will MacAskill: My understanding is yeah. They did have pretty regressive attitudes towards colonialism judged by today's standards. It was common at the time, but it was not something on the right side of history.

Robert Wiblin: Yeah. Mill actually worked in the colonial office for India, right?

Will MacAskill: That’s right, yeah.

Robert Wiblin: And he thought it was fine.

Will MacAskill: Yeah, that’s right.

Robert Wiblin: Not so great. That’s not a winner there.

Will MacAskill: Yeah. I don't think he defended it at length, but in casual conversation he thought it was fine.

Contrast that with Kant. Here are some of the views that Kant believed. One was that suicide was wrong. One was that masturbation was even more wrong than suicide. Another was that organ donation is impermissible, and even that cutting your hair off to give it to someone else is not without some degree of moral error.

Robert Wiblin: Not an issue that we’re terribly troubled by today.

Will MacAskill: Exactly, not really the thing that you would stake a lot of moral credit on.

He thought that women have no place in civil society. He thought it was permissible to kill illegitimate children. He thought that there was a ranking in the moral worth of different races, with, unsurprisingly, white people at the top. Then, I think, Asians, then Africans and Native Americans.

Robert Wiblin: He was white, right?

Will MacAskill: Yes. What a coincidence.

Robert Wiblin: Fortunate coincidence I suppose for him.

Will MacAskill: I don't want this to be a pure ad hominem attack on Kant, because there's an underlying lesson to this, which is: when we look at the history of moral thought and we look at all the abominable things that people have believed, and even felt very strongly about, we should think, “Well, it'd be extremely unlikely if we're not in the same circumstance.” We probably, as common sense, believe lots of truly abominable things. That means that if we have a moral view that's all about catering to our common sense intuitions, we're probably just enshrining these biases and these moral errors.

What we want to have instead is a moral view that criticizes common sense so that we can move beyond it. Then when you look at how utilitarianism has fared historically it seems to have done that in general, not always, but in general done that very well. That suggests that that progress might continue into the future.

Robert Wiblin: And in as much as the conclusions are surprising to us now, well, the conclusions from the past were surprising to people in the past, but we agree with them now. So we shouldn't be too surprised.

Will MacAskill: Absolutely.

Robert Wiblin: Okay. That was argument 1 of 6. I might have to keep you to 3 so we can finish today.

Will MacAskill: Yeah.

Robert Wiblin: What are the other best 2 arguments for utilitarianism?

Will MacAskill: The other best 2, I think, are: one is Harsanyi's Veil of Ignorance argument. The second is the argument that moves from rejecting the notion of personhood. We can go into the first one, Harsanyi's Veil of Ignorance. John Harsanyi was an economist but also a philosopher. He suggested the following thought experiment: Morality's about being impartial. It's about taking a perspective that's beyond just your own personal perspective, somehow from the point of view of everyone, or society, or the point of view of the universe.

The way he made that more precise is by saying, “Assume you didn't know who you were going to be in society. Assume you had an equal chance of being anyone. Assume, now, that you're trying to act in a rational, self-interested way. You're just trying to do whatever's best for yourself. How would you structure society? What's the principle that you would use in order to decide how people do things, from this perspective of the social planner?” He proved that if you're using expected utility theory, which we said earlier is really well justified as a view of how to make decisions under empirical uncertainty, and you're making this decision, the rule you'll come to is utilitarianism. You'll try and maximize the welfare of everyone, the sum total of welfare in society.

Robert Wiblin: Because you care about each of those people equally because you could be each of them with equal probability.

Will MacAskill: Exactly. That’s right.
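A minimal sketch of the step behind Harsanyi's result, with hypothetical welfare numbers: if you have an equal chance of being each person, your expected welfare is just average welfare, which (for a fixed population size) ranks societies exactly as total welfare does.

```python
# Behind Harsanyi's veil: you have an equal chance of being anyone, so the
# expected welfare of a rational self-interested chooser is the average
# welfare, which ranks fixed-size societies exactly as total welfare does.

def expected_welfare(society):
    """Expected welfare if you are equally likely to be each person."""
    return sum(society) / len(society)

# Hypothetical welfare levels for two societies of the same size.
society_a = [10, 10, 10, 10]  # equal, moderately good lives
society_b = [30, 1, 1, 1]     # one very good life, three poor ones

print(expected_welfare(society_a))  # 10.0
print(expected_welfare(society_b))  # 8.25 -> society_a preferred, matching total welfare
```

The full theorem needs more than this, in particular the expected utility axioms Will refers to, but the equal-chance step is where the sum over everyone's welfare comes from.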

Robert Wiblin: That suggests that when Rawls was saying behind a Veil of Ignorance you maximize the welfare of the person who’s worst off, that he was mistaken about that?

Will MacAskill: Great.

Robert Wiblin: At least in a mathematical sense?

Will MacAskill: I do think he was mistaken. I think that Rawls's Veil of Ignorance argument is the biggest own goal in the history of moral philosophy. I also think it's a bit of a travesty that people think that Rawls came up with this argument. In fact, he acknowledged that he took it from Harsanyi and changed it a little bit.

Rawls's reasoning was as follows: utilitarianism is false, therefore we can infer that the setup Harsanyi chose was wrong in some way.

Robert Wiblin: He just thought that this was self-evident.

Will MacAskill: That's right. He could just appeal to intuitive cases and so on where utilitarianism conflicts so badly with moral intuitions. But he was attracted to this Veil of Ignorance argument as a form of argument, so he tweaked the initial setup instead. He said that rather than knowing the probability of being each person in society, all the information you get behind this Veil of Ignorance is that there is some person at a given level of welfare. You don't know how many people are at that level of welfare. It's a more impoverished informational state. You can say, “Okay, I know that someone is at welfare level 4, and someone else is at welfare level 100, but I don't know how many people are at welfare level 4, and how many are at welfare level 100.”

Robert Wiblin: Okay.

Will MacAskill: I think there are 2 big problems with this. One is that this seems really unmotivated. It seems like the natural way of setting up this Veil of Ignorance is that you know the chance of being everyone, because everyone counts equally. That's, again, part of the thought of impartiality in morality. Everyone counts equally. Whereas, if you're just forced to be blind to the fact that 100 people are at one level of welfare, and only 1 person is at another, it seems like you're not giving the 100 the weight they deserve relative to the 1.

The second problem comes from looking at the implications of the view he arrives at. What he argues is that in conditions of ignorance of probabilities, where all you know is that you might end up at this level of welfare or you might end up at this other level, but you don't know the probabilities, you'd use a decision rule called maximin, where you ensure that the worst possible outcome you can end up in is as good as possible. This would entail a moral rule (again, this is provable; from this setup it strictly follows) called leximin, which is where you structure society so that the worst off person in society is as well off as you can make them.

That might sound good. It sounds kind of egalitarian, but it has incredibly extreme implications. Suppose I can have 2 worlds. In the first, everyone in the world, literally everyone, all 7 billion people, have incredibly good lives, incredibly well off, but there's one person who's badly off, a single person. In the second world, everyone is as badly off as that worst off person. All those 7 billion people also have really terrible lives. But the worst off person from the first world has $1 more, is just slightly better off. Leximin, Rawls's view, would entail choosing that latter distribution rather than the former.

Robert Wiblin: Which is perverse.

Will MacAskill: Which is perverse. That just seems way more absurd to me than the utilitarian conclusions.
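Here is a scaled-down, hypothetical version of those two worlds (7 people standing in for 7 billion, with made-up welfare numbers), showing how leximin and total welfare come apart:

```python
# Hypothetical, scaled-down version of the two worlds (7 people standing in
# for 7 billion). Leximin compares sorted welfare levels from the bottom up,
# so World 2 wins on leximin even though almost everyone in it is far worse off.

world_1 = [1] + [100] * 7  # one badly-off person, everyone else flourishing
world_2 = [2] + [2] * 7    # worst-off person slightly better off, everyone else badly off

def leximin_prefers(a, b):
    """True if leximin ranks world a strictly above world b."""
    for x, y in zip(sorted(a), sorted(b)):
        if x != y:
            return x > y
    return False  # identical all the way down

print(leximin_prefers(world_2, world_1))  # True: leximin picks World 2
print(sum(world_1), sum(world_2))         # 701 vs 16: total welfare picks World 1
```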

Robert Wiblin: But, he must have immediately noticed that this was a conclusion of his theory, right? It’s not really any more common sense than what he was trying to reject.

Will MacAskill: That's my view, but he stood by this view. He did certain sorts of philosophical wrangling that make the view far less theoretically compelling and far more arbitrary. It wasn't the case that you literally look at the worst off member of society. Instead you look at a representative of the working class. You take some average among the bottom 10% or 20% and you ensure that they're as well off as possible.

Robert Wiblin: Okay…

Will MacAskill: Which is now just throwing away all the theoretical elegance that we had behind this view or this argument. You’ve suddenly got this view that sounds a bit like the first view you said, but isn’t what’s entailed by your original setup.

Robert Wiblin: It sounds like he started with political convictions and then was looking for a theory to justify it. Is that fair?

Will MacAskill: I think that's fair. The reason I think that's fair is that he's saying, “Well, this is reflective equilibrium. We're going between intuitions about particular judgements and theoretical considerations, and kind of going back and forth between the 2.” In that sense…

Robert Wiblin: He would concede that that was part of what was going on.

Will MacAskill: Yeah, I think that’d be a fair way to…

Robert Wiblin: Okay. And the third argument for utilitarianism?

Will MacAskill: Right. The third argument is… I should say that in all of these cases there's more work you need to do to show that this leads exactly to utilitarianism. What I'm saying is, these are the first steps in that direction.

The third argument is rejecting the idea of personhood, or at least holding that who is a person, and the distinction between persons, is morally irrelevant. The key thing that utilitarianism does is say that the trade offs you make within a life are the same as the trade offs that you ought to make across lives. I will go to the dentist in order to have a nicer set of teeth, inflicting a harm upon myself, because I don't enjoy the dentist, let's say, in order to have a milder benefit over the rest of my life. Intuitively, though, you wouldn't say you should inflict the harm of going to the dentist on one person in order to provide the benefit of a nicer set of teeth to some other person. That seems weird.

Robert Wiblin: It would be a very weird dental office.

Will MacAskill: It would be a weird dental office.

Robert Wiblin: Setting that aside.

Will MacAskill: Setting that aside, yeah. Now suppose that we reject the idea that there is a fundamental difference between me now and you now, of a kind that doesn't hold between me now and me at age 70. Instead, maybe it's just a matter of degree, or maybe it's just the fact that I happen to have a bundle of conscious experiences that is more interrelated in various ways, by memory and foresight, than this bundle is with yours. There are certain philosophical arguments you can give for that conclusion. One of which is what are called fission cases.

Imagine that you're in a car accident with 2 of your siblings. In this car accident your body is completely destroyed, and the brains of your 2 siblings are completely destroyed, but their bodies still function and are preserved. As you'll see, this is a very philosophical thought experiment.

Robert Wiblin: One day maybe we can do this.

Will MacAskill: Maybe. Let's also suppose that it's possible to take someone's brain and split it in 2, and implant it into 2 other people's skulls, such that each half will grow back fully and will have all the same memories as the first person did originally. In the same way as, I think, you can split up a liver and the 2 separate halves will grow back, or you can split up an earthworm – I don't know if this is true – and both halves will wiggle off.

Robert Wiblin: Maybe you could.

Will MacAskill: Maybe you could. You've got to imagine these somewhat outlandish possibilities, but that's okay because we're illustrating a philosophical point. Now you've got these 2 bodies that wake up and have all the same memories as you. From their perspective, they were just in this car crash and then woke up in a different… The question is, who's you? Supposing we think there's this Cartesian soul that exists within each of us, the question would be: into which body does the soul go? Or, even if you don't think there's a soul, you might think, no, there's something really fundamental about me. Who's the me?

There are 4 possible answers. One is that it's one sibling. The second is that it's the other sibling. The third is that it's both. The fourth is that it's neither. It can't be one sibling rather than the other, because there's a parity argument: any argument you give for saying it's the youngest sibling would equally be an argument for the oldest sibling. It can't be that it's both people, because, well, now I've got this one person that consists of 2 other entities walking around? That seems very absurd indeed. And it can't be neither either.

Now imagine the case where you're in a car crash and your brain just gets transplanted into one person. Then you would think, well, I continue. I was in this terrible car crash, I woke up with a different body, but it's still me. I still have all the same memories. But if it's the case that I can survive my brain being transplanted into one other person, surely I can survive if my brain is transplanted into 2 people. It would seem weird that a double win, a double success, is actually a failure.

And so, tons more philosophical argument goes into this. The conclusion that Derek Parfit ultimately draws is that there's just no fact of the matter here. This actually shows that what we think of as continued personal identity over time is just a kind of fiction. It's like saying, when the French Socialist party split into two, are there now two parties? Which one is really the French Socialist party? These are just meaningless questions.

Robert Wiblin: What’s actually going on is that there are different parties, and some of them are more similar than others.

Will MacAskill: Exactly. That's right. But once you reject this idea that there's any fundamental moral difference between persons, then consider the fact that it's permissible for me to make a trade off where I inflict a harm on myself now, or benefit myself now at the cost of perhaps harming Will at age 70… Let's suppose that that's actually good for me overall. Well, then I should make just the same trade offs across lives as I make within my own life. It would be okay to harm one person to benefit others. If you grant that, then you end up with something that's starting to look pretty similar to utilitarianism.

Robert Wiblin: Okay, so the basic idea is we have strong reasons to think that identity doesn’t exist in the way that we instinctively think it does, that in fact it’s just a continuum.

Will MacAskill: Mm-hmm (affirmative).

Robert Wiblin: This is exactly what utilitarianism always thought and was acting as though it was true.

Will MacAskill: Yes.

Robert Wiblin: But for deontological theories or virtue ethics theories, they really need a sense of identity and personhood to make sense to begin with.

Will MacAskill: That’s right. Another way of putting it is most non-utilitarian views require there to be personhood as a fundamental moral concept. If you think that concept is illusory, and there seem to be these arguments to show that it is illusory, you have to reject those moral views. It would be like saying we’re trying to do physics, but then denying that electrons exist or something. You have to reject the underlying theory that relies on this fundamental concept.

Robert Wiblin: Okay. Those are your 3 best arguments for utilitarianism. What are the best arguments against it?

Will MacAskill: The best arguments against are how it conflicts with common sense intuitions. Sometimes you get utilitarian apologists who try to argue… Henry Sidgwick was like this. They try to argue that, actually, utilitarianism doesn't differ so much from common sense at all. I think that's badly wrong. I think you can come up with all sorts of elaborate thought experiments, like: what if you can kill 1 person to save 5, and there are no other consequences, you'll get away with it, and so on.

I think you should take those thought experiments seriously, and they do just conflict with common sense. I think it also conflicts in practice as well. In particular on the beneficent side where most people think it’s not obligatory to spend money on yourself. They think that’s fine.

Robert Wiblin: But it’s not prohibited.

Will MacAskill: Yeah, that's right, sorry. It's not prohibited to spend money on yourself. Whereas utilitarianism says, “No, you have very strong obligations, given the situation you're in at the moment, at least if you're an affluent member of a rich country, to do as much good as you can, basically.”

Robert Wiblin: Which may well involve giving away a lot of your money.

Will MacAskill: A lot of your money, or dedicating your career to doing as much good as possible. Yeah, it's a very demanding moral view. That's quite strictly in disagreement with common sense, even more so when you consider that you're doing this to improve the lives of distant future people, and so on.

Robert Wiblin: Are there any other counter arguments you want to flag?

Will MacAskill: I think those are by far the most compelling. There are various forms of conflict with intuitions. One is that it doesn't care about side constraints. A second is that it's very demanding. A third is that it doesn't care about equality as an intrinsic value at all.

Robert Wiblin: It doesn’t care about many other things as intrinsic value. It doesn’t care about the environment as an intrinsic value.

Will MacAskill: Doesn’t care about the environment as an intrinsic value. Lots of the… Doesn’t care about knowledge as an intrinsic value.

Robert Wiblin: Justice.

Will MacAskill: Justice, yeah. Lots of these things you can explain as instrumentally valuable. The utilitarian thinks that having an equal distribution of resources is super good on instrumental grounds, because it means there's more welfare.

Robert Wiblin: But if it didn’t lead to more welfare, then they wouldn’t care.

Will MacAskill: Yeah, exactly. Similarly with knowledge. Super important to get lots of knowledge, but important because it improves people's lives, not for its own sake. This all comes down to people thinking that utilitarianism is too thin, or arid, a conception of morality.

Robert Wiblin: Another thing is you said the fact that utilitarianism doesn’t feature personhood as a fundamental issue or a fundamental part of the universe is an argument for it, but I imagine many people would see that as an argument against it because it’s just so bizarre.

Will MacAskill: Yeah, exactly. Rawls promoted this idea of the separateness of persons. That's his banner label for a set of criticisms which is all about saying, “No, the real problem with utilitarianism is that it treats decisions between people as isomorphic to decisions within a person, a person's life.”

Robert Wiblin: Do you think… Rawls was writing this stuff in the 50s and 60s, and I guess Derek Parfit did all of these personhood and identity cases in the 80s, in Reasons and Persons. Do you think Rawls would have been persuaded by those thought experiments?

Will MacAskill: I think ultimately, no. There’s big disagreement among philosophers on the nature of personhood. The idea that persons just don’t exist is a minority view.

Robert Wiblin: Minority view. That brings me to my next question. Philosophers just disagree about all kinds of things. They spread out over a very wide range of conflicting views on almost every topic. How much should we believe anything that comes from your field?

Will MacAskill: I actually think the level of disagreement among philosophers is greatly overstated. There are certainly some issues on which philosophers are in really quite remarkable agreement, in a way that's very different even from common sense. The clearest case to me is our obligations to non-human animals. 2/3 of philosophers believe it's obligatory not to… Compare that to the number who are actually vegetarian. I think 1/3 are vegetarian. There's a lot of people whose beliefs aren't in congruence with their actions.

Robert Wiblin: But at least they agree in principle.

Will MacAskill: But they agree in principle. Whereas, what would I guess the proportion is in society at large? Maybe 5% or 10%.

Robert Wiblin: I saw a survey of philosophers at one point, which I think had roughly 1/3 sympathetic to consequentialism, roughly 1/3 sympathetic to deontology, and 1/3 sympathetic to virtue ethics, or something else.

Will MacAskill: Mm-hmm (affirmative).

Robert Wiblin: Pretty widespread. I suppose there’s a lot of theories that aren’t on that list. Maybe there’s a lot of things that have been believed over the years that are now rejected, and now we’re down to 3 broad categories.

Will MacAskill: Yeah, that's true. Lots of people believe egoism, for example, which is just the view that you always ought to do whatever's best for you. Extremely unpopular view in philosophy. A lot of people believe a relativist view, which is just that whatever's right for me is what I believe to be right, and what's right for other people is what they believe to be right. Again, an extremely unpopular view. There are views that are somewhat similar that some moral philosophers believe, but that view is kind of… On certain practical issues there's more agreement than there might be at the theoretical level.

Robert Wiblin: Okay. Do you want to say anything else about the issue of peer disagreement within philosophy? Your approach is just to say a lot of things might be true, and let's think about what we should do if we're not sure.

Will MacAskill: Yeah, that's my view. I think it's important. I think it can be very easy to do a fake updating on the basis of disagreement from peers, where you only update on people who are still kind of similar to you, or you use your assumptions about what's correct to identify who counts as a peer or not. So lots of the time I hear from, particularly, virtue ethicists or particularists, and I'm just like, you could just be making noises at me for all I understand your view.

Then, when we talk about continental philosophers, it goes even further. I cannot understand why you would think this. In a sense…

Robert Wiblin: I can’t even understand what you’re saying.

Will MacAskill: Sometimes, yeah. Especially for continental philosophers. That, though, could be an argument in favor of updating more. Maybe my brain is just wired in a certain way such that I don't appreciate certain considerations.

Whereas, if someone just comes along and says, “Oh, you should believe prioritarianism with a square root function,” I know it's not a square root function. That's just a mistake.

Robert Wiblin: You understand the error in one case, but not in the other.

Will MacAskill: That's right, yeah. But I think the whole issue of peer disagreement is extremely thorny. And it is the case that, if you were just trying to read off philosophers' actual views from the literature, that's very hard to do, because there are so many biases in what gets published. I trust much more what people actually think rather than what they say they think in published articles.

Robert Wiblin: What are the biases there? One of them seems to me that new philosophers have an incentive to come up with some new view, even if it’s worse than the old views, because they have to stake out some new position in order to have a career. There’s no value in saying, “Yep, we were right 200 years ago about this, and it’s obvious.”

Will MacAskill: That's right. I think that applies to utilitarianism, for example. Nick Beckstead, who you've also had on the show, originally wanted to write a PhD on arguments for utilitarianism, and got strongly discouraged from doing that because it seemed like flogging a dead horse. Original arguments for an old view don't seem as exciting, exactly.

Another bias is one strong argument versus many weak arguments. In philosophy, you can't really publish an article that says, “Hey, here's this view. I believe it because of these 10 okay arguments, but they all point in the same direction.” Even though, epistemically, that's actually a much…

Robert Wiblin: Very strong ground.

Will MacAskill: Much stronger ground, compared to “here's this one really strong argument.”
Then there's just tons of things that you believe on the basis of, like, an overall worldview… It's similar to many weak arguments, but…

Robert Wiblin: But you can’t prove it deductively.

Will MacAskill: Yeah, that’s right. The form of the article would look very different.
Then there are just lots of things that don't get written about as much, because it's harder to demonstrate academic ability and so on in the course of doing it. Take work in practical ethics, for example: it's very hard to show incredible intellectual skills in that area compared to working on Newcomb's problem in decision theory, where you really can show off. That means that some areas are just systematically underdeveloped.

Robert Wiblin: Okay. So, the original question was what is your most unusual philosophical view. Are there any idiosyncratic positions that you take that you wanted to bring up quickly before we move on?

Will MacAskill: Yeah. Things not to dwell on too much, but I think we should take the idea of infinite amounts of value much more seriously than people currently do. I think there's this weird thing where there are some people in the effective altruism community who are really on board with the idea of broadly the total view of population ethics, vast numbers of people in the future – maybe it's ten to the power of a hundred lives or something. We should really be aiming to achieve that, and basically nothing else matters apart from getting to that goal. But the idea of infinite amounts of value, that's treated as just absurd. Who would be crazy enough to believe that?

Robert Wiblin: Don’t be silly, Will.

Will MacAskill: Whereas I think all of the arguments for thinking that we should be pursuing this finite but extremely large amount of value seem to also argue in favour of trying to produce infinite amounts of positive value as well. Then, that means you can go one of 2 ways. You can either say, “Oh, yeah, we should.” Maybe that leads to radically different conclusions because we're thinking about this very different aim. Or, instead, you can go the other way and say, “Well, actually, what this shows is that we really don't know jack about what's morally valuable. Instead what we want to do is preserve option value, and try to give ourselves time to figure out what's morally correct.”

That can still act as an argument for reducing extinction risk. It’s just buying us time to do hard work in moral philosophy. It’s a very different style of argument.

Robert Wiblin: Okay. I’m planning to have Amanda Askell on the show to talk about the infinite issues.

Will MacAskill: Oh, fantastic.

Robert Wiblin: Let’s move on from that. All right, so, we started talking about problems in academia. The bad incentives that academics have. I guess you’ve been a professor now for about 2 years.

Will MacAskill: That’s right, yeah.

Robert Wiblin: What are some of the downsides of being a professor? So many people want to become academics, right? It's a very romanticized profession, but the day to day life can actually have some serious problems.

Will MacAskill: Yeah, I think that’s totally right. I definitely think that most people who aspire to be a professor have a very different understanding of what it’s like compared to how it actually is. Certainly that was the case for me.

Most people who become philosophy professors… The first thing is just how hard it is to get that sort of position. It's like trying to become a musician or an athlete or something. You're doing something that a very large number of people want to do, so there's a very good chance you're not going to be able to find a position. A lot of it is randomness as well, because there are so many good candidates. I managed to get this position with very large amounts of luck, in terms of articles getting accepted into good journals, and in terms of who was on the hiring committee and their sympathy to my project. That's one big thing.

Secondly, there's what you actually end up doing. The vast majority of professors spend most of their time doing teaching and administration, at most universities. That's even true at Oxford. For most tutorial fellows, which is the position I used to be in, maybe 50% of your time is spent either doing teaching, including 101 and 102 teaching for first year undergraduates, or dealing with a lot of quite menial stuff like setting up tutorials for different students. Often things that seemingly could be done by people who aren't academics, or by software, actually. Universities are often a decade or more behind the technological state of the art. Often there are just boring bureaucratic meetings as well. I've definitely been part of many meetings where I could have been replaced with a sack of potatoes and it would have done as good a job.

Universities are these huge bureaucracies, and sadly that means there's just tons of waste in the system, in a way that most academics find very frustrating. Then it's also the case that you have to play this game of publishing articles, which has so many significant problems with it. For my second major publication, the length of time between me submitting it and it getting published was 5 years. Insofar as I actually think this stuff is really important, I want people to learn about this. I want people to build on this. I want people to criticize me so I can figure out where I'm wrong. If the lag is 5 years, I'm just not really going to learn very much.

Robert Wiblin: I was going to say, it's a bit of a shame if the article is highlighting a moral catastrophe and you're just waiting for it to get published… 5 years just sitting on the draft.

Will MacAskill: Exactly. Then, similarly, there are certain things that you can publish and certain things you can't. Very strong incentives to only work in areas where you can really demonstrate intellectual chops, at least until you get to the point where you're much safer in your intellectual position. Those are the ways in which…

The other thing is that lots of the things you might think you won't have to deal with in academia, unlike in the corporate world or something, you actually do. Things like, “How many people do you know? How wide is your academic network?” It's not as strong a predictor of success as in other fields, like the corporate world and so on, but it does make a difference. People like to hire people they know are good.

Robert Wiblin: You still have to kind of work the room, no matter how smart you are.

Will MacAskill: Yeah, obviously that’s a very kind of mercenary way of putting it, but people who go to a lot of conferences and get to know a lot of people, they do have more success. Often people want to go into academia because they really find that sort of stuff makes them sick. Sadly, it does happen.

A lot of the time, the life of an academic is much more constrained than you might think, but there are tons of really good things as well. Lots of the things academics want to do are just very hard to do otherwise. Because of the history and institutions that have been built around academia, you can get to a certain quality of research and thought that's very hard to reach elsewhere. You've got to jump through all of these hoops, like getting into a really top PhD program, getting a good PhD, submitting to these journals. But that means that, out of all the different information and different people you could be engaging with, you've got this filtering system, such that a lot of the hard work of figuring out what you should really be putting your attention on has been done for you.

My impression with intellectual research outside of academia is that you can make a ton more progress, more quickly, if you're focused on something that academics aren't focused on. So, take the question, “What charity does the most good with your money?” Academics aren't going to work on that question. If you want the state of the art on that, don't look for it within academia. Then there are questions that academics do work on, where you often get very high quality research, like meta-ethics, for example. When an area is suitable for academics to work on, the highest quality thought, it seems to me, is actually going to be found within academia. It's really hard to do that…

Robert Wiblin: I guess you could try to found a new academic discipline in figuring out what the best charity is, but that’s very difficult to do, even though it would be quite valuable if you could.

Will MacAskill: Yeah. We are trying to set up a new research field of effective altruism, or global priorities research, but if the field were ‘what's the best charity,’ top academics are not going to want to go into that, because it's not a very good way of demonstrating your intellectual chops. There are interesting theoretical questions, like the question of giving now versus later, or population ethics, or low probabilities of high amounts of value. There are tons of important open questions, but the ones that are going to get attention are the theoretical ones: the ones that are fundamental, that are going to have broader significance, and so on.

Robert Wiblin: I guess 80,000 Hours is kind of filling a very similar gap as well. It's not that we're smarter than academics. It's just that academics really haven't tried to answer the concrete applied questions that we're trying to work on.

Will MacAskill: Yeah, absolutely. You can think of it more like think tank work. The stuff that 80,000 Hours is producing is really good and really important and making a lot of progress in this area, it just doesn't have the right fit for academia. Not really. I kind of tried this. I did write an article on the case for earning to give, which did get published. But it's by far my least prestigious publication in terms of the venue it got into, compared to my other work. And I did have to shoehorn a lot of the ideas into a form that feels academically or philosophically appropriate. It is very constraining in that regard.

And sometimes that even applies to things that you might think philosophers should work on. What are the actual concrete implications of utilitarianism? Really trying to work them through. Remarkably, that just really hadn't been done. Peter Singer did it a bit. Actually, there was tons more that he could have said, whereas Felicifia, this utilitarianism forum, which was a bunch of smart people who were really concerned about this question… I actually think it was just better than the state of the art in…

Robert Wiblin: Even though a lot of them were just undergraduates.

Will MacAskill: Yeah, absolutely. Again, the question is just is there low hanging fruit.

Robert Wiblin: You mentioned earlier that universities are often a decade or two, or maybe 10, behind the state of the art in corporate efficiency…

Will MacAskill: Yeah the incentives are just misaligned all over the place.

Robert Wiblin: I guess the University of Oxford’s not going to go out of business from competition.

Will MacAskill: Yeah, exactly.

Robert Wiblin: Its feet aren’t held to the fire to have good operations.

Will MacAskill: And another thing is the admissions process. I've acted as an interviewer twice for Oxford's undergraduate admissions. It's a real shame that it isn't more evidence based. There are various factors to consider. One is school performance. The second is standardized tests that we ask the students to send. The third is an interview. There's actually data. One thing Oxford does that is great: it has gathered data on which of these things predict performance in final exams, and it gives interviewers the resulting algorithm. It turns out that the relative weight that you should give to the test versus the interview is 8 to 1. The tests are vastly better at predicting who does well than the interview.

But then what happens in practice? All the decisions are made immediately after you have interviewed people. That means that people have just had this huge bias from seeing someone right in front of them, including all of the potential stereotype biases that involves. It's extremely notable how much better at interview people from elite private schools are than people from state schools, in a way that's very unsurprising, because they have had those sorts of conversations over dinner every night of their lives, whereas people from comprehensive schools have often never had an intellectual conversation like the one they…

Robert Wiblin: Even if they could do a good test or write an essay about it.

Will MacAskill: Even if they could do a good test, exactly. Then, the decisions are made immediately afterwards. Many people don’t even know that there’s this algorithm because the idea is just buried in vast amounts of other…

Robert Wiblin: I suppose many of them aren't statisticians and don't appreciate how this functions.

Will MacAskill: Yeah, exactly. Or they aren’t aware of just how weak the evidence is for the predictive value of interviews. What I did was just put it in the algorithm and came up with a ranking, and accorded the interview the 1/9 weight that it deserved. The second time we did it, when I was the primary interviewer for philosophy, I just stood by that ranking and held my ground. Maybe at the border you can tweak it in various ways. Luckily that is how we made offers, in fact.

Robert Wiblin: Oh. So you managed to convince your colleagues.

Will MacAskill: I did eventually manage to convince my colleagues. But it was a several hour conversation about this, where the temptation was to say, “Look, I met this person and they seemed like a genius to me.”

Robert Wiblin: In the 20 minute conversation I had.

Will MacAskill: In the 20 minute conversation I had. Who cares about their grades, based on 2 years of work, or the test that's actually got amazing predictive power? And who gains here? It's incredibly costly for the academics and the universities.
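For concreteness, here is a sketch of what that kind of algorithmic ranking could look like. The only number taken from the conversation is the 8-to-1 weighting of test score relative to interview; the school-performance weight, the score scales, and the candidates are all invented for illustration, not Oxford's actual formula.

```python
# Hypothetical weighted-ranking sketch. The only weight taken from the
# conversation is the 8:1 ratio of test score to interview score; the school
# performance weight, score scales, and candidates are invented for illustration.

WEIGHTS = {"test": 8.0, "interview": 1.0, "school": 4.0}  # "school" weight is an assumption

candidates = {
    "Candidate A": {"test": 72, "interview": 90, "school": 80},
    "Candidate B": {"test": 85, "interview": 60, "school": 78},
}

def composite_score(scores):
    """Weighted sum of the component scores (assumed already on a common scale)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

ranking = sorted(candidates, key=lambda name: composite_score(candidates[name]), reverse=True)
print(ranking)  # ['Candidate B', 'Candidate A']: the strong test score outweighs the weaker interview
```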

Robert Wiblin: Oxford is also criticized, probably rightly, for not having a sufficiently diverse range of applicants, or people who are admitted, tending to take people from the upper class. It seems like changing the algorithm, or changing the process here, would be a really easy way to fix that and get smarter students.

Will MacAskill: That's right. You could get rid of admissions interviews. I think we would improve the quality of students that came to Oxford, I think it would be more diverse, and it would save vast amounts of time for academics. This is a good example of just being beholden to tradition. The terribly difficult Oxford interview, the Cambridge interview, is part of what creates this elite mystique around the university.

Robert Wiblin: We’ve done it this way for 400 years, since before we even had the mathematical tools to do a better job, so why not just continue.

Will MacAskill: Exactly, yeah. So there’s a huge amount of inertia.

Robert Wiblin: A couple of years ago you wrote an article for 80,000 Hours kind of discouraging people from going into philosophy, but then you've become a philosopher yourself. Would you want to revise what you had to say there at all?

Will MacAskill: I actually, I do want to revise it a little bit. Not because of my own case. I think it’s a terrible idea to make general recommendations on the basis of…

Robert Wiblin: N = 1

Will MacAskill: N = 1. Especially when the only reason you’re in a position to make recommendations like this is that you happened to have been successful. Actually, I could go on a long rant about this and…

Robert Wiblin: It’s like you’re getting careers advice from the world’s best marathon runner, who’s like, “Yes, anyone can do it. Just become a marathon runner. It’ll be great.”

Will MacAskill: Yeah, exactly. And this is huge in careers advice. I'm in this book, “Tools of Titans,” by Tim Ferriss, which is all about sampling on the dependent variable: looking at incredibly successful people and seeing what they did.

Robert Wiblin: And I guess they took very big risks, isn't that right?

Will MacAskill: Yeah. The Airbnb co-founders, they kind of boast about how they had these baseball card binders full of credit cards where they had taken out huge amounts of debt. But they were lucky, and they pushed ahead and then were incredibly successful.

Robert Wiblin: It's basically the equivalent of trying to decide whether it's good to go to the casino by only talking to people who won at the roulette table.

Will MacAskill: Absolutely, yeah.

Robert Wiblin: They would say, “You should bet as much as you can. Bet all of it on a single round.”

Will MacAskill: That's right. “Because that's what I did, and it was incredibly successful.” No, what I would love to see is a podcast called Stories of Failure, where they take people who looked propitious, like they were going to have a big success, and then it just crashed and burned, and really identify why that happened. I think it would involve lessons like, “Don't break the law.” That's my first lesson. To be successful, don't break the law. Breaking the law is a really dumb idea.

Robert Wiblin: Probably don't try heroin.

Will MacAskill: Right. Don’t take out loads of debt. This is not the stuff you get if you just look at the most successful…

That's right. I have been super lucky, there's no doubt about that. I can talk about many instances where I've been very lucky in terms of getting the position that I got. I do think… In the article, what I said was 2 things. One, I think the value of doing philosophy, in particular moral philosophy, is extremely high, if you look at the track record and look at the mean contribution of philosophers rather than the median. I think the median's close to zero. The mean is extremely high. You've got Aristotle. You've got Mill, Bentham. We're working out what we ought to do as a society. Given how incredibly important that question is, it gets almost no attention in terms of use of resources.

The question, though, is just, “Can you become a philosopher?” There, the prospects are really tough. Even if you get into a top philosophy program it's very hard to then move into a research position. I think since then the case has become somewhat better, mainly because effective altruism ideas have become… If this is what you want to work on, those ideas are becoming more mainstream. We're more able to bring in funding for organizations like the Future of Humanity Institute or the Global Priorities Institute. There's a greater potential to have nonstandard academic routes if you're working on these topics, which is very important, because there are people who see the value of doing this sort of research. That's at least more compelling.

The second thing that's different now is the ratio of philosophers to other people in the community… When effective altruism started, talking about being non-diverse, we were mainly philosophers.

Robert Wiblin: 100% philosophers.

Will MacAskill: Yeah, that’s right.

Robert Wiblin: Maybe we should branch out into having an economist.

Will MacAskill: Yeah, exactly. Whereas what's happened is that the number of philosophers has grown very slowly since a couple of years ago. It really took off in philosophy early on, and then it kind of saturated a little bit, whereas effective altruism in general has grown a lot. So I actually think marginal philosophers have a ton to contribute.

Robert Wiblin: I suppose another thing is it seems like even if you can’t get a good philosophy academic job, there’s a really strong track record of people who’ve studied philosophy and done pretty well going out and doing really useful research in other areas.

Will MacAskill: Yeah.

Robert Wiblin: Maybe not in academia, but in a think tank or in a foundation or something like that.

Will MacAskill: Yeah, or the Open Philanthropy Project has just hired a number of philosophers. Then, the question is, “Is it still the best thing to go into?” I think for someone who could go into either philosophy or economics, even if it took an extra year because you did a conversion course or something, it still does seem that the case for going into philosophy would just be if you're like, “I am a philosopher. This is what I think about all the time. When I'm taking time off what I want to do is read philosophy. I don't have much interest in other disciplines.” Then, still go into philosophy. If you're more unsure either way, it's hard not to make the case for economics. The reason is 3-fold.

One is just that there are way more jobs in economics. The ratio between academic jobs and PhDs is much closer to parity. The second is that there are much better options outside academia. You can go into government, go into think tanks, and so on. There's much more demand there. Thirdly, there are even fewer economists than there are philosophers in EA, and I think we'd benefit a lot from having more of them.

I do just think the skills you learn are better. I think philosophical skills are really important, but…

Robert Wiblin: They’re a bit fragile, I think. Economists learn to kind of weigh evidence from many different sources, do a bit of theory, a bit of empirical work, which is more what you’re doing in day to day life.

Will MacAskill: Yeah, that’s right. And, finally, it’s easier to switch from economics to philosophy. My old supervisor was an economist originally and then became a philosopher, but I don’t see how it could be possible to move from philosophy in…

Robert Wiblin: Okay, so, let’s move on from academia. I think it might be great to get you on next year to talk about how you can go about trying to become an academic. Perhaps even a public intellectual, getting into the media, writing books, that kind of thing.

Will MacAskill: Yeah, sure.

Robert Wiblin: That’s kind of a big topic in itself. Let’s talk a bit about the EA community. You’ve been involved, I guess, for 6 years, kind of since the beginning. You’ve seen how it’s changed over the years. What kinds of things would you like to see it do differently? Do you think we’ve gone down any bad directions that we need to correct?

Will MacAskill: Yeah. I think there’s a few things. The biggest one at the moment seems to be this discrepancy, or lag, between different groups. On the one hand there are the most core people in effective altruism, often people working at organizations like Open Philanthropy, 80,000 Hours and the Center for Effective Altruism, and people very close to them. Then there’s a wider effective altruism community that may be somewhat engaged in pursuing effective altruism in their own lives. Then, even further out, there are people external to the community who see it and form a view of what we’re doing.

Robert Wiblin: People who have just read 1 or 2 news articles about it.

Will MacAskill: Yeah, exactly. There’s a really big spread here. Among people in the core there’s a very striking amount of agreement on a couple of things. One is taking the long-termism perspective very seriously, meaning that most of the value of the actions you take today lies in how they impact the very long run of human civilization. That means doing things like trying to reduce existential risks and trying to promote a really flourishing civilization, with a particular focus on biological risks and artificial intelligence. Basically, there’s very wide agreement on that within the core.

Robert Wiblin: Yeah, we recently did a survey of people who are a core part of effective altruism. It seemed like about 80 or 90% agreed with that view.

Will MacAskill: Yeah, that’s pretty notable. Now go all the way to how, say, the media perceives EA, which is almost exactly the opposite, remarkably. The core view is informed by a lot of theoretical considerations and philosophical arguments. The media perception of EA, which I’m sure I helped to contribute to, though many years ago now, is that EA is only about looking at short-term benefits that you can measure very clearly using randomized controlled trials, and about earning as much as possible in order to donate to those sorts of organizations. Even though the core is now placing a lot more importance on doing direct work rather than earning to give: Open Philanthropy is advising a 10 billion dollar foundation, and we’ve done very, very well at raising money and have this funding overhang.

Between the core and the outside, you’ve got this spectrum of how similar people’s views are to the media’s view: that EA is about earning to give to donate to RCT-backed charities and doesn’t care about the long run at all.

Robert Wiblin: Or about politics, or changing laws, or changing cultural norms.

Will MacAskill: Politics is another one where, again, people in the core community take very seriously the importance of ensuring that political institutions work really well, and that political actions are sensible and positive from a very long-term perspective. Again, the stereotype people see is, “Oh, EA doesn’t care about that at all because it’s not…”

Robert Wiblin: To be honest, I used to read articles in the mass media about effective altruism, and I kind of had to stop just for my own health. It was bad for my blood pressure to read people condemning the thing that I’m a part of on the basis of their understanding it to be the complete opposite of what it actually is. It’s infuriating.

Will MacAskill: It’s really frustrating. It’s kind of half inevitable: it’s a bit like light from a distant star, in that people in the media may be looking back at what’s now 6 years ago, at the views we were championing early on. Partly I think it’s just laziness in terms of not engaging with what we actually think. There still is this lag; there are tons of views that we haven’t codified, in-fighting and so on. But we’ve definitely been making a bunch of progress. If someone were actually to speak with core people and get up to speed with the articles 80,000 Hours has been releasing, which have been excellent, they would see that this stereotype is badly…

Robert Wiblin: To give a sense of how bad it can be, there’s someone who wrote years ago that we were too similar to Charity Navigator, this website which just looks at whether charities are fraudulent or not.

Will MacAskill: That was just never…

Robert Wiblin: We were like mortal enemies at the time.

Will MacAskill: Exactly, that’s right.

Robert Wiblin: We had written furious articles criticizing one another.

Will MacAskill: Exactly.

Robert Wiblin: That’s the level of engagement that you often get. They don’t even know that we’re like the antithesis of this other thing that they’re saying that we’re exactly the same as.

Will MacAskill: Yeah, yeah, that’s right. That kind of percolates into the wider community as well. One of the reasons we set up Effective Altruism Funds at the Center for Effective Altruism was so that we could give a more accurate representation of the sorts of causes the really core people think are most important, which includes taking the long-run future and trying to ensure that goes as well as possible, and farm animal welfare as well.

Robert Wiblin: Improving people’s decision making ability, especially in government or important institutions, that kind of thing.

Will MacAskill: Yeah, absolutely. That’s the second side of it: people see the long-run-future people and think of this loony view, “The only thing that matters is whatever tiny increase in probability you can make to there being ten to the hundred lives in the future; that’s the only thing we should be working on.” That’s again just not really the view of the core people, which is much more like, “Yes, we think the long-run future is very important, but don’t be crazy about this. We’ve got tons of uncertainty. Here are the ways we could be wrong. These are the things we should be exploring.”

I’m very interested in what Nick Beckstead called broad existential risk reduction. That means general societal improvements, like improving voting systems, the competency of people who get into positions of power, or our ability to coordinate: improvements that look good across a very wide array of outcomes for the future of civilization. Such that they look like a really good thing even if we’ve done a terrible job of predicting which technologies are going to be important over the coming decades. That’s just a very different tone and spirit from this straw man view of the future.

Robert Wiblin: That’s how the media is misunderstanding us. What have been our own biggest mistakes?

Will MacAskill: Yeah. In my own case, when I think of the big mistakes, one was the aggressive marketing around Earning to Give. That was almost a bit childish, is how I think about it now. We really suffered the cost of that; it lingered.

Robert Wiblin: People to this day think that 80,000 Hours is all about Earning to Give, whereas we’ve explicitly written articles saying we think most people shouldn’t earn to give.

Will MacAskill: That’s right. That comes from being more contrarian in our initial marketing, which is now 6 years ago. I think we didn’t appreciate just how long certain bad messages can stick around for.

Robert Wiblin: Maybe when I’m 50, people will still be telling me that 80,000 Hours thinks everyone should Earn to Give. It will never end. We’ll be punished forever.
Will MacAskill: Yeah, exactly. From my own perspective, there were issues coordinating with other groups in the early days that were based in different places. I wish I’d talked to GiveWell so much more in the early days of Giving What We Can. They were all the way over in New York at the time, and that just created this unnecessary division between us, and certainly with other groups as well.

Robert Wiblin: It means that the only time you talk is when there’s a real issue, when someone’s frustrated someone else.

Will MacAskill: Exactly, yeah. Especially because we were young; I was 23. There were some things, like this famous controversy over the figures Giving What We Can was using for the cost to avert a DALY for some programs. GiveWell did tremendous research really digging into that and showing that, surprisingly, the academic estimates were based on very flimsy evidence; little mistakes were being made. Giving What We Can was incredibly sluggish to respond to that, mainly because we had a poor understanding of it. That wasn’t malice or any intent to mislead people; we were just like, “Oh, we don’t know how to deal with this.”

I think also, something still worth changing is being quicker to ensure that the messaging we’re broadcasting presents the views of at least the cutting edge of EA. Perhaps we could be quicker at updating it so that there’s less of a lag.

Robert Wiblin: Speaking of the cutting edge, there are a bunch of touchy issues that EA can end up delving into. An example that jumps to mind: many people involved in effective altruism are concerned with the suffering of animals in the wilderness, and end up saying, “Maybe the wilderness isn’t as good as it’s cracked up to be. In fact, it could be very bad, at least for some species.”

Will MacAskill: Yep.

Robert Wiblin: If you try to put that into the mass media, you get very acerbic pushback. A lot of people place a lot of intrinsic value on nature, and you’re saying something they think is sacred is actually not good. There are other examples like that. Whenever you’re seriously engaging in philosophy, sometimes you’re going to reach counterintuitive conclusions. Is this something we should embrace? Should we talk about these controversial, interesting philosophical cases a lot? Peter Singer has done this throughout his career, and he’s caught a bit of controversy here and there.

Will MacAskill: Yeah. I think there are 2 things to say. The first is that it’s absolutely crucial to be able to responsibly explore all sorts of ideas. Precisely because, again, looking into the past: take the idea that homosexuality should be decriminalized. Someone like Bentham saying that wouldn’t just have been seen as a weird guy; he’d have been seen as a monster, and actually would have gotten tons of vitriol as a result.

Robert Wiblin: He could have gone to prison.

Will MacAskill: He could have gone to prison, yeah, perhaps. So it’s extremely important to be able to explore these things; we have a moral responsibility to explore ideas, even if they seem weird, even if they seem toxic. That doesn’t mean anything goes, that we should shout out whatever contrarian things we like. It definitely doesn’t mean we should take joy in contrarian ideas. What I think is that we should have a floating parameter for how much care you take over a given message.

If you’ve got something that’s just not very important, and potentially very controversial or even offensive to people, then there’s no need to talk about it. If it is important, though, it is important to talk about it, in which case just take a lot of care over how you’re saying things. Think about how people could react and respond. Unfortunately, even then it can go wrong. I think Jeff McMahan’s piece on wild animal suffering was very careful. He went to great pains to say, “I’m not saying at all that we should do anything about this. I’m not saying we should intervene in nature at all. We’ve got no idea what the consequences could be.” He said that over and over again. But he still got huge amounts of flak from people saying, “You’re saying we should just destroy nature or kill all predators, or something,” showing they hadn’t even read the article.

It’s hard to do, but you can at least try your best, and I think it is really important to do that. If you’re being a contrarian, taking delight in the fact that people are getting riled up, you’re doing a disservice to those really important ideas.

Robert Wiblin: I think effective altruism tends to, unfortunately, attract people who love controversial debate and exploring controversial ideas.

Will MacAskill: Yeah.

Robert Wiblin: I was like that when I was younger. You don’t say.

Will MacAskill: Would have never seen it.

Robert Wiblin: I think the thing that’s really bad is when you find what might be an important idea that’s kind of controversial, and then, for attention or for the pleasure of pointing out how people are wrong, you make it seem more controversial than it is. And you’re very glib about it and don’t even really explain and justify the view, which is something I’ve been guilty of, especially when I was younger.

Will MacAskill: Yeah.

Robert Wiblin: It’s very harmful and mean spirited, I think. It’s very immature.

Will MacAskill: Very harmful. As an example, take the arguments for the importance of reducing extinction risk. Many people in EA were very slow to realize the power of those arguments because they’d been presented in an unduly contrarian way: as, “Well, even if the probability is 1 in a million that you can do anything about this, then because the amount of value is so great you should still do it.” For most people that’s just not a very compelling argument at all. The other arguments are much better. And so you’ve done a disservice to this very important idea.

Robert Wiblin: Are there any other problems or mistakes being made by the EA community that you think we need to address pretty fast?

Will MacAskill: Yeah, I think one thing which EA’s taken some flak on, but which I do think is important, is diversity, of 2 types. One is racial and gender diversity; the second is epistemic diversity. I think both are very important, for different reasons. The first is just that we want this community to be big and influential because we think these ideas are really important. In my own case, suppose I thought, “These ideas are really exciting,” and then I turned up and everyone was black and making cultural references I didn’t understand, and so on. Human psychology is just going to make it natural that you feel, “This is less my people. Maybe I don’t fit. I feel unwelcome,” and so on.

And it’s even more extreme the other way around. I do think this is something we should be aiming towards. I think it’s tricky in some ways, especially because effective altruism is, in significant part, about what to do if you’re in a position of privilege. It’s no coincidence that we’re targeting people who have tremendous resources they don’t need, which could be used to do a ton of good.

Robert Wiblin: They’re the people who have the greatest obligation to do this stuff because they don’t need to look after themselves anymore.

Will MacAskill: That’s right. Absolutely. We’re not targeting people who are living on the poverty line in rich countries; they’ve got their own shit to deal with. I think the key thing, and the most important thing, which I think is good anyway, is ensuring that the EA community is very welcoming and very friendly to people of diverse backgrounds. That means limiting the use of jargon and not letting it become a shibboleth or password for entering the community. It means ensuring that if someone comes and says something mistaken, people don’t jump on them and criticize them a ton, but respond in a much more friendly way. Also, just acknowledging uncertainty a lot more can help with these aims quite a bit.

Robert Wiblin: Just generally being welcoming and not being arrogant, which is an easy trap to fall into.

Will MacAskill: Yeah, absolutely.

Robert Wiblin: Especially if you’re hanging out with people who tend to agree with you.

Will MacAskill: Yeah, exactly. I’m trying to really avoid both cultural and epistemic groupthink…

Robert Wiblin: Finally, and I know you’ve got to go in a minute, what’s the best argument against effective altruism?

Will MacAskill: Great. I think there are a couple of different ways you can criticize EA. One is just saying the theory is wrong. The second is saying, “Theoretically it makes sense, but in practice there are some big mistakes.”

On the theory being wrong, the most compelling way in which it could be badly wrong is if we’re under obligations of justice that are so strong that thinking about beneficence, or doing good, is just kind of wrong; instead, what we ought to be doing is rectifying injustice. The analogy is: suppose you’re very rich, and that’s because you stole a million dollars. Then I say, “Well, what you ought to be thinking about is how you can do as much good as possible with that million dollars.” Many moral views would say that’s not the right way of thinking. Instead you ought to be rectifying that injustice: you ought to give back the money.

Possibly, at least, the same is true for us in rich countries. We inherited this wealth; we don’t own it. The causal history of that wealth has tons of really dodgy aspects to it, involvement in colonialism and so on. On this view, we ought to be using our resources as much as we can, but to rectify that injustice rather than to do good.

Now, I take that perspective pretty seriously. I’ve tried to think about it as much as I can. It doesn’t seem to me personally that that actually comes apart very significantly. One of the injustices is the horrific suffering we’re inflicting on farm animals. One of the injustices is that we’re kind of playing Russian roulette with the future of human civilization.

Robert Wiblin: That could kill lots of other people who aren’t responsible for it.

Will MacAskill: That could kill lots of other people, exactly, that’s right. Or take injustice where we’re focusing on the global poor: you can’t literally give money back to the people it was stolen from, because they’re not around. What can you do instead? It’s at least a safe option to say, “Well, I’m going to use it in whatever way will benefit them,” with health and so on. You might say, okay, you have extra worries about paternalism there; but again, health and cash transfers look pretty good on anti-paternalistic grounds.

Robert Wiblin: You might not know exactly who was treated most unjustly, but it’s a reasonable bet that it’s the people who are worst off.

Will MacAskill: Yeah, exactly. That’s right. That’s at least safe.

The second is, well, EA in practice. A couple of things here. One is just we’re trying to do the most good, and that’s meant to be pretty theory neutral. Then, as it happens, I think many people in EA at least are very sympathetic to consequentialism, or at least have the brains that’s sympathetic to consequentialism, even if they take other moral viewpoints very seriously. Clear that could introduce some additional biases in terms of how we’re reasoning about things. There the idea would be maybe if you have this thick non-consequentialist view or a very different way of reasoning, you’d end up coming to a very different conclusion. I think we don’t want to pretend that we’re saying, “Any way of doing the most good will therefore lead to this particular set of recommendations.” Instead, sometimes being explicit like, “What we’d love is people with very different value systems to try and go through this entire process.”

It’s a bit of a shame that the criticisms so far, from people who do find the whole approach weird, have instead been grenades lobbed from a distance.

Robert Wiblin: Whereas we might like them to say, “I don’t agree, and here’s what you should think instead.”

Will MacAskill: Yeah, exactly. I’d be so interested in that. Even when I put on my non-consequentialist hat and try to reason it through, it seems to me like the things we recommend are still pretty good.

Robert Wiblin: Maybe you’re just kidding yourself.

Will MacAskill: Yeah, maybe I’m still biased in certain ways. I do weigh that. For example, with wild animal suffering: if it were the case that you were really interfering in nature, and that really conflicts with an environmentalist worldview, I would think that’s a reason against doing it, because it’s not morally cooperative with other sets of values, and there’s a moral uncertainty reason against it.

Robert Wiblin: And a reason to delay because it’s irreversible.

Will MacAskill: Yeah, absolutely. Maybe a second argument could be this, and we have already been moving in this direction. Effective altruism definitely started off with a focus on charity and individual donation. The argument could have been: we’re talking to relatively affluent people, people with potentially a lot of influence, and the amount of influence you can have in expected value terms by forgetting about the charity stuff and just trying to influence what governments do is so much greater, even though you can’t get reliable feedback and it’s going to be low probability, that going around saying, “Hey, give to charity, look at all this good stuff you can do,” was maybe actually bad because it takes attention away from that.

Again, I don’t think that is true, because I think we’ve managed to create this community of people who are really dedicated to improving the world and are now thinking very seriously about, and taking action on, how to do that with respect to improving… But it’s…

Robert Wiblin: I guess another possibility is we’ve just chosen the wrong problems to try to solve. I guess that’s the idea of Cause-X that you kind of promoted.

Will MacAskill: Cause-X, that’s right. I can easily get myself into a state where I think it’s overwhelmingly likely that we’ve chosen the wrong problems to focus on, just because I’m so aware of how impoverished both our moral and our empirical knowledge is. The idea of Cause-X is some cause, maybe one we haven’t even conceptualized yet, maybe something we’ve seen and dismissed, that is of far, far greater moral importance than even the top causes we promote just now. And I think one thing we should really be trying to do as a community is figure out what that might be.

Robert Wiblin: Yeah, maybe next time we can survey some of the options there. See if any of them are plausible.

Will MacAskill: Love to do that.

Robert Wiblin: My guest today has been Will MacAskill.

Will MacAskill: Thank you.

Robert Wiblin: Thanks for coming on the 80,000 Hours podcast Will.

Will MacAskill: Thanks so much.

Robert Wiblin: If you found that fun, please post the episode to Facebook, Twitter, Instagram or the blockchain. Or message a friend letting them know about the show and recommending they check it out.

Keiran Harris helped edit and produce today’s show.

Thanks so much, talk to you next week.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths - from academics and activists to entrepreneurs and policymakers - to analyse the case for working on different issues, and provide concrete ways to help.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected]
