In 2003, the United States chose to invade Iraq. Most now agree this decision was deeply flawed, costing trillions of dollars and hundreds of thousands of lives.

Exactly what went wrong here is a contested and controversial issue. At best, the decision-making process severely lacked rigour, and at worst, it was heavily biased.

The government justified the invasion on the basis of the intelligence community’s claim that it was “highly probable” that Iraq possessed Weapons of Mass Destruction (WMD) – but this statement was ambiguous. Policymakers took it to indicate near-100% certainty, and made decisions accordingly.1 But “highly probable” could just as easily be interpreted as 80% certainty, or 70% – interpretations with very different practical implications. Those involved didn’t really think through the relevant probabilities, consider how likely the estimates were to be wrong, or weigh the implications if they were.

Others have suggested that the US had already decided to invade Iraq, and that this decision influenced intelligence collection – not the other way around. This is a particularly extreme example of what’s known as motivated reasoning – a tendency to reason in ways that support whatever conclusion one wants to be true.

The call to invade hinged on the subjective impressions of a few key people – subjective impressions that later turned out to be wrong, made by people with complex motivations.

What if you could help prevent similar mistakes in the future?

When we think about doing good in the world we usually think about solving specific problems, and doing so better than existing institutions and organisations. But you could also improve the world in a different way: by making it easier for key institutions and decision-makers to solve problems. This might involve helping people avoid common thinking errors, better evaluate expertise, or make more accurate predictions. It might also mean finding ways to change the incentives of big organisations to make it easier to do all these things.

One advantage of this approach is that, if successful, it could enable humanity to better tackle many different problems – including those we haven’t even noticed yet.

In this profile, we cover some of the different ways to help important institutions have a much greater positive impact through improved decision making.


Summary

Governments and other important institutions frequently have to make complex, high-stakes decisions based on the judgement calls of just a handful of people.

There’s reason to believe that human judgements are flawed in a number of ways, but can be substantially improved using more systematic processes and techniques. Improving the quality of decision making in important institutions could improve our ability to solve almost all other problems. It could also improve society’s ability to identify “unknown unknowns” – problems we haven’t even thought of yet – and to mitigate global catastrophic risks, which we believe is especially important.

There are very few people explicitly trying to improve the decisions of important institutions, which suggests extra work could be particularly valuable.

This seems like a very promising option if you have a strong personal fit for the kind of research required to develop new ways of improving decision making, or if you’re well-placed to work in influential institutions and test out what we already know. It’s also a good option if you’re currently unsure about what specific problems are most pressing, since improved decision making can be applied to almost any area.

Our overall view

Recommended

This is among the most pressing problems to work on.

Scale  

We think work to improve institutional decision making could have a large positive impact, by leading to much more effective allocation of resources by governments, faster progress on some of the world’s most pressing problems, and reduced risks of conflict or global catastrophe via better intelligence. We estimate that making institutional decision making near-optimal would increase the expected value of the future by between 0.1% and 1%.

Neglectedness  

This issue is moderately neglected. Current spending is unknown. There are ~100–1,000 people working on it full time, depending on how you count. A much larger number of researchers and consultancies work on improving decision making broadly, but relatively few focus on robustly testing the most promising techniques, or on implementing proven strategies in the highest-leverage areas.

Solvability  

Making progress on improving institutional decision making seems moderately tractable. There are already techniques with strong evidence that they improve decision making, and past track records suggest that directing more research funding to the best researchers in this area could yield additional insights quite quickly. However, it’s currently unclear how easy it will be to get improved decision-making practices implemented in crucial institutions, and this second step could turn out to be a large challenge. We expect that doubling the effort directed toward optimising institutional decision making would take us around 1% of the way there.

Profile depth

Exploratory 

This is one of many profiles we've written to help people find the most pressing problems they can solve with their careers. Learn more about how we compare different problems, see how we try to score them numerically, and see how this problem compares to the others we've considered so far.

What is this problem?

Our ability to solve problems in the world relies heavily on our ability to make high-quality decisions. We need to be able to identify which problems to work on, to understand what factors contribute to these problems, to predict which of our actions will have the desired outcomes, and to respond to feedback and change our minds.

There’s plenty of reason to think that our decision making competence is currently less-than-perfect, as psychology research over the past few decades has documented a whole host of cognitive biases affecting judgements and decisions.

For example, when we try to judge our chances of success, we focus too much on all the reasons why our case will be different from average: despite the fact that most startups fail, most prospective entrepreneurs are convinced they will be the unusual case that succeeds.2

We’re also often overconfident in our predictions3 – it’s been argued that unwarranted confidence contributed to the explosion of the space shuttle Challenger, for example, where NASA overruled the safety concerns expressed by an engineer.4

Many of the most important problems in the world are incredibly complicated, requiring an understanding of complex interrelated systems, an ability to predict the outcomes of different actions, and a capacity to balance competing considerations. That means there’s all the more room for errors in judgement to slip in. Even experts in political forecasting often do worse than simple actuarial models when estimating the probabilities of events up to five years in the future.5 The organisations best placed to solve the world’s most important problems are also often highly bureaucratic, meaning that decision-makers face many constraints and competing incentives, not always aligned with better decision-making.

We think that improving the decision-making competence of key institutions may be particularly crucial, as the risks we face as a society are rapidly growing.

With technological developments in nuclear weapons, autonomous weapons, bioengineering, and artificial intelligence, our destructive power is quickly increasing. Crises resulting from war, malicious actors, or even accidents could claim billions of lives or more.6

It’s not clear that individual decision-making, or the structure of key institutions, has evolved at anything near the pace needed to manage these potential crises – our institutions and decision-making processes look pretty similar to those that failed in the first and second world wars, and yet the worst-case scenarios they need to mitigate are several orders of magnitude larger.

But there’s some good news. Research is beginning to focus on techniques to improve human judgement and decision-making. Researchers are studying how to improve our ability to make predictions about the future, how to better think probabilistically, and how to think about complex problems in a more structured way.

It seems like developing and applying strategies that improve human judgement and decision-making could be very valuable – particularly if focused on institutions working on particularly important problems, and combined with a thorough understanding of how such institutions operate.

Why work on this problem?

Improving decision-making could help us to solve almost all other problems

Taxes are pretty boring. So most people in the UK could be forgiven for not noticing when the British government’s Behavioural Insights Team (BIT) started running some simple randomised controlled trials (RCTs) to increase tax repayments in 2010. But these simple trials had a big effect – bringing forward over £210 million of extra revenue for the government in just a year.7

This first trial helped BIT to make the case for more evidence-based policy – now, RCTs are being used to improve the quality of decision-making across all policy areas. Seven years after the first tax trial, BIT have run over 300 RCTs across all areas of policy (including crime prevention, giving and social action, and counter-extremism).8
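
To make the mechanics concrete, here is a minimal sketch of how the headline result of a two-arm trial like this might be checked for statistical significance. The letter variants and all numbers are hypothetical, purely for illustration – they are not BIT’s actual figures:

```python
import math

# Hypothetical data: how many recipients of each reminder-letter
# variant paid their tax by the deadline.
control_paid, control_n = 3_340, 10_000   # standard letter
treat_paid, treat_n = 3_510, 10_000       # revised letter

p1, p2 = control_paid / control_n, treat_paid / treat_n
pooled = (control_paid + treat_paid) / (control_n + treat_n)

# Two-proportion z-test: is the difference in payment rates larger
# than we'd expect from chance alone?
se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / treat_n))
z = (p2 - p1) / se

print(f"payment rates: control {p1:.1%}, treatment {p2:.1%}")
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

Real policy trials involve much more care – randomisation checks, pre-registration, multiple outcomes – but the underlying comparison is this simple.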

This is a good example of how improving judgement and decision-making improves our ability to solve almost all other problems. If we could improve the judgement of government officials facing high-stakes decisions – reducing their susceptibility to various biases, or developing better methods of aggregating expertise – this could have positive knock-on effects across a huge range of domains. For example, it could just as well improve our ability to avert threats like a nuclear crisis, as help us allocate scarce resources towards the most effective interventions in education and healthcare.

The potential for improvements in decision-making to impact a broad range of important issues means it’s a particularly good approach to focus on if you’re uncertain about what problems are most important.

There are several ways to make progress on improving decision-making

Good Judgement Inc runs prediction contests where participants compete to forecast the future.

Research so far has made some progress identifying techniques that reliably improve judgements and decision-making, and where the evidence is good, there does seem to be growing interest in getting those techniques implemented in practice.

Philip Tetlock’s work on forecasting,9 for example, has identified a number of ways to improve the accuracy of predictions on real-world events, which have been tested in large-scale RCTs. A separate research programme on prediction markets suggests getting people to “bet” on their predictions can increase the accuracy of political forecasts, corporate predictions, and statistical weather forecasts.10
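
Forecast accuracy in Tetlock’s tournaments is scored with the Brier score – the mean squared difference between the probabilities a forecaster assigned and what actually happened. Here is a minimal sketch of how it works, using made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and outcomes:
    0 is perfect, 0.25 is what always answering "50%" earns on binary
    questions, 1 is maximally wrong."""
    errors = [(f - o) ** 2 for f, o in zip(forecasts, outcomes)]
    return sum(errors) / len(errors)

# Made-up forecaster: probabilities assigned to five binary events,
# paired with whether each event actually occurred (1) or not (0).
forecasts = [0.9, 0.7, 0.2, 0.6, 0.95]
outcomes  = [1,   1,   0,   0,   1]

print(brier_score(forecasts, outcomes))  # ~0.10, beating the 0.25 chance baseline
```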

(Incidentally, we interviewed Tetlock about his research, when to trust experts, and his career advice for people working on improving institutional decision-making on our podcast.)

With increasingly solid evidence for these effects, getting them practically implemented seems achievable – there is already interest in forecasting techniques in the intelligence community, for example.11 One promising next step here would be to take findings that we have solid evidence for, and run smaller-scale pilots in more specific contexts (i.e. specific organisations or parts of government.)

For instance, people’s judgements can be dramatically overconfident, leading to poorly-informed decisions – unjustified confidence in Iraq’s possession of WMDs had dramatic consequences, as we’ve seen. Part of the problem here is that people aren’t very well calibrated – our brains don’t seem to have a good intuitive sense of what it means to say something is, say, “70% likely”, and as a consequence, the statements we make with 70% confidence turn out to be true much less than 70% of the time.
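
Calibration is straightforward to measure: bucket someone’s claims by their stated confidence, and check how often each bucket turned out to be true. A minimal sketch with invented data, showing the typical overconfidence pattern:

```python
from collections import defaultdict

# Invented record of (stated confidence, whether the claim was true).
judgements = [(0.5, True), (0.5, False),
              (0.7, True), (0.7, False), (0.7, False), (0.7, True),
              (0.9, True), (0.9, True), (0.9, False), (0.9, False)]

buckets = defaultdict(list)
for confidence, correct in judgements:
    buckets[confidence].append(correct)

# A well-calibrated judge is right about X% of the time when "X% sure".
for confidence in sorted(buckets):
    hits = buckets[confidence]
    print(f"said {confidence:.0%} sure -> right {sum(hits) / len(hits):.0%} "
          f"of the time ({len(hits)} claims)")
```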

But there’s some evidence that it’s possible to improve people’s calibration through training. Hubbard Decision Research have trained over 1,000 people in calibration and found that 80% of participants were ideally calibrated after five exercises.12

More research here – particularly larger-scale RCTs, and studies that look at the effects of calibration training on real-world judgements (rather than just trivia questions) – could help strengthen the case for implementing these techniques in practice.

Other techniques that might be promising include structured analytic techniques (SATs)13 for reducing biases in judgement, and the Delphi method14 for aggregating different viewpoints – we discuss these and other promising approaches in more detail later.

Work in this area is relatively neglected

Work on “improving decision-making” very broadly isn’t all that neglected. There are a lot of people, in both industry and academia, trying out different techniques to improve decision-making. The British government spends a lot of money on management consulting15, for example, and there are researchers working on questions related to decision-making at all major universities across a range of different disciplines: including psychology, economics, business, marketing, and political science.16 However, there seems to be very little work focused on rigorously testing different techniques to get strong evidence of effectiveness, or putting the best-proven techniques into practice in the most influential institutions.

For example, there has been a lot of work testing and using “scenario analysis” techniques, with thousands of papers discussing various methods of correcting for overconfidence by considering a wider variety of possible scenarios. But it wasn’t until 2005 that someone published an experiment actually testing whether scenario analysis improved prediction accuracy – and as it turned out, it didn’t.17 This illustrates how, without good evidence on the effectiveness of different techniques, a lot of well-intentioned effort on “improving decision making” might be wasted.

Improving decision-making also seems more neglected than other ways of trying to “improve the system”, such as education – suggesting that extra work here may be more effective at the margin. People often argue for investing in education, or for certain kinds of political reform, for similar reasons to the ones we’ve given here: because these things will help us better tackle all kinds of problems.

For instance, the US government spends around 4.6% of GDP on education (800 billion dollars),18 and in a survey of the top 100 US foundations by GiveWell, US education accounted for 15% of spending, beaten only by healthcare.19 By contrast, there are no sources of government funding or charitable efforts explicitly directed at improving institutional decision-making processes in the ways we’ve discussed. And despite the potential importance of artificial intelligence in the 21st century, we could only identify a handful of people working on systematic methods to forecast its speed of development and likely impacts (we interviewed one of those researchers on our podcast).

What’s more, there’s reason to think that focusing on institutions directly might be a more effective way to improve decision-making than a broad approach to improved education, as it targets a smaller set of people who already have a lot of influence, and focuses more on institutional processes.

It’s worth considering why this work hasn’t received a lot of attention so far, and whether it might be neglected for good reason. One such reason might be if it really is incredibly difficult to get large institutions to adopt new practices – we discuss some of these potential barriers in the next section. In addition, the kinds of large-scale controlled trials needed to rigorously test techniques can be expensive and time-consuming.

However, it might also be the case that this work is neglected simply because not many people are motivated to do it – perhaps most academics don’t find this kind of research all that intellectually rewarding. This suggests that if you are well-placed and motivated to do this work, it could be particularly impactful to do so.

What are the major arguments against this being pressing?

Difficult to get change in practice

Perhaps the main concern with this area is that it’s not clear how easy it is to actually get better decision-making strategies implemented in practice – especially in bureaucratic organisations, and where incentives are not geared towards accuracy.

It’s often hard to get groups to implement practices that are costly to them in the short-term – requiring training, resources, and changes from the status quo – and only promise long-term or abstract benefits. For example, there’s been some resistance to getting prediction markets implemented in practice, as running prediction markets with ‘real money’ is probably an illegal form of gambling (and even if you could get around this, it’s legally complex).20

However, these problems may be surmountable if we can find ways to show decision-makers that techniques will help them to achieve the objectives they care about.

For example, when the Behavioural Insights Team wanted to show that using RCTs could improve the quality of policy decisions, they initially focused on an area where it was relatively easy to show improvements on immediate outcomes that government cared about – saving money.

As we saw, BIT’s first project showed the government that they could notably increase the number of people paying their taxes on time, bringing forward hundreds of millions of pounds in revenue.21 Once they’d proven the effectiveness of their approach in this area, they were able to move into more complex policy areas with harder-to-measure outcomes, but where their impact could potentially be even higher, like education.22

Recognising the difficulties of getting change in practice might also mean it’s especially valuable for people thinking about this issue to develop an in-depth understanding of how important groups and institutions operate, and the kinds of incentives and barriers they face. It seems plausible that overcoming bureaucratic barriers to better decision-making may be even more important than developing better techniques. This is something we plan to explore more in future, by talking to more experts about the most effective ways to change, or work around, the kinds of constraints that influential decision-makers face in practice.

You might want to work on a more specific problem

Suppose you think that climate change is the most important problem in the world today. You might believe that a huge part of why we’re failing to tackle climate change effectively is that people have a bias towards working on concrete, near-term problems over those that are more likely to affect future generations. And so you might consider doing research on how to overcome this bias, with the hope that you could make important institutions more likely to tackle climate change.

However, if you think the threat of climate change is especially pressing compared to other problems, this might not be the best way for you to make a difference – even if you discover something useful about reducing the bias to work on immediate problems, it might be very hard to get that implemented in a way that’s going to directly make a difference to climate change.

Instead, it’s likely better to focus your efforts on climate change more directly, for example working for a think tank doing research into the most effective ways to cut carbon emissions.

The advantage of broad interventions like decision-making is that they can be applied to a wide range of problems. The disadvantage of working in this area is it might be harder to target your efforts towards a specific problem. So if you think one specific problem is significantly more urgent than others, and you have an opportunity to work on that problem more directly, then it is likely more effective to do so.

There might be better ways to improve our ability to solve the world’s problems

One of the biggest arguments for working in this area is that if you can improve the productivity or judgement of people working on important problems, then this increases the effectiveness of everything they do to solve those problems.

But you might think there are better ways to increase the speed or effectiveness of work on the world’s most important problems.

For example, perhaps the biggest bottleneck on solving the world’s problems isn’t poor decision-making, but simply lack of information – people may not be working on the world’s biggest problems because they’re lacking crucial information about what those problems are. Being more rational won’t help them if they don’t have that information.

A lot of work on promoting effective altruism might fall in this category – giving people better information about the effectiveness of different causes, interventions, and careers.

What can you do in this area?

We can think of work in this area as falling into several broad categories:

  1. More rigorously testing existing techniques that seem promising
  2. Doing more fundamental research to identify new techniques
  3. Fostering adoption of the best proven techniques in high impact areas
  4. Directing more funding towards all of the above

All of these areas seem pressing and seem to have room to make immediate progress (we already know enough to start trying to implement better techniques, but stronger evidence will make adoption easier, for example.) This means that which area to focus on will depend quite a lot on your personal fit and opportunities – we discuss each in more detail below:

1. More rigorously testing existing techniques that seem promising

Prof Tetlock’s work has identified thinking styles that lead to significantly more accurate predictions, as detailed in his book, Superforecasting.

The idea here would be to take techniques that seem promising, but haven’t been rigorously tested yet, to try to get stronger evidence of where and whether they are effective. Some techniques or areas of research that fall in this category:

  • Calibration training – improving the accuracy of probability judgements – has a reasonable amount of evidence suggesting it is effective. However, most calibration training focuses on trivia questions – testing whether this actually improves judgement in real-world scenarios could be promising, and help to get these techniques applied more widely.

  • Structured Analytic Techniques (SATs) are a set of techniques developed to reduce cognitive biases in intelligence analysis. Examples of SATs include checking key assumptions and challenging consensus views. These seem to be grounded in an understanding of the psychological literature, but few have been tested rigorously (i.e. with a control group, looking at the impact on accuracy). It could be useful to select the techniques that look most promising, and test which are actually effective at improving real-world judgements.23 It might be particularly useful to pitch some of these techniques directly against each other and compare their levels of success.

  • Methods of aggregating expert judgements, including Roger Cooke’s Classical Model for Structured Expert Judgement, which scores different judgements according to their accuracy and informativeness and then uses those scores to combine them,24 and the Delphi Method, which builds consensus in groups through multiple iterated rounds of questions put to the different members.
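
As a rough illustration of the first approach, here is a deliberately simplified performance-weighted average in the spirit of Cooke’s method. In the real Classical Model, the weights are derived from each expert’s calibration and informativeness on “seed” questions with known answers; here the weights (and forecasts) are simply assumed:

```python
# Simplified performance-weighted aggregation (all numbers invented).
# Each expert gives a probability for the same event; the weights stand
# in for the calibration/informativeness scores Cooke's model derives.
experts = {
    "expert_a": {"weight": 0.6, "forecast": 0.80},
    "expert_b": {"weight": 0.3, "forecast": 0.40},
    "expert_c": {"weight": 0.1, "forecast": 0.55},
}

total = sum(e["weight"] for e in experts.values())
combined = sum(e["weight"] * e["forecast"] for e in experts.values()) / total

print(f"combined forecast: {combined:.3f}")  # 0.655, dominated by expert_a
```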

Practically speaking, this probably means trying to get a position at a behavioural science/research lab interested in working on these kinds of questions and studies (which would very likely require getting a PhD in psychology or a related area of behavioural science). Some labs and researchers we know of whose research interests seem to fall in this area:

There may also be some non-academic organisations with funding for, and interest in running, more rigorous tests of known decision-making techniques:

  • IARPA is probably the biggest funder of research in this area right now, especially with a focus on improving high-level decisions.
  • Consultancies with a behavioural science focus, such as the Behavioural Insights Team, may also have the funding and interest in doing this kind of research. These organisations generally focus on improving lots of small decisions, rather than on improving the quality of a few very important decisions, but they may do some work on the latter (e.g. as mentioned above, BIT have focused not just on small ‘nudges’ but also on improving the quality of policy decisions at a high level by promoting the use of RCTs.)26

2. Doing more fundamental research to identify new techniques

You could also try to do more fundamental research: developing new techniques and approaches to improved judgement and decision-making, and then testing them. This is more pressing if you don’t think the existing techniques are very good.

One example of an open question in this area is: how do we judge “good reasoning” when we don’t have objective answers to a question? (i.e. when we can’t just judge answers/contributions based on whether they lead to accurate predictions or answers we know to be true).27 Two examples of current research programmes related to this question are IARPA’s Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) programme and Philip Tetlock’s Making Conversations Smarter, Faster (MCSF) project, so you could try to get involved with one of the teams working on these projects.

One example of a project funded by the aforementioned CREATE programme is Bayesian Argumentation via Delphi (BARD), which is a collaboration between UCL, Birkbeck, and Monash universities. BARD uses causal Bayesian networks and automated Delphi analyses to help groups of analysts develop, improve and present their analyses. Note that there’s a considerable software development component to this project – suggesting that part of improving decision-making may involve not just developing psychological techniques, but also developing software and tools that can aid decision-making.
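
To give a flavour of the machinery involved: a Bayesian network chains together many conditional-probability updates of the form below. This toy single-hypothesis, single-evidence update (with invented numbers) is far simpler than what a tool like BARD handles, but it is the basic operation being automated:

```python
# Toy Bayesian update: revise belief in hypothesis H after seeing
# evidence E. All probabilities are invented for illustration.
p_h = 0.3              # prior: P(H)
p_e_given_h = 0.8      # likelihood: P(E | H)
p_e_given_not_h = 0.2  # likelihood: P(E | not H)

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(H) = {p_h:.2f} -> P(H | E) = {p_h_given_e:.2f}")  # 0.30 -> 0.63
```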

You could also work on a similar project independently if you’re a researcher who can get funding for this, or, if you’re thinking about doing a PhD, try to work with academics in related areas. The researchers at Monash, UCL, and Birkbeck mentioned above are some we’re aware of, but there are likely many others.

The academics and institutions listed in section 1. (on testing existing techniques) might also be promising places to work if you’re interested in developing new decision-making techniques.

3. Fostering adoption of the best proven techniques in high-impact areas

The website FiveThirtyEight.com has popularised data-driven forecasting methods.

Alternatively, you could focus more on implementing those techniques we currently think are most likely to improve collective decision making (such as the research on forecasting by Tetlock, prediction markets, or calibration training.)28 If you think one specific problem is particularly important, you might prefer to focus on the implementation of techniques (rather than developing new ones), as this is easier to target towards specific areas.

As mentioned above, a large part of ‘fostering adoption’ might first require better understanding the practical constraints and incentives of different groups working on important problems, in order to understand what changes are likely to be feasible. For this reason, working in any of the organisations or groups listed below with the aim of better understanding the barriers they face might be valuable, even if you don’t expect to be in a position to change decision-making practices immediately.

These efforts might be particularly impactful if focused on organisations that control a large number of resources, or organisations working on important problems. Here are some examples of specific places that might be good to work if you want to do this:

  • Any major government, perhaps especially in the following areas: (1) intelligence/national security and foreign policy, (2) parts of government working on technology policy (the Government Office for Science in the UK, for example, or the Office of Science and Technology Policy in the US), (3) development policy (DFID in the UK, USAID in the US), or (4) defense (the Department of Defense in the US, the Ministry of Defence in the UK). (We have upcoming career reviews on some of these paths.) Many parts of government are also now hiring for “behavioural science” specialist roles,29 which might be a good option if you’re qualified for them (which generally means a masters/PhD in psychology or a related discipline, and ideally some policy experience.)
  • Organisations that direct large amounts of money towards solving important world problems – such as foundations or grants agencies. e.g. the Gates Foundation, or the Open Philanthropy Project. See also our profile on working as a foundation grantmaker.
  • Think tanks and academic institutions focused on questions about the long-run future of humanity, which require making judgements about speculative long-term scenarios, such as the Future of Humanity Institute in Oxford and the Centre for the Study of Existential Risk in Cambridge.

You could also try to test and implement improved decision-making techniques in a range of organisations as a consultant. Some specific organisations where you might be able to do this, or at least build up relevant experience, include:

  • Working at a specialist “behavioural science” consultancy, such as the Behavioural Insights Team or Ideas42. Some successful academics in this field have also set up smaller consultancies – such as Hubbard Decision Research (which has worked extensively on calibration training).30
  • Good Judgement is an organisation founded on the basis of Tetlock’s successful forecasting research, which now runs training for individuals and organisations applying these findings to improve predictions.
  • HyperMind, an organisation focused on wider adoption of prediction markets.
  • Going into more general consultancy, with the aim of trying to specialise in behavioural science or decision-making – see our profile on management consulting for more details.

Finally, you could also try to advocate for the adoption of better practices across governments and organisations, or for improved decision-making more generally – if you think you can get a good platform for doing so – for example by working as a journalist, speaker, or academic in this area.

Julia Galef is a good example of someone who has followed this kind of path. Julia worked as a freelance journalist before co-founding the Centre for Applied Rationality, co-hosting the podcast Rationally Speaking (which she still runs today), and starting a YouTube channel with tens of thousands of followers. She’s now writing a book on improving your own judgement, while running the Update Project, which focuses on helping decision-makers improve their models of the world and resolve disagreements more productively. More broadly, she’s aiming to build an intellectual community of influential people across a range of fields who are genuinely trying to be truth-seeking and to resolve disagreements in a productive way. You can learn more about Julia’s career path by checking out our interview with her here.

4. Directing more funding towards research in this area

One challenge for all of the above areas is that it may be difficult to get funding for the kinds of work and research involved. Another approach in this area, therefore, might be to move a step backwards in the chain and try to direct more funding towards work in all of the aforementioned areas: developing, testing, and implementing better decision-making strategies.

The main place we know of that seems particularly interested in directing more funding towards improving decision-making research is IARPA in the US.31 Becoming a program manager at IARPA, if you’re a good fit and have ideas about areas of research that could do with more funding, is therefore a very promising opportunity. There’s also some chance the Open Philanthropy Project may invest more time/funds in exploring this area (they have previously funded some of Philip Tetlock’s work on forecasting). Otherwise, you could try to work at any other large foundation with an interest in funding scientific research, where there might be room to direct funds towards this area.

If you work at any of the organisations listed above, you might also try to advocate for more funds to be directed towards testing and implementing improved decision-making practices, even if you’re not in a position to do the work yourself.

If you don’t have the relevant background to do research or implement better practices yourself, but you think this is important and you’re in (or think you could work up to) an influential position in an important organisation, then you might be able to allocate more funding towards improving decision-making practices where you work (e.g. by funding tests, or hiring someone who could do this work). If you’re in a position to do this, it could be even higher-impact than working somewhere like IARPA, where there already seems to be a lot of motivation to direct funding towards these problems.

We don’t currently think that there are many great direct donation opportunities in this area, and so this probably isn’t the best way to have an impact – at least not for relatively minor donors. If you’re a larger donor, though, you might consider funding academics to do the sort of research outlined in points 1 and 2, or even trying to set up an organisation to conduct and/or fund more of this kind of research.

Next steps

Learn more:

Take action:

  • Consider getting a PhD in behavioural/decision science with one of the groups mentioned above. See if you can find a supervisor and/or funding that would allow you to run large trials of existing techniques, or research new strategies for improved decision-making.
  • If you already have the relevant background (e.g. a PhD, or policy experience), try to work in government (or a similarly influential organisation) where you might be able to either work on implementing better practices directly, or direct resources towards others who can do this work.


Notes and references

  1. “The intelligence community stated it was highly probable that WMD programs existed in Iraq — statements interpreted as 100% certainty by policymakers such as President Bush.” Chang et al (2016). Developing expert political judgment: The impact of training and practice on judgmental accuracy in geopolitical forecasting tournaments. Judgment and Decision Making, 11, 5

  2. Moore, D., Oesch, J., Zietsma, C. (2007). What Competition? Myopic Self-Focus in Market-Entry Decisions. Organization Science. 18, 3, pp.440-454

  3. Moore, D. A., Tenney, E. R., & Haran, U. (2015). Overprecision in judgment. The Wiley Blackwell handbook of judgment and decision making, 182-209. “We review some of the evidence on overprecision in beliefs. This evidence comes from the lab and the field, from professionals and novices, with consequences ranging from the trivial to the tragic. The evidence reveals individuals’ judgments to be overly precise—they are too sure they know the truth.”

  4. “What really happened was typical I think in large bureaucratic organizations, and any big organization where you’re frankly trying to be a hero in doing your job. And NASA had two strikes against it from the start, which one of those is they were too successful. They had gotten [sic.] by for a quarter of a century now and had never lost a single person going into space, which was considered a very hazardous thing to do. And they had rescued the Apollo 13 halfway to the moon when part of the vehicle blew up. Seemed like it was an impossible task, but they did it. […] it gives you a little bit of arrogance you shouldn’t have. And a huge amount of money [was] involved. But they hadn’t stumbled yet and they just pressed on. So you really had to quote “prove that it would fail” and nobody could do that.” Shuttle challenger: Does overconfidence impede decision making?

  5. “In political forecasting, Tetlock (2005) asked professionals to estimate the probabilities of events up to 5 years into the future – from the standpoint of 1988. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Experts were frequently hard-pressed to beat simple actuarial models or even chance baselines (see also Green and Armstrong, 2007).” Mellers et al. (2015). The Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics. Journal of Experimental Psychology: Applied, 21, 1, pp. 1-14

  6. In particular, the development of nuclear weapons means that we now have the ability to kill millions or perhaps even billions with one decision.

  7. Behavioural Insights Tax Trials Win Civil Service Award

  8. The Behavioural Insights Team Update Report 2015-16

  9. Tetlock’s book, Superforecasting outlines the lessons learned from the Good Judgement Project, where different teams of people made predictions about real-world events, and randomized controlled trials were used to identify some of the most effective prediction methods. The most accurate 2% of predictors were dubbed “superforecasters.” When superforecasters were grouped together into teams, it’s been claimed that their predictions were more accurate than predictions made by professional intelligence analysts with access to classified information.

  10. Arrow, K.J. et al (2008). The Promise of Prediction Markets. Science Magazine.

  11. The Good Judgement Project and much of Tetlock’s other research on forecasting was funded by the Intelligence Advanced Research Projects Activity (IARPA) as part of the Aggregative Contingent Estimation (ACE) program.

  12. Hubbard, D., Seiersen, R., and Geer, D. (2016). How to Measure Anything in Cybersecurity Risk.

  13. [A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis](http://www.analysis.org/structured-analytic-techniques.pdf). Prepared by the US Government.

  14. Helmer-Hirschberg, Olaf. Analysis of the Future: The Delphi Method. Santa Monica, CA: RAND Corporation, 1967.

  15. While management consultancies typically have a lot of experience with ‘business strategy’ – improving the strategy, structure, management and operations of an organisation – they’re not typically experts in the errors and biases of human judgement, or in finding ways to overcome them. https://www.theguardian.com/careers/what-does-management-consultant-do

  16. This research tends to focus on developing more accurate descriptive accounts of human decision making, while prescriptive accounts, focusing on how judgement could be improved, are relatively rare within social science research (though this is gradually changing.) “If we want to influence business and government, we must have useful advice to share. Yet most researchers in the other social sciences offer only descriptive research… As a graduate student in the late 1970s, I was trained to be descriptive, prescription was for consultants, not for serious researchers.” Bazerman, M. (2005). Conducting Influential Research: The Need for Prescriptive Implications. The Academy of Management Review, 30, 1, pp.25-31

  17. “Scenario exercises are promoted in the political and business worlds as correctives to dogmatism and overconfidence. And by this point in the book, the need for such correctives should not be in question. But the scenario experiments show that scenario exercises are not cure-alls. Indeed, the experiments give us grounds for fearing that such exercises will often fail to open the minds of the inclined-to-be-closed-minded hedgehogs but succeed in confusing the already-inclined-to-be-open-minded foxes—confusing foxes so much that their open-mindedness starts to look like credulousness.” Tetlock, P. E. (2017). Expert political judgment: How good is it? How can we know?. Princeton University Press. Chapter 7.

  18. Source: https://data.oecd.org/eduresource/public-spending-on-education.htm#indicator-chart

  19. Source: http://blog.givewell.org/2012/05/08/what-large-scale-philanthropy-focuses-on-today/

  20. https://www.quora.com/Are-prediction-markets-an-illegal-form-of-gambling

  21. In 2013 alone, the suggested improvements to tax reminder letters from BIT’s trials are estimated to have brought forward £210 million of tax revenue.

  22. Behavioural Insights for Education – a practical guide for parents, teachers and school leaders.

  23. The biggest existing program testing the effectiveness of different SATs is the CREATE program, funded by IARPA.

  24. http://rogermcooke.net/rogermcooke_files/Cross%20Validation%20SEJ%20RESS.pdf

  25. http://www.tandfonline.com/doi/abs/10.1080/08850607.2016.1230706?src=recsys&journalCode=ujic20

  26. These organisations currently focus mostly on using behavioural science to “nudge” people or inform public policy interventions – as opposed to actually informing the way decisions are made in public policy – but there may well be increasing interest in the latter.

  27. Many of the most important kinds of decisions policymakers have to make aren’t questions with clear, objective answers, and we don’t currently have very good ways to judge the quality of reasoning in these cases. Effective solutions to this seem like they could be very high impact, but we don’t currently know whether this is possible/what they would look like.

  28. Note there’s some overlap here with point 1. above – more rigorously testing existing techniques – since part of what’s required to foster adoption will be providing people with evidence that they work! So going to work at a more “applied” organisation with an interest in rigorous evaluation might provide an opportunity to work on both getting better evidence for existing techniques, and getting them implemented.

  29. Besides the Behavioural Insights Team who work across government departments (in both the UK and the US), there are behavioural science teams in many government departments in the UK (including the Department for Education, Department for Transport, and the intelligence agencies), and in the White House.

  30. It’s worth noting, though, that many of these consultancies work mostly with corporate clients, and so this might not be the best opportunity for immediate impact – but it might be a good way to test out improved decision-making strategies in different environments, from which we could learn how to apply these techniques in more important areas.

  31. IARPA were the main funders behind a lot of the research on forecasting, for example, and have a number of past and open projects focused on this area, including ACE, FUSE, SHARP, and OSI.