Robert Solow once said of the law and economics scholar Richard Posner that he writes books the way the rest of us breathe. Andrew Leigh seems to be in this category, his output apparently accelerating on top of what is no doubt a gruelling schedule as an MP, not to mention his being a father of three.
Anyway, I’ve not yet read his latest, but I did go to the Melbourne launch of his book, where he lavished the breadth of his learning on his audience. I would have liked his speech to show a somewhat greater awareness of the foibles of what Hayek called ‘scientism’.
Randomised controlled trials definitely have some very worthwhile things to offer policymaking, and Andrew’s speech makes that case compellingly. I also endorse his support for randomisation as a modus operandi – not just for all-singing, all-dancing RCTs run by academics at a cost of hundreds of thousands of dollars, but also for everyday randomisation of the kind proposed in The Lean Startup and practised by the most successful IT firms like Google and Amazon.
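To make the ‘everyday randomisation’ point concrete, here is a minimal sketch of the sort of A/B assignment those firms run routinely. The variant names, the conversion rates and the use of a simple two-proportion z-test are my own illustrative assumptions, not anything drawn from Andrew’s speech or book.

```python
import random
from math import sqrt
from statistics import NormalDist

# Hypothetical everyday randomisation: assign each visitor to variant A or B
# by coin flip, record a yes/no outcome, then compare conversion rates with a
# simple two-proportion z-test. All the numbers here are made up for illustration.

random.seed(42)

def simulate_visit(variant: str) -> bool:
    # Pretend variant B genuinely converts a little better (assumed rates).
    true_rate = {"A": 0.10, "B": 0.12}[variant]
    return random.random() < true_rate

assignments = {"A": [], "B": []}
for _ in range(10_000):
    variant = random.choice(["A", "B"])        # the randomisation step
    assignments[variant].append(simulate_visit(variant))

n_a, n_b = len(assignments["A"]), len(assignments["B"])
p_a = sum(assignments["A"]) / n_a
p_b = sum(assignments["B"]) / n_b

# Two-proportion z-test with a pooled standard error.
p_pool = (sum(assignments["A"]) + sum(assignments["B"])) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {p_a:.3f} ({n_a} visits)  B: {p_b:.3f} ({n_b} visits)")
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

The randomisation itself is cheap, which is the Lean Startup point; the hard and expensive part, as the rest of this post argues, is knowing what such a result generalises to.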
But I’ve got an uneasy feeling about how easily randomisation takes on the mantle of ‘gold standard’ for evidence – a claim repudiated by numerous scholars such as Angus Deaton and James Heckman. Here’s Hayek in 1942, though he held the same views up to his death some fifty years later:
In the hundred and twenty years or so during which this ambition to imitate Science in its methods rather than its spirit has now dominated social studies, it has contributed scarcely anything to our understanding of social phenomena… Demands for further attempts in this direction are still presented to us as the latest revolutionary innovations which, if adopted, will secure rapid undreamed of progress.
This idea that we can prove up ‘what works’ and then build a management system around it is OK as a meta-idea, but only if it’s pursued with the scientific caveats it requires. Alas, managers and politicians are impatient with such things, and I fear Andrew might be a little impatient with them too. Just as academia pumps out graduates who have been carefully trained to generate and operate any number of sophisticated models, but poorly trained, if trained at all, to understand those models’ merits and limitations, so it would be easy to build whole systems that generate knowledge using randomised trials while showing little care for how far that knowledge can be generalised – how constrained it is to its context. I tried to explore this terrain in my own dinner address to the Australian Evaluation Society Annual Conference last year.
In any event, these issues may be dealt with in the book. Be that as it may, Andrew gave a great account of himself and I warmly recommend his speech, reproduced below the fold, to all. You’ll learn a lot. I did anyway.