Are the parties getting the resources where they need them?

The complexity of the upcoming election looks set to make vote distribution central to the outcome. In other words, parties’ strengths in Westminster will be decided not just by how many votes they get but also where in the UK those votes are actually cast. This in turn means that the distribution of finite party resources during the election campaign could have a decisive impact. Recent research has shown that effective local campaigning can have a measurable impact on the result in a constituency, which also means that getting the right resources in the right places can be vital.

In the abstract then (as illustrated in Figure 1), we would assume that a party would want financial resources going to their most marginal constituencies, either those that they are seeking to take or seeking to defend. In contrast, there would be little value in a major share of financial resources going to safe seats, whether they are held by the party or by their opponents.

Figure 1: Theoretical ideal relationship between donations and constituency marginality

Things do not work out like this in real life, however. This is because political geography and some of the incentives for giving to constituency parties undermine this theoretical ideal. In the case of the Conservatives, for example, one major problem is that their richer local associations tend to be in wealthier areas of the country, which also happen to have the safest Conservative seats. Labour suffers from a different structural problem, due to the party’s historic links with the Trade Union movement. The challenge here is that the Trade Unions actually have two distinct and often contradictory incentives. They clearly want a Labour government (which creates incentives for giving donations to the marginal constituencies needed to construct a Labour majority), but they also want long-serving MPs supportive of the trade union movement (creating incentives for giving donations to Labour MPs and candidates in safe seats).

So given these institutional factors, it seems worth asking how the parties are actually doing at getting resources to where they are needed. By combining data from the Democratic Dashboard project, which includes information on political donations given to every constituency in 2014, with Pippa Norris’s 2010 constituency results database, it is possible to look at the relationship between total donations and marginality.
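To illustrate the mechanics of that step, the sketch below merges the two datasets and computes a Pearson correlation. This is a minimal sketch in Python: the file names and column labels are hypothetical placeholders, not the actual Democratic Dashboard or Norris exports.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical exports: 2014 constituency donations (Democratic Dashboard)
# and the GE2010 results file (Norris), joined on constituency name.
donations = pd.read_csv("donations_2014.csv")  # constituency, con_donations, lab_donations, ld_donations
results = pd.read_csv("ge2010_results.csv")    # constituency, con_margin, lab_margin, ld_margin

df = donations.merge(results, on="constituency")

# Pearson correlation between each party's donations and its win/lose margin
for party in ["con", "lab", "ld"]:
    r, p = pearsonr(df[f"{party}_donations"], df[f"{party}_margin"])
    print(f"{party}: r = {r:.3f}, p = {p:.4f}")
```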

The graphs below show the three major parties in turn.

Figure 2: Conservative constituency donations by Conservative win / lose margin in seats in GE2010

Correlation r=-0.197, p ≤ 0.001. (Con Margin indicates the majority in Conservative-held seats or how many votes they were behind the incumbent in seats where they are the challenger).

Figure 3: Labour constituency donations by Labour win / lose margin in seats in GE2010

Correlation r=-0.203, p ≤ 0.001. (Lab Margin indicates the majority in Labour-held seats or how many votes they were behind the incumbent in seats where they are the challenger).

Figure 4: Liberal Democrat constituency donations by Liberal Democrat win / lose margin in seats in GE2010

Correlation r=-0.294, p ≤ 0.001. (LD Margin indicates the majority in Liberal Democrat-held seats or how many votes they were behind the incumbent in seats where they are the challenger).

These graphs lead to three observations. First, and most obviously, constituencies across the UK seem to have access to a hugely varied range of financial resources. There are some really big outliers, such as the Conservative Party in Watford, which received more than £234k in donations in 2014, or the Labour Party in Pendle, which got £241k (the Democratic Dashboard classifies both of these seats as ultra-marginal, so this is not entirely surprising). In contrast, no donations were declared by any of the three major parties in 211 constituencies in the UK. Second, it is interesting to note that the distribution of the data points in the graph may reflect political circumstances. This is perhaps most obvious in the case of the Liberal Democrats, where the vast majority of constituencies with access to greater resources seem to be seats that the Lib Dems already hold with majorities of less than 12,000 votes. Given the Liberal Democrats' current national poll ratings, this might reflect a very efficient distribution of resources, offering the party the best possible chance of constructing a fortress in the seats it currently holds.

The final point relates to the statistical relationship between seat marginality and the donations given to political parties. For all three parties, there is a statistically significant negative correlation between the marginality of the seat and the donations a constituency party receives, as we would expect. However, it is notable that Labour (r=-0.203, p ≤ 0.001) has a slightly stronger correlation than the Conservatives (r=-0.197, p ≤ 0.001). This would seem to indicate that Labour is distributing its resources slightly more efficiently than the Conservatives. The Liberal Democrats' correlation is stronger still (r=-0.294, p ≤ 0.001). This is likely a function of the Liberal Democrats having virtually no presence in some constituencies, either in terms of votes or donations. But the shape of the Liberal Democrat graph points towards a fairly efficient distribution of resources, which could prove to be important on Election Day.

There are some big caveats to these graphs. The donations data is from 2014. Much might have changed in the first few months of 2015, with political givers becoming more aware of the key election battlegrounds. It should also be noted that this dataset is based on donations, not spending. So a safe seat with lots of resources might choose to spend its money hiring a mini-bus to allow its party members to deliver leaflets in a more marginal seat, for example. An additional problem is the line between national and constituency campaigning. As Liberal Democrat blogger Mark Pack has pointed out, UK campaign finance law allows for lots of “ersatz local campaigning” that is actually regulated at the national level. Finally, the analysis presented here is rather crude. It would be interesting to build a more complex dataset, employing other variables that might be important to constituency party fundraising (such as whether the incumbent in a constituency is a nationally high-profile politician or whether a Labour MP is trade union sponsored, for example).

Nonetheless in an election with ultra-fine margins where local campaigning looks set to be hugely important, even this limited analysis points towards some interesting questions about how effectively parties are distributing resources. 

The Scottish election now seems to be a debate about full fiscal autonomy. But does this open up the issue of “West Lothian Question Max”?

The two Scottish Leaders’ debates staged in the last week have been fairly refreshing. The first reason for this is to do with the personalities involved. In Nicola Sturgeon, Jim Murphy and Ruth Davidson, Scotland has three very competent and articulate political leaders. Additionally though, the debates in Scotland actually saw substantive disagreements emerging between the parties. Trident is one obvious issue where this occurred. However, perhaps the most important area of debate that has been flagged up is the question of full fiscal autonomy for Scotland.

Full fiscal autonomy would mean Scotland becoming self-reliant in terms of its taxing and spending. Unsurprisingly, the SNP are fully in favour of this measure. Labour, the Conservatives and the Liberal Democrats have come out against the policy, citing a report by the Institute for Fiscal Studies arguing that this arrangement would create a £7.6 billion “black hole” in Scotland’s budget.

Whatever the rights and wrongs of this claim, it is unsurprising that the unionist parties are deploying it. On one level, it is clearly a logical stance for unionists to cite the benefits of shared fiscal arrangements across the constituent nations of the UK. As importantly though, this argument also re-raises the spectre of economic uncertainty, employing rhetoric similar to that used by Better Together during the referendum campaign.

Clearly, the Scottish election campaign is going to be vitally important to the overall election result and the government that emerges from it. However, the discussion of full fiscal autonomy suggests that it could also be vitally important to the constitutional future of the UK. Certainly the issue raises some very big questions, which have not really been addressed yet.

In the shorter term, it would be interesting to know whether either the pro-full fiscal autonomy SNP or the anti-full fiscal autonomy Labour Party regard the issue as a red line in any negotiations that might occur post-election. If both parties were to see the issue as something they could not compromise on, this could considerably complicate any arrangement they might try to form together. This problem could be further exacerbated by an election result that sends very mixed messages about whether people in Scotland want full fiscal autonomy. On the one hand, it looks likely that the SNP will be by far the largest party in Scotland in terms of MPs, sweeping all before them. On the other hand, the majority of voters in Scotland might still have voted for parties that are against full fiscal autonomy, if support for Labour, the Conservatives and the Liberal Democrats is combined. In this situation, what claims could legitimately be made about the wishes of the Scottish electorate?

The second issue is longer term, and arises in the event of the introduction of full fiscal autonomy. It is perhaps best termed “West Lothian Question Max” - in other words, an extension of the original problem identified with devolution by Labour MP Tam Dalyell in the late 1970s. Dalyell argued that establishing a devolved parliament with jurisdiction over certain areas of Scottish policy created a fundamental inequality in the British law-making process: English MPs in Westminster would have no say over these devolved areas of Scottish policy, but their colleagues elected for Scottish constituencies would be able to vote on legislation for England. This is not just a hypothetical problem: research by MySociety found that 21 votes in Parliament since 1997 would have had different results if Scottish MPs were excluded. This is actually a relatively small number considering that MPs voted 5,000 times during this period. However, the votes where Scottish MPs proved decisive included some very high-profile legislation, such as the introduction of Foundation Hospitals in November 2003 and the increase in university tuition fees in January 2004. Neither of these measures had any direct effect on Scotland.
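The MySociety exercise is, at heart, a recount: drop the MPs sitting for Scottish seats and check whether any division flips. A toy sketch of that logic is below; the file layout and column names are invented for illustration, not MySociety's actual method or data.

```python
import pandas as pd

# Hypothetical data: one row per MP per division, with the nation of the
# MP's constituency and how they voted.
votes = pd.read_csv("divisions.csv")  # division_id, mp, nation, vote ("aye"/"no")

def outcome(df):
    """Return the winning side of a division."""
    return "aye" if (df["vote"] == "aye").sum() > (df["vote"] == "no").sum() else "no"

# Re-run every division without Scottish MPs and collect those that flip.
flipped = [
    div_id
    for div_id, div in votes.groupby("division_id")
    if outcome(div) != outcome(div[div["nation"] != "Scotland"])
]
print(len(flipped), "divisions would have gone the other way")
```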

Of all the parties in Scotland, it is the SNP that has most directly acknowledged the West Lothian question in its behaviour, committing that its MPs in Westminster would only vote on matters that directly impact Scotland. However, this stance might have proved problematic in the event of a hung parliament, so it was not surprising in January when Nicola Sturgeon broadened the definition of legislation that SNP MPs would vote on. Sturgeon argued that:

"On health, for example, we are signalling that we would be prepared to vote on matters of English health because that has a direct impact potential on Scotland's budget. So, if there was a vote in the House of Commons to repeal the privatisation of the health service that has been seen in England, we would vote for that because that would help to protect Scotland's budget."

However, the model of full fiscal autonomy advocated by the SNP would torpedo this argument. Scotland’s NHS would be wholly financially independent of the system in England, so the rationale for taking a broader role in Westminster politics would disappear. The same would be true of any Budget bill proposed in Westminster. Yet one of the requirements of even the loosest arrangement between Labour and the SNP would be passing a Budget through the Commons. However, outside the relatively narrow areas that full fiscal autonomy would preserve for UK-wide spending (such as defence and foreign policy), this bill would have absolutely no influence on Scotland. This is the central challenge of the “West Lothian Question Max”, created by the combination of full fiscal autonomy and a government needing MPs elected in Scottish constituencies to sustain itself in office.

This raises two interesting further questions. First, is there a rhetorical strategy that Labour and the SNP can develop to justify this situation? In other words, even in a political set-up with full fiscal autonomy, will it still be possible to argue that there remains a shared political destiny shaped by decisions taken in Westminster? Second, how will voters across the UK react to such a situation, if it were to arise, and what might this mean for the future constitutional stability of the United Kingdom?

Thank you, Google Street View, for a lovely memory

Google is sometimes quite a controversial organisation. The amount of information it holds about us and our world across its various services is vast, and is correctly a cause for concern.

Google Street View has been one of these controversial services. This has been most keenly felt in Germany, where debate about the service has reached the highest echelons of government. Even in the UK, where people have been more receptive to the idea, there have been various stories of people caught in embarrassing situations by the 360-degree cameras on the Google cars. However, I discovered this weekend that I owe Google Street View a big thank you.

We got Gemini when I was 14, and she was my family's cat for 18 years. She had a nice and very simple life, enjoying going out to the garden and sitting on the front step sunning herself, as well as the occasional plate of tuna. In cat years, she was 88 years old, so it was hardly a surprise when she fell ill in November of last year and there was nothing the vets could do. But that does not mean she is not very missed. In fact, I was back at the family home just this weekend, and it seemed very empty without her. No matter how old they are, losing a pet (especially the pet you grew up with) is sad.

And then we discovered something brilliant. Actually credit for this goes to my brother. He was browsing Google Street View and, as you do, started to look at addresses he knew. And then he noticed something.

It is perhaps more obvious on closer examination. 

Needless to say, this made me very happy indeed. 

Back from sabbatical – let the election commence!

As those of you who follow me on Twitter will know, I have largely been avoiding all social media for the past few months, as I have been on a research-focused sabbatical since September. This has provided a fantastic period for me to reflect and start new research projects, but now I am back, feeling refreshed and ready for what promises to be one of the most exciting years in British politics in recent decades.

One of my new year’s resolutions was to blog a bit more, as I think it is a great way to get early versions of research work or just odd ideas down on (virtual) paper. In that spirit, I wanted to reflect briefly on a few things that strike me about the upcoming election. In no particular order, here are five points of interest that could become more prominent in the coming months:

  • The focus of the political class has never been so divided. Yesterday, it seemed that both major political parties were running with two versions of the same election slogan, roughly “You have four more months to save…” For the Conservatives, the conclusion of the sentence was the economy. For Labour, it was the NHS. Within a few hours, predictably, Nigel Farage popped up saying that the election was really about immigration. There is a very clear battle to set the agenda for the election, and its outcome might well play a defining role in deciding who wins.
  • This election will be about segmentation, both geographical and social. My guess is that the total number of votes won by the various parties will play only a limited role in generating the outcome of the election. What might be far more important is how efficiently parties are able to concentrate their vote in the particular places they need it to win seats. Which leads me to the next important point…
  • Opinion polling is going to change. It was great to see Chris Hanretty’s excellent election prediction work featured on Newsnight yesterday evening. The decline of uniform national swing has opened the door for a whole host of new prediction techniques – most famously espoused by Nate Silver in the US – that draw on more complex statistical models and broader datasets. The coverage these methods are getting really demonstrates that the Gallupian paradigm of public opinion research (purportedly, seeking to sample the voice of the nation) is under attack. Instead, there is a growing interest in sub-samples and specific groups deemed to be of importance to the outcome. Prediction is also increasingly probabilistic in nature.
  • That said, the mass is not quite dead. Labour has claimed that it will base its campaign around talking to people and employ social media to mobilise activists. This is certainly a good approach for a party that lacks the financial resources of its rival. However, Labour risks neglecting the important lesson of Obama’s use of activism in the US. His success was not just in mobilising activists, but in building links between his keenest supporters and the apolitical mainstream. This worked at two levels. The obvious tangible example was in fundraising, where Obama harnessed his support-base’s willingness to give as a mechanism to compete for mainstream voters. He also effectively mobilised his activists to do direct contact campaigning. But additionally, and as importantly, he built symbolic links between what his activists felt about the campaign and what mainstream America felt about politics. Of course, we can question the extent to which Obama actually “did this”, as opposed to it being created by broader political, economic and social patterns. But it was vital.
  • Pre-election will matter post-election. The Liberal Democrats might argue that the public haven’t been entirely fair to them this parliament. After all, no modern Westminster politicians have any real experience of coalition government, which inevitably involves compromises. And, given the number of 2010 Liberal Democrat voters likely to move to Labour in 2015, the supreme irony is that Ed Miliband’s chances of ending up in Downing Street are only really still standing because of Nick Clegg’s veto of boundary changes. Yet the Liberal Democrats were astonishingly naïve. During the 2010 election, they emphasised policies (notably tuition fees) which, it very soon became clear, were not their top priorities in coalition negotiations. One lesson from the 2010-15 parliament is that parties will need to think a lot more carefully about their post-election game plan, and how this links to what they say during the campaign.

As I said, just some sketchy thoughts. But I am going to try to blog more in the coming months.

I will also be posting some provisional findings from my sabbatical work in the next few weeks, a big data analysis of 37 million words published by UK think tanks in the past decade. 

Pity the pollsters. This is a tough one.

Another day, another poll appears for the Scottish referendum. This time it is an ICM poll with the Guardian, and – among those who have made their mind up – it puts No at 51 per cent and Yes at 49 per cent. 17 per cent of those polled remain undecided. The past week of the campaign has been notable for the huge role played by polls in driving both the news and the political agenda. Indeed, the devo-max offer from the three major parties at the beginning of the week largely seemed to occur because a YouGov poll put the Yes camp in the lead for the first time.

I have recently been writing on the idea of public opinion, and one really interesting thing that comes across from the literature is the tension between polling as a science and an art. As Susan Herbst details in her book Numbered Voices, an account of the early days of the modern opinion polling industry in the United States, one of the great rhetorical innovations by George Gallup and his contemporaries was arguing that public opinion could be measured in a scientific way, certainly in comparison with older methods such as straw polls. But the truth is that, no matter how rigorous the method, opinion polling has always required a healthy dose of creative thinking and skilful judgement.

This truth is especially evident in the case of the referendum, as there are so many factors which might have an impact on the final result. Going into the last few days of the campaign, I would list five unknowns that mean we should take all polls, no matter how well constructed, with a big pinch of salt:

  1. What does “don’t know” actually mean? Journalist Dan Hodges was quick to tweet after the release of today’s poll that “don’t know” was a euphemism for “no”. The theory here seems to be quite close to the shy Tory factor or the Bradley effect, namely that people have already made up their minds to vote no, but are not willing to publicly admit it. This may or may not be true, but it certainly seems that, even if people are genuinely undecided, they might make their minds up in a predictable way, which would seem to make it more likely they would support the status quo.
  2. Modelling the electorate is fiendishly difficult. General elections are relatively easy to model, as pollsters have a wealth of data on previous contests. When an interviewee says they are “quite likely” to vote, for example, then how that is understood is based on a range of pre-existing polling and turnout data. But the referendum is a unique case. Partially, this is because it is asking an unprecedented question. In addition, giving 16- and 17-year-olds the vote adds a completely new cohort of would-be voters. But perhaps most importantly, the 97 per cent registration rate reported earlier this week is completely unprecedented. This means that many more members of the public will be eligible to go to the polling booths on the day of the vote than has ever previously been the case (whether they do or not is a very different matter. See point 4 below).
  3. What impact could postal voting have? Sky News is already reporting stories about postal voters who are “regretting their choice”. Nearly 800,000 people will vote by post. This has a couple of practical ramifications. The first is that postal voters, obviously, will be immune to events in the last days of the campaign. The second is that postal voting on this scale presents a methodological issue for pollsters. Polls are a snapshot of public opinion at the time they are taken, so we would assume that the polls immediately before the referendum will offer the most accurate predictions. However, when nearly 20 per cent of the electorate have already cast their vote, even a final poll may not accurately reflect the choices they made at the time of voting.
  4. Who will be able to get their vote out? A lot has been made of the Yes campaign’s grassroots mobilisation, as distinct from the more traditional top-down approach of Better Together. These characterisations are probably a bit glib, but there is no doubt that, while Better Together can fall back on the organisational muscle of the Labour Party, the Yes campaign has linked itself with a broad range of civic and political groups. Given the very high registration rate among voters and the seeming closeness of the race, effective get-out-the-vote efforts from either side might carry the day.
  5. How do we understand Labour supporters moving into the Yes camp? Slightly less of a polling issue this one, but one really interesting element of the referendum campaign thus far has been the number of Labour supporters who are moving into the Yes camp (as Peter Kellner notes in his commentary on the recent YouGov poll that put Yes in the lead). There are interesting parallels here with Rob Ford and Matthew Goodwin’s important revisionist work on UKIP, where they argue that a significant part of UKIP’s electoral support is coming from former and natural Labour voters. Ford and Goodwin’s argument is that this UKIP-supporting group feels estranged from the modern Labour Party and Westminster politics, and is deeply economically insecure – many of the same characteristics that are driving the movement towards the Yes camp in Scotland.

Of course, it may be that, come next Friday, the pollsters have got it dead right. It may also be that a last minute swing to one side makes these variables of academic interest only. But, in the meantime, spare a thought for the pollsters grappling with one of the most difficult challenges they will have ever faced.

ICA Presentation: Big Data and Public Opinion

Last week, I was lucky enough to go to Seattle to present a paper at the International Communication Association annual conference. This was my first ICA, and I enjoyed the experience greatly. I featured on a wonderful panel entitled Really Useful Analytics and the Good Life with my colleague Nick Couldry, as well as Helen Kennedy and Giles Moss from Leeds University, and Caroline Bassett from Sussex University.

The slides from my presentation are available here, but in this blog entry, I just wanted to outline the core shape of my argument, which will hopefully provide a framework for future work.

The first thing to say is that this paper was rather different to the work I have previously done in this area. With Ben O'Loughlin, I have written a lot about what we have termed semantic polling (Anstead and O'Loughlin, Forthcoming). In these pieces, we worked to both understand and theorize about new research techniques that harvest vast amounts of data from social media (normally Twitter) to understand how the public are reacting to specific events or politicians. In those earlier papers, Ben and I tried to think about different understandings of public opinion – outside the dominant opinion polling paradigm established in the 1930s – and thought about how they problematized the arguments related to semantic polling.

The datasets used by semantic pollsters are certainly big, maybe running into many millions of tweets. However, for the paper at ICA, I wanted to draw a distinction between big data (defined simply through the size of the dataset or number of data points being worked with) and Big Data. The latter is distinct because it employs a fundamentally different epistemological framework to traditional social science research methods. This argument is clearly put in a couple of places. Most famous (or infamous, depending on perspective) is Chris Anderson’s claim that theory is now irrelevant:

“Out with every theory of human behaviour, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves” (Anderson, 2008).

More recently Mayer-Schonberger and Cukier have argued that:

“The era of big data challenges the way we live and interact with the world. Most strikingly, society will need to shed some of its obsession for causality in exchange for simple correlations: not knowing why but only what” (Mayer-Schonberger and Cukier, 2013).

Such arguments have proved to be very divisive for obvious reasons (Couldry, 2013), yet their ramifications are certainly worth considering. Clearly government, political parties and other civic organisations have a great interest in big data and what it can tell them about the public. At the same time, traditional methods for understanding public opinion are, for various reasons that I detail below, struggling or at least evolving rapidly. So the question is: do we need a new theory of public opinion to cope with these developments?

As Herbert Blumer noted as far back as the 1940s (Blumer, 1948), public opinion research has always been rather averse to theory, instead focusing its energies on practical methodological issues. However, one rather useful historically grounded theoretical framework has been outlined by the American academic Susan Herbst. Employing the idea of what she terms infrastructures of public opinion, Herbst argues two things: first, that the definition of public opinion varies across time and place; and second, that the definition actually has three components. These are shown, with historical examples, in Table 1 below (derived from Herbst and Beniger, 1994, Herbst, 2001).

Table 1: Two previous examples of public opinion infrastructures, derived from the work of Susan Herbst

An infrastructure of public opinion therefore consists of a method for measuring public opinion; an understanding of politics which shapes that public and how it is conceived; and forums in which public opinion is discussed. This tripartite model has taken quite distinctive forms in different historical periods and geographies, as the comparison between pre-revolutionary France and the mid-twentieth century United States in the table indicates.  

Before discussing how we might fit the development of Big Data research into this model, it is also worth noting something about more traditional techniques and understandings of public opinion. In many ways, the mid-twentieth century US paradigm described above persists, at least in the way we talk about public opinion. However, there are a number of reasons to suggest that this infrastructure of public opinion is in decline. These include:

  • The growing role for qualitative research. While opinion polling still plays a huge role in the development of political strategy, recent decades have seen growing prominence for qualitative researchers. While most researchers would claim that both techniques have to be combined for a rich understanding of public opinion, it is interesting to note that the most famous political researchers in the UK in recent decades have tended to be more associated with qualitative research than with polling, while the focus group has taken on a hugely important symbolic significance in contemporary politics (Gould, 2011, Mattinson, 2010, Schier, 2000).
  • Declining response rates to telephone surveys. This is a much considered problem, especially for American pollsters. It is now not uncommon to get response rates in the single digits, which is undermining traditional methodological approaches to public opinion research (Groves, 2011).
  • The development of internet panel surveys. New online methods have challenged traditional telephone and face-to-face approaches, and changed the marketplace for public opinion research (AAPOR, 2009).
  • The use of more complex statistical modelling techniques. Partially as a result of lower response rates and partially because of internet panel surveys, it can now be argued that pollsters have moved from sampling the population to modelling it. In short, the poorer quality of the raw data going in (be this because of the inherent biases of online panel polls or lower response rates for telephone samples) means that more statistical jiggery-pokery is required to create representative numbers (Groves, 2011). A toy example of this kind of weighting is sketched after this list.
  • The rise of alternative metrics and predictors of public opinion. Opinion pollsters no longer have the field to themselves. Most famously, Nate Silver employs Bayesian predictive modelling to predict US elections, while new social media research techniques have claimed to reflect public opinion (Anstead and O'Loughlin, Forthcoming, Silver, 2012).
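To give a concrete flavour of the "modelling rather than sampling" point above, here is a toy sketch of raking (iterative proportional fitting), the sort of weighting routine used to pull a skewed sample back towards known population margins. The categories and target shares are invented for illustration; nothing here reflects any real pollster's scheme.

```python
import numpy as np
import pandas as pd

# A tiny, skewed "sample" and some invented population targets.
sample = pd.DataFrame({
    "age":    ["18-34", "35-54", "55+", "55+", "55+", "35-54"],
    "region": ["North", "South", "North", "South", "South", "North"],
})
targets = {
    "age":    {"18-34": 0.28, "35-54": 0.34, "55+": 0.38},
    "region": {"North": 0.45, "South": 0.55},
}

# Raking: repeatedly scale weights so each variable's weighted margins
# match the targets; alternating over the variables converges in practice.
w = np.ones(len(sample))
for _ in range(50):
    for var, dist in targets.items():
        for cat, share in dist.items():
            mask = (sample[var] == cat).to_numpy()
            current = w[mask].sum() / w.sum()
            if current > 0:
                w[mask] *= share / current

print(np.round(w / w.mean(), 2))  # relative weight given to each respondent
```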

If we want to bind many of these trends together in an over-arching narrative, it perhaps relates to the decline of mass society. Traditional opinion polling, certainly as conceived by George Gallup and his contemporaries (and characterised by Herbst), was focused on understanding the political nation as a singular entity. However, as the political nation has become more complex and differentiated, this model has started to look a lot less applicable. Therefore, as we start to sketch out an infrastructure of public opinion in which Big Data is becoming more influential, it is also important to hold in mind that this is not a wholly revolutionary development, but one continuous with other, older changes in the measurement and use of public opinion.

So what might a Big Data infrastructure of public opinion look like? One thing to note is that it is not really clear yet – we are still in the very early stages of the use of Big Data. What follows therefore is a slightly speculative attempt to start to answer this question.

Perhaps the easiest place to start is with an epistemology of public opinion and Big Data. I made a few points in my presentation. These are perhaps the most important:

  • As outlined above, Big Data approaches are correlative, meaning they are more interested in “what” than “why” questions.
  • Opinion polling is technically probabilistic in nature (hence the focus on margin of error). However, probability becomes far more important with Big Datasets, especially when the aim of the activity is prediction. As such, the very nature of the output analysis that is presented to politicians and the public might be different (Silver, 2012).
  • Big Data is integrative. In particular, Big Data techniques often seek to use multiple datasets – both structured and unstructured – from a variety of sources. This represents a dramatic shift in the kind of information that can be processed and used to construct public opinion (Mayer-Schonberger and Cukier, 2013).
  • Another important consequence of Big Datasets is that they can be more effectively sub-divided. Recent years have seen a rise in the so-called super-poll (in the UK, this technique is most famously used by Lord Ashcroft), where a sample of 25,000 is taken. The reason for this is that sub-samples can be more easily extracted from the dataset without greatly increasing the margin of error. This would not work with a traditional 1,000-person poll (see the sketch after this list). Big Data is largely immune to this problem, and can very easily be organised in a way that allows for specific groups to be studied.
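The sub-sample point rests on the standard margin-of-error formula for a proportion, MoE = z * sqrt(p(1-p)/n). A quick sketch shows why a 25,000-person super-poll can be sliced into sub-groups where a conventional 1,000-person poll cannot:

```python
import math

def moe(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1,000 poll:                  +/- {moe(1000):.1%}")  # ~3.1%
print(f"n=2,500 (10% of a super-poll): +/- {moe(2500):.1%}")  # ~2.0%
print(f"n=100 (10% of a 1,000 poll):   +/- {moe(100):.1%}")   # ~9.8%
```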

What though of ontology? What idea of the public might be embedded in Big Data?

  • One optimistic reading of this turn of events is that measurement of public opinion will become more conversational, rather than being simply about atomised individual opinion. This may even have the consequence of decentralising power as the tools for measuring public opinion become more accessible.
  • More pessimistically, big data techniques may alienate citizens even more from public opinion collection by harvesting unconscious expressed preferences, drawing on what has been termed “data exhaust”.
  • So this raises a question: how would this model work with classic liberal democratic ideas? If citizens are engaging in democracy but don't know they are, what does this mean? Are they really citizens anymore? Certainly the liberal idea of participation as an educative moment, which embeds an individual more deeply in the political system, would no longer make sense.

Finally, in what forums might public opinion be discussed in a Big Data infrastructure of public opinion?

  • Big data is already starting to bleed over into mainstream political journalism (as Ben and I have detailed in our work), but is still something of a novelty. As yet, it is not as respected as more traditional public opinion research methods.
  • However, it is questionable how much big data analysis citizens will get access to, and how transparent its construction will be. This is especially true if we are talking about data held in the private sector, such as social networks or health companies.
  • So this suggests a potentially interesting double standard: the public might be given access to more frivolous analysis (what Big Data says about a reality TV show, for example), while important information is held by government and corporations (how Big Data is used to influence healthcare policy, for example).
  • But it is important not to suggest that government is a singular entity. Some parts of government are clearly interested in big data, but it is not clear how much legitimacy various policy actors attribute to it (for example, whether the civil service, the executive, MPs or local councils have a great interest in it). What interest will MPs have in Big Data, for example? Will they take it more seriously than half a dozen letters from constituents?

These are really just some provisional ideas which I hope to fashion into something more substantial in the next few months. But any comments or questions are very welcome indeed!


Bibliography

AAPOR 2009. AAPOR Report on Online Panels, Washington D.C., American Association of Public Opinion Researchers.

ANDERSON, C. 2008. The end of theory? [Online]. Available: http://www.wired.com/science/discoveries/magazine/16-07/pb_theory [Accessed 27th June 2013].

ANSTEAD, N. & O'LOUGHLIN, B. Forthcoming. 1936 and all that: Can semantic polling dissolve the myth of two traditions of public opinion research? In: GIBSON, R. K., CANTIJOCH, M. & WARD, S. (eds.) Analyzing Social Media Data and Web Networks: New Methods for Political Science. Basingstoke, England: Palgrave Macmillan.

BLUMER, H. 1948. Public opinion and public opinion polling. American Sociological Review, 13, 542-549.

COULDRY, N. 2013. A Necessary Disenchantment: Myth, Agency and Injustice in a Digital World. Inaugural Lecture, London School of Economics and Political Science. London: LSE.

GOULD, P. 2011. The Unfinished Revolution: How New Labour Changed British Politics for Ever, London, Abacus.

GROVES, R. M. 2011. Three Eras of Survey Research. Public Opinion Quarterly, 75, 861-71.

HERBST, S. 2001. Public Opinion Infrastructures: Meanings, Measures, Media. Political Communication, 18, 451-464.

HERBST, S. & BENIGER, J. R. 1994. The changing infrastructure of public opinion. Audience making: How the media create the audience, 95-114.

MATTINSON, D. 2010. Talking to a Brick Wall: How New Labour Stopped Listening to the Voter and Why We Need a New Politics, London, Biteback.

MAYER-SCHONBERGER, V. & CUKIER, K. 2013. Big Data: A Revolution That Will Transform How We Live, Work and Think, London, John Murray.

SCHIER, S. E. 2000. By Invitation Only: The Rise of Exclusive Politics in the United States, Pittsburgh, Pa., University of Pittsburgh Press.

SILVER, N. 2012. The Signal and the Noise: Why So Many Predictions Fail – But Some Don't, New York, Penguin.

A brief note on methods

I rarely do this, but it seemed worth preparing a brief methodological description of the data in my second Clegg versus Farage blog entry, which has just been published. I do this for two reasons. First, because it is good to be transparent when it comes to this kind of data. Second, because I am starting to work with some of these new techniques that allow for the analysis of bigger textual datasets. The Farage-Clegg dataset is reasonably small (approximately 30,000 words). In theory, however, the same techniques could be scaled quite effectively for much larger datasets running into millions of words. So watch this space!

Constructing the dataset

For the PSA blog post, I was interested in examining how post-debate coverage presented the performance of the two men. In order to do this, I first used Lexis Nexis to gather all British newspaper articles that made major mentions of Clegg, Farage and the debates between 27th March 2014 (the day after the first debate) and 4th April 2014 (two days after the second debate). You can find the results of this search in this document. In total, it includes approximately 480 articles or 178,000 words.

Generating the tag cloud

In an attempt to have a first look at the data, I entered it into the tag cloud generation site TagCrowd. I then cleaned it, removing any words that had been artificially created by Lexis Nexis, for example.

This generates quite an aesthetically pleasing tag cloud, but its usefulness for this kind of exercise is actually quite limited. Why is this? Two reasons, really. First, the tag cloud chews through the whole document. It tells us how often words appear in the coverage, but tells us little about how these words relate to each other. Second, the impression given by the tag cloud can be quite artificial. One obvious point to make: the size of a word reflects not just the number of appearances it makes, but also the length of the word.
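For completeness, the same sort of cloud can be generated locally rather than through a website. A minimal sketch using the Python wordcloud package is below; "coverage.txt" is a placeholder for the cleaned Lexis Nexis export, not an actual file from this project.

```python
from wordcloud import WordCloud
import matplotlib.pyplot as plt

text = open("coverage.txt", encoding="utf-8").read()

# Word size is driven by frequency; max_words caps the cloud at 100 terms.
cloud = WordCloud(width=800, height=400, background_color="white",
                  max_words=100).generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```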

Cleaning the dataset to generate Clegg- and Farage-specific sentences

In particular, I wanted to examine what qualities were attributed to the two men’s performance by the media after the debates. So I now turned to two text analysis tools called QDA Miner and WordStat. In the first instance, I used QDA Miner to search for any sentences in the corpus that featured Clegg or Farage, and auto-coded them accordingly. I then exported these to WordStat, where I could analyse the make-up of these two datasets and, most interestingly, compare them.

It should be noted that some sentences may feature twice in the dataset, as they could have featured both Clegg’s and Farage’s names. There is an option to exclude these double references, but since this was just quite a quick and dirty analysis, I let them appear twice.

I used WordStat to pull out the 250 most frequently used words, with frequencies calculated as a percentage of all the words in each dataset. The tables below show the calculations used to rank the words. The most distinctive Farage word is “PUTIN”: it accounted for 0.30 per cent of the words in the Farage dataset but only 0.18 per cent of the words in the Clegg dataset, a difference of 0.12 percentage points.
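The ranking calculation is simple enough to sketch in a few lines of Python. This is not the QDA Miner / WordStat pipeline itself, just an approximation of the same logic; the two input files are hypothetical exports of the Clegg and Farage sentence sets.

```python
import re
from collections import Counter

def word_freqs(path):
    """Return each word's share of all words in the file, as a percentage."""
    text = open(path, encoding="utf-8").read().lower()
    counts = Counter(re.findall(r"[a-z']+", text))
    total = sum(counts.values())
    return {w: 100 * c / total for w, c in counts.items()}

clegg = word_freqs("clegg_sentences.txt")
farage = word_freqs("farage_sentences.txt")

# Distinctiveness = difference in percentage share between the two datasets,
# e.g. "putin": 0.30 (Farage) - 0.18 (Clegg) = 0.12 percentage points.
diffs = {w: farage.get(w, 0) - clegg.get(w, 0) for w in set(clegg) | set(farage)}
for word, d in sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(f"{word}: {d:+.2f} pp")
```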


Shouting across each other: post-debate coverage of the Clegg and Farage broadcasts

This post was originally published on the Political Studies Association's Insight blog, following the Clegg-Farage debates.

---

A couple of weeks ago, I blogged on the insight blog about the incentives for both the Liberal Democrats and the United Kingdom Independence Party to take part in the two-way Nick Clegg vs. Nigel Farage debates. Broadly, the argument was that – as the two parties were not really competing for the same base of voters – the debate would ultimately serve the interests of both parties.

It is too early to say yet whether that prediction is true. Post-broadcast polls following both debates suggested that, in the eyes of the audience at least, Farage had won a comfortable victory over Clegg (see here for public reaction to the first debate, and here for reaction to the second debate). Broadly, we should not be surprised that Clegg came off worse in these snap polls. As he himself knows from his 2010 debate experience, novelty is a powerful weapon, at least in the short term.

However, despite the heavy coverage they received in the immediate aftermath, the televised debates still seem to have had little impact on the polls. Most surveys conducted after the debates, whether asking about potential general election voting intention or European Parliament preferences, showed no real discernible change in support levels for the two parties (the one exception being a Sunday Times / YouGov poll which had the Liberal Democrats down two points, and UKIP up by five).

However, and as importantly for how politics is going to play out in the future, the debates did make very evident some of the rhetorical dividing lines that currently exist in British politics. In order to better understand this, I conducted a very quick study of post-debate media coverage.* Broadly, the sample I worked with was newspaper coverage of the debate published between 27th March 2014 (the day after the first debate) and 4th April 2014 (two days after the second debate). You can find the results of this search in this document. In total, it includes approximately 480 articles, containing some 178,000 words.

A very simple way of visualising this is a tag cloud. With this method, the size of a word equates with how frequently it occurs in the text. A tag cloud of all the newspaper articles’ text is shown below. This diagram tells us a few things about post-debate discussion. Certainly, questions of identity were heavily emphasised (English, Britain, British, Europe, European etc.). Additionally, what political scientists Frank Esser and Paul D’Angelo term meta-coverage seems to be at the forefront of media reporting. This is when political reporters focus on who has won or lost, and the political strategies that have led to these outcomes (so in this tag cloud, words such as per cent and polls might represent meta-coverage. References to the other absent party leaders might also fit into this category). While they do feature, words like immigration and jobs are surprisingly small.

Figure 1: Tag cloud of the 100 most frequently used words in post-debate newspaper coverage

However, it should be noted that, as a method to understand large bodies of text, tag clouds have a number of important limitations. The first issue is a simple presentational point. The size of words reflects not only the frequency of their use, but also the length of the word (so in the above example, the size of Debate and Mr reflects a similar number of uses, even though the former is far more prominent). Additionally, tag clouds only tell us how often words were used in a piece of text, but fail to tell us much about the relationships that exist between words.

Table 1: Most distinctive words in Clegg- and Farage-focused sentences

In order to overcome this difficulty, I deployed a second method using the text analysis software package QDA Miner / WordStat. First, I extracted every sentence in the dataset that referenced Farage or Clegg. I then had two datasets which I could compare. There are a number of things that can be done with this kind of data, but a simple and quick way of examining it is to look for the largest discrepancies between them, i.e. words that appear a lot more in one dataset than the other. I did this for both politicians. The findings are presented in Table 1.

This offers us a few insights that are not available from the tag cloud. In particular, it suggests that there were two quite different debates going on, with Clegg and Farage talking across each other. Ironically, considering he was facing the leader of the United Kingdom Independence Party, it is Clegg who is most frequently referenced in conjunction with the EU, European, membership, and Britain. In other words, the Deputy Prime Minister’s performance in the debate does seem to have been reported through the prism of Europe. In contrast, there are two distinct strands to the coverage of Farage’s performance: first, a focus on his comments about Russia, Ukraine and Vladimir Putin (Putin, admire, president and Russia); and second, more populist political issues (immigration, white and working).

This divergence is interesting. Recent research, notably Robert Ford and Matthew Goodwin’s Revolt on the Right, argues that the success of UKIP has very little to do with popular feeling about the European Union, and much more to do with economic insecurity and a broader alienation from the political class. Therefore to make UKIP all about Europe – and also to try to argue against them on those terms – is never going to work. In this context even the attacks on Farage about his alleged support for Putin will likely have little impact, with voters interpreting them as being either highly abstract, or an attempt to smear the party by a combination of established politicians and the mainstream media.

It should be noted there are huge limitations to the “quick and dirty” method I have employed here. The dataset does not examine what the candidates actually said, but only media coverage of the debates. Sadly, there is no full transcript of the debates available at the moment. Furthermore, the analysis excludes social media commentary (although the think tank Demos had an excellent go at doing some of this kind of analysis on the debate night itself). The method I have used is also relatively crude, and could be improved by either more rigorous quantitative significance testing or more qualitative human engagement with the raw data.

Nonetheless, the results do point towards something interesting. Arguably the reason that Clegg lost both the debates was not because the British public disagree with him on Europe. In fact, polling evidence would suggest a majority of them more closely identify with his position than with UKIP’s (even if they do regard Europe as being a relatively insignificant issue). Rather, Clegg seems to have lost the debates because he was perceived to be the representative of the political class against Farage’s plucky everyman. Breaking this dynamic is the real challenge for mainstream politicians.

---

*So as not to disrupt the flow of this blog post, I have produced a separate and much more detailed methodological discussion of my analysis on my own blog here. You will also find a more complete explanation of the Clegg and Farage specific word lists and how they were generated in this post.

Nick Clegg vs. Nigel Farage. Whose interest does it serve and what might it mean?

This post was originally published on the Political Studies Association's Insight blog, prior to the Clegg-Farage debates.

---

The run-up to the European Parliamentary elections on 22nd May will see two live debates between the leader of the Liberal Democrats, Nick Clegg, and the leader of the United Kingdom Independence Party, Nigel Farage. The first debate will be broadcast on LBC Radio on 26th March, with the follow-up appearing on BBC television on 2nd April. The original idea for the debates came when Clegg challenged Farage to a joint appearance in February. After a few days’ consideration, UKIP accepted the proposal, with the parties ultimately agreeing on the two-debate format.

Why was the proposal made and accepted? By way of explanation, a few general points should first be made. First – and fairly obviously – politicians only ever agree to take part in live debates when they feel they have something to gain from them. However, the cost-benefit calculation is complicated by the high stakes at play in live debating. Simply put, when politicians put themselves in this situation a lot more can go wrong than can go right. It was for this reason that American political scientist Alan Schroeder called his history of American Presidential debates Fifty Years of High Risk Television.  

In practice, the politician with the greatest incentive to debate is likely to be trailing in the polls. After all, they stand to benefit from shaking the contest up with a good performance and also have little to lose in the event of a bad performance. However, since a debate requires at least two participants, the poll-leader is likely to face exactly the opposite equation (i.e. since they are already winning they have little to gain from a good performance, while a bad performance could really undermine their chances). As such, they are likely to veto any debate proposals. This is one of the reasons why it took so long for the United Kingdom to have pre-election Prime Ministerial debates. While numerous invitations were offered over the years by the parties playing catch-up, the idea was always nixed by the party that was leading in the polls, and thus had less to gain. Similarly, while vast quantities of ink have been spilt creating the mythology of the 1960 American Presidential debates, it is worth noting that there was not a repeat performance until the Carter-Ford contest of 1976, precisely because the 1960 debate became so linked to Nixon’s defeat.

Figure 1: Liberal Democrat poll rating vs. Others in ICM polls, General Election 2010 - March 2014

The Clegg-Farage agreement to debate reflects this basic logic, at least to some extent. This is most obvious in the case of Nick Clegg. Ever since the early months of the coalition government, the Liberal Democrats’ poll ratings have struggled, while UKIP’s rise has regularly placed the Liberal Democrats in fourth position, trailing the anti-European Union party. This is shown in Figure One, which is based on ICM polling data from the start of the 2010 election until the present (the raw data for the graph is available from The Guardian. Note that the ICM dataset does not actually include the polling share for UKIP, but only the three major parties and “others”. However, the vast bulk of this group indicates support for UKIP).

UKIP too have an incentive to agree to the debate. While their poll ratings are buoyant and suggest a good performance in the European election, they still remain a fringe party in British politics. As such, they have a lot to gain from the exposure offered by prime time media coverage.

There is also an additional factor in play which may have encouraged both parties to agree to the debate. In reality, they are not in direct competition with each other. Realistically, there are very few voters who are going to spend the next few weeks weighing up the relative merits of a vote for the Liberal Democrats or UKIP. According to research done by YouGov for Prospect Magazine, only 15 per cent of citizens who say they currently support UKIP claim to have voted Liberal Democrat at the last election. By challenging Farage so directly, Clegg seems to be trying to cast himself as the authoritative voice of British Euro-enthusiasm – the politician who is unafraid to take on the little Englander tendency. While such a position might not be very popular with many among the electorate, a full-throated attack on Farage might pull a few points back to the Liberal Democrats (especially when both David Cameron and Ed Miliband are struggling to clearly articulate their positions on Europe). Similarly, Clegg would seem to be the perfect target for Farage’s strongest rhetorical device – an attack on a self-interested political class disconnected from the concerns and values of ordinary voters. So the debate might be a rare win-win scenario for both parties.

What does the broadcast of this debate mean for the future? There may be some interesting ramifications for any potential 2015 election televised debate. In 2010, the debate was restricted to the three major parties. Obviously, this approach does create certain problems in a parliamentary democracy with a complex party system. For example, nationalist parties are excluded even though they might be major parties or even parties of government in their region. Whether to include UKIP in 2015 could present an even bigger problem. On the one hand, the party will likely still have no seats in Westminster. However, it might – if it finishes top of the polls in May – have won a nationwide election, and could continue to score highly in opinion polls. It will likely also be fielding candidates across the country. At the very least, the negotiation process will be a lot more complex than the discussions in the past month, as the Conservatives and Labour will also be involved, and bring a much more complex tapestry of interests to the table. 

The public sphere, imagination and nationalism: some thoughts

One of the wonderful things about working at the LSE is the fantastic guest speakers we are able to bring in to visit us. It really is a great privilege. Yesterday was an especially exciting day, as political theorist Professor Nancy Fraser came and spent the afternoon at a symposium organised by my colleague Professor Nick Couldry along with colleagues at Goldsmiths.

Professor Fraser is perhaps best known – among many great achievements, it should be said – for her critical perspective on Jurgen Habermas’s work. In particular, Fraser famously argued that Habermas’s original, historically grounded construction of the public sphere, articulated in The Structural Transformation of the Public Sphere (1962 / 1989 in English translation), was highly exclusionary, shutting out many people, especially women.

The event concluded with remarks from Professor Craig Calhoun, the Director of the LSE, responding to Fraser and the comments made by other speakers over the course of the afternoon. Craig’s remarks were interesting for a number of reasons, but one particular comment he made stuck with me (in fact, to the point that I asked a question about it in the subsequent Q and A). This was about the idea of political imagination, and in particular the failure of political imagination in the public sphere. This is an interesting comment in its own right, but it also echoes concerns I have encountered from other quarters in different ways in the past few months. My co-author on many pieces of work, Professor Ben O’Loughlin of Royal Holloway, spent a significant proportion of his inaugural professorial lecture last year talking about the death of imagination (you can watch a video of the lecture here). Ben’s argument was slightly different – that our obsession with recording the present is undermining our ability to think about the future, so we struggle to dream of a different and better world – but many of the ramifications are the same. Similarly, Professor Justin Lewis of Cardiff University has just published a new book entitled Beyond Consumer Capitalism: Media and the Limits to Imagination, dealing with many of these themes.

All this got me thinking: what role does imagination play in contemporary politics? Well, one thing to note is that politicians and political communication do still seem to rely to a great extent on imagination, just not of the optimistic kind. Many of the most famous American political ads – such as Daisy, the infamous Willie Horton spot, or Wolves – rely on tapping into fears that voters might have about the future. What is much harder to find is a positive vision or the articulation of alternatives. So imagination is used to promote inertia rather than alternatives.

But it does strike me that there is one place in British politics today where imagination is central to an important debate, and this is the discussion surrounding the Scottish referendum on independence. Now, the idea that discussions about nationalism lend themselves to imagination is hardly news. Benedict Anderson famously argued that nationality was constructed around imagined communities. That argument is about retrospectively creating a shared past. But what nationalism also does is offer a chance to dream, to imagine a future where a different kind of society can be built. Nationalist projects also allow radically divergent visions of future societies to be bedfellows, with differences effectively papered over, in a way that normal politics does not allow.

There are of course very substantive issues involved in the independence debate. Indeed, one reading of last week’s arguments about European Union membership and currency arrangements is that the discussion had suddenly been elevated to include some very practical and important aspects of the independence question. Politically, the pro-Yes campaign has tried to walk a tightrope during the campaign, arguing for the dramatic change of independence but also stressing continuity (Scotland will enter into a currency union with the UK, European Union membership is unaffected, the Queen stays as Head of State, for example). The tactic of the Better Together campaign in the past few days seems to have been to push the Yes campaign off this tightrope.

In theory, it makes sound tactical political sense to try to highlight the weaknesses of your opponent’s position. But the evidence we have suggests that this approach is not working, thus far at least. While methodological issues make it difficult to judge with certainty from the polling data published since the no campaign’s aggressive pushback, respected psephologist Professor John Curtice has argued that, on the available evidence, this more aggressive strategy has “backfired”.

And maybe political imagination offers one explanation for this. The no campaign has long been known (allegedly because of an internal nickname) as Project Fear. In contrast, the yes campaign has licence to be unrelentingly positive about a new Scottish future, painted in broad and non-alienating strokes. The national project has broken the log-jam of the positive political imagination, allowing people, rightly or wrongly, to conceptualise the future in a different way.

While a positive political imagination is clearly a good thing, two important observations follow from this. The first is that, while nationalism might promote optimistic visions of the future, it is still not necessarily a good thing. The classic argument against nationalism is that it is a distraction from other political projects and visions of society, as it is based on exclusion rather than solidarity. The second point is perhaps more directly significant to the debate around independence in Scotland. Why is it that the yes campaign has a monopoly on optimism and the future? Part of the problem is that the no campaign has completely failed to articulate a positive vision of what the United Kingdom might look like in the future, why the British project is worth continuing with, and how the union might evolve to meet the concerns of those who now have doubts about it. In other words, they have had a failure of political imagination, and seem able to offer only arguments about why an independent Scotland would be a bad thing. If Scotland does vote for independence – and it should be said that the odds remain that it will not – this might end up being one of the major reasons why.

Staging the constitution: LSE research dialogue, 21st November 2013

Last week, I spoke at the Media and Communications Department research dialogue on the subject of Image. I was a last-minute addition to the programme, so decided to take the opportunity to flesh out an idea I had been pondering for a while. I was very struck a few months ago when it occurred to me that London's theatres simultaneously contained two plays that offered a take on how Britain is governed, and in particular how our institutions cope with change and crisis. At the National Theatre, This House dealt with the tumultuous politics of the mid-to-late 1970s, and the struggle between the Labour and Conservative Whips' offices as James Callaghan's majority dwindled, then vanished. On the other side of the river in the West End, Helen Mirren was reprising the role she won an Oscar for in The Queen, this time in The Audience, a play which focused on the weekly (and highly confidential) meetings between the Monarch and Premier in Buckingham Palace.

The argument in the paper - which I outline in more depth below - is that both plays reflect classic thinking and questions on the British constitutional settlement. The Audience, though, offers a more Whiggish reading of the system, strongly echoing ideas espoused by the Victorian constitutionalist Walter Bagehot about the role of the dignified elements of the constitution. In contrast, This House is more ambiguous in its message, but engages with the debate - most famously articulated by Edmund Burke in 1790 - between government based on human nature and government based on human rationality. While the play text articulates arguments for both positions, my reading is that it ultimately highlights the weaknesses of government based on human nature, and thus offers a space for opposing the Victorian constitution fetishised in The Audience.

You can listen to a podcast of my talk below, or alternatively watch the video, which has audio and slides. A PDF of the slides is also available here.

The first thing to say is that I do not think it is a coincidence that these plays have been so successful, both critically and in terms of drawing an audience, at this moment in time. If one thinks of the Scottish independence debate, the potential EU-exit referendum, the failure of the electoral system to create governing majorities, and the fracturing of the party system most evident in the rise of UKIP, it quickly becomes clear that the British political system is in a state of extreme flux. Constitutional scholar Anthony King has recently gone as far as to describe the British constitution in its current state as "a mess", and it is hard to disagree with him. Couple this with political institutions' inability to cope with the financial crisis, and it is unsurprising that people are looking back to the economic and political dislocation of the 1970s with interest.

So what do these plays attempt to tell us or ask about our political institutions? The first thing to note is that they share a trick: they take us to places that we are not normally permitted to enter, either the party Whips' office or the audience between Queen and Prime Minister. This draws on a long-established idea that the British constitution has its secret elements. In The English Constitution, Bagehot talks a lot about the secrets and mystery of the constitution, while more recently the scholar Peter Hennessy wrote a book on what he termed the “hidden wiring” of the British system.

But our role as the audience is slightly different in the two plays. In the original run of This House in the Cottesloe Theatre at the NT, the whole auditorium was rebuilt as a replica House of Commons. Audience members were sitting on the green benches and even interacting with the cast. As such, they were complicit in the processes ongoing in the play. In contrast, in The Audience, the audience is positioned much more as an intruder, and possibly even an unwelcome one, a point made clear when the young Princess Elizabeth (who appears in spectre-like fashion at various points during the play to interact with her older self) appears to look towards the audience and then recoils in fear of being seen. This difference would suggest that the plays have quite different attitudes to hierarchy and social ordering.

Perhaps the clearest statement of constitutional doctrine, though, is found in the conclusion of The Audience, in a monologue delivered by Elizabeth.

“No matter how old-fashioned, expensive or unjustifiable we are, we will still be preferable to an elected president meddling in what they [Prime Ministers] do. Which is why they always dive in to rescue us every time we make a mess of things. If you want to know how it is that the monarchy in this country has survived as long as it has – don’t look to its monarchs, look to its Prime Ministers” (Morgan, 2013: 88).

This directly echoes Bagehot's claim that the purpose of monarchy and other ceremonial aspects of the constitution is to act as a disguise for the real business of politics and, as such, it serves a useful function for the political class, who have a vested interest in preserving it. Thus the way through crisis presented in The Audience is essentially conservative: it relies on service, order and long-established precedents.

In contrast, This House offers a far more ambiguous reading of the constitutional settlement. It enters into a debate that has been going on for a very long time, perhaps most famously articulated by Edmund Burke, who argued that constitutions must be based “not on human reason, but on human nature” (1790). At its heart, this debate comes down to the question of whether constitutions can be designed (in other words, be a product of reason) or whether they should be arrived at through shared memory, experience and values (and thus be the product of human nature).

This House acknowledges the Burkean tradition, with it being noted that the origins of various practices - such as pairing in the House of Commons - are not really understood. The problem, though, with a system of government based on human nature is that human beings are very frail, a point illustrated by the gradual wasting away of Callaghan's slim majority between 1976 and 1979. The system is so reliant on its human parts that this sets off a form of contagious rot within the whole body politic, reflected in various metaphors about the Thames being diseased and the (historically accurate) breakdown of Parliament's clock tower containing Big Ben in 1976.

This House asks us to empathise with MPs. In the post-expenses-scandal world, this certainly seems like quite an unusual thing to do. But far more importantly, This House seems to question a fundamental idea embedded in British constitutional thinking - namely, whether shared values and established practices are, by themselves, enough to get through any period of crisis. As such, it is rather different to the far more conservative The Audience, and certainly a play for our times, as much as a play about an important period of political history.

Some further reactions to George Lakoff at the LSE

I liveblogged Professor Lakoff's discussion at the LSE yesterday. There can be no doubting the importance of his body of work, and the huge influence it has had on politics generally and American politics in particular. Certainly, the study of metaphors and their seeming power poses a huge challenge to more rational perspectives on political life and debate, and I mean that in both the Anthony Downs and the Jurgen Habermas sense of the term rational.

Indeed, for me Professor Lakoff's view of emotion in politics was perhaps the most striking idea he offered last night, in that it amounted to almost a post-revisionist perspective on the relationship between rationality and emotion. As you can see from the liveblog, Lakoff was highly critical of Enlightenment views of rationality. This is not a wholly original perspective. Many scholars focusing on deliberation (such as John Dryzek, for example) have argued that an overly prescriptive definition of "good" deliberation, which excludes emotions such as anger and humour, is not very helpful. But where Lakoff took this a step further was in drawing on research from the field of neuroscience, and in particular in arguing that, because of the way the human brain is wired, the distinction between rationality and emotion is false. Put another way: if you take away people's emotion, they do not become wholly rational. In reality, rationality and emotion are wholly symbiotic. This is a very challenging insight for political scientists used to arguing about the relative merits of rational and emotional debate.

I was left with more questions on the relationship between metaphor and ideology. Perhaps Professor Lakoff's most famous idea is derived from two models of the family and how they relate to political worldviews. There is the nurturing family, where the assumption is that parents are equals and seek to bring out the best traits in their offspring, whom they assume to be inherently good. This view is equated with progressive and liberal thought. Alternatively, there is the family model based on the strong and domineering father-figure, who commands his children, assuming them to be unruly and misguided. Only if they follow his instructions can they be reformed. If they do not, they are guilty of a moral failing and the family's moral responsibility ceases. This metaphor is associated with a conservative political worldview.

But there is a great tension in these metaphors and their political ramifications, I felt. On the one hand, Professor Lakoff was keen to stress the permanence and geographically unbounded spread of metaphors (the examples given in the lecture were the links between increase and up, and between affection and warmth). The reason is that metaphors are grounded in lived experience, constantly creating and solidifying neural networks. In contrast, though, the ideological consequences of the family metaphors are clearly grounded in the American experience of the past thirty years or so. These metaphors become far more problematic if we consider different strands of conservative thought. How, for instance, would we think of Bismarck? A stern father-figure, certainly, but also the founder of the modern welfare state model. Harold Macmillan presents another interesting challenge, as his ideology was the very model of conservative paternalism, yet bears no relation to the harshness of the contemporary US right. Even Richard Nixon, who might be regarded as the founder of the modern US conservative movement, is a problematic figure. His administration fits the model well in some ways, but it also attempted a major expansion of healthcare.

This leads to a broader question about the family metaphor and ideology: what is in the service of what? Put another way, does the metaphor shape the ideology, or does the ideology employ the metaphor, or are both these processes occurring at once?

The two David Camerons

It has taken a long time to get here, but at last David Cameron has delivered his big Europe speech. Judging by the generally broad grins of Tory Eurosceptics being wheeled out on rolling news channels, he at least seems to have been successful in appeasing elements of his party. Whether Cameron's strategy is ultimately successful though - and how it will influence his page in the history books - is an entirely different matter.

It has now become very apparent that there are two very distinct David Camerons. What is interesting about today's events is that both of them were prominently on display. The first is an idealist, a man who sees himself as the visionary leader of a one nation party rooted in the liberal-conservative centre of British politics. This version of Cameron thrives on big-gesture politics, and was most evident during the 2005 leadership campaign, and then again when making the coalition offer to the Liberal Democrats in 2010. In contrast, the second David Cameron is more instinctively conservative, risk-averse and focused on relatively short-term electoral and partisan calculations. This version of David Cameron has perhaps been most evident in his dealings with his own party.

That both David Camerons played a role today is evident in the perceptive comment made in The Guardian's liveblog of the speech that, while this was probably the most Eurosceptic speech ever made by a British Prime Minister, it was also probably the most pro-European speech made by David Cameron. And while short-term electoral and partisan calculations are clearly involved in what Cameron has argued, there is also a more idealistic idea being articulated – namely the desire to win an in-out referendum (after a successful renegotiation process, obviously), in the process laying to rest the most virulent strand of Tory Euroscepticism that has dogged Conservative leaders for a quarter of a century, and settling the European question for at least a generation.

What is already very clear though is that this is a massive gamble for Cameron and for Britain. European leaders and foreign ministers are queuing up to say that what Cameron suggested today is unacceptable. One interesting insight into the potential pitfalls faced by Cameron now is found in David Marquand's Britain Since 1918, an excellent history of the British constitution that I have just finished reading. Marquand's arguments are instructive on two levels as to how this situation might develop.

First, Marquand notes that British diplomacy has traditionally failed in Europe because it has not appreciated the weakness of its position. This goes back as far as Britain's refusal to join the original European Coal and Steel Community, followed by the subsequent attempts to join the EEC under Macmillan. The error made here was to attempt negotiations from the ground up, with Britain's equal status assumed. This neglected the fact that the other members of the EEC had already been through an extended process of negotiation while Britain stood aloof. It is no coincidence, Marquand argues, that the British application was only successful when Heath took a new approach - broadly accepting the rules of the club as fixed in order to win admission.

But Marquand also offers a possible course of action for Cameron. After all, we have been here before. Prior to the two elections of 1974, Harold Wilson's Labour Party promised a renegotiation of Britain's membership of the EEC if elected, followed by a referendum on the outcome. Much like Cameron, Wilson claimed to be pro-European, and took this stance to appease the ideologues in his own party. In practice, though, Wilson's renegotiation amended tiny details of Britain's relationship with the EEC (tariff exceptions for the import of New Zealand butter and suchlike), yet was hailed as a massive coup by Wilson. Whether Cameron could pull off a similar trick is open to question, but this might be one possible course of action open to the Conservatives after a 2015 election.

But this is a dangerous game of political brinkmanship. And indeed, this may prove to be the supreme irony of Cameron's premiership. Writing about recent history, Marquand reminds us just what a constitutionally radical government Labour offered, especially between 1997 and 2001, when devolution occurred and House of Lords reform took place. This constitutional reform was far from perfect; indeed, in some ways it was downright flawed - the West Lothian question rumbles on, unaddressed, and House of Lords reform remains a half-finished work in progress. Yet, in broad terms, Labour undoubtedly achieved what it set out to do.

Labour were followed in office by Cameron, the self-confessed constitutional conservative. Yet, thanks to one referendum, he might be the unionist who oversees the dissolution of the union; and thanks to another, he might be the premier who, while doubtless a sceptic, claims to be in favour of Britain's membership of the European Union, yet leads Britain out of the EU. It is one thing to achieve constitutional goals while leaving some mess behind, as Labour did. It is quite another to make a mark on the history of the British constitution wholly through unintended consequences.

Imagining the internet

My liveblog of my colleague Robin Mansell's lecture on her new book Imagining the Internet, published by OUP.
