Tuesday, 24 March 2015

The Sophomore Surge

I'm a regular reader of Peter Brent (@mumbletwits on Twitter) and his blog posts in The Australian newspaper. He talks regularly about politics and, in particular, psephology, which is the study of elections and trends in voting. One aspect of Mumble's writing I particularly like is that he remains, in his writing, non-partisan, something more opinion writers and journalists could learn to do.  Maybe it's just that all politicians give him the shits.

Sophomore Surge

One particular idea that Brent discusses regularly in analysing election results is the Sophomore Surge.  In summary, the Sophomore Surge encapsulates the idea that a member newly elected at one election gains an advantage at the next, because in the meantime they build up a personal following: people who vote for them not because of their party, but because of the relationship formed with the individual voters they interact and work with.  This tends to be a one-off effect for any parliamentarian.  Let's face it, if you haven't warmed to your local member after the first three years, you're unlikely to do so later.  There's a bit more to it than that, but you can read more about it from Brent directly here.

Not all psephologists agree that this concept has any significant bearing on election outcomes; one particularly ardent critic is Possum Commitatus (@pollytics on Twitter).  His opinion generally runs along the lines below.
Figure 1: Tweet by @pollytics
Now, while I can't answer his last question (I suspect it may actually be rhetorical), I do have the capacity to use statistics above a Year 10 level to analyse election results, and determine whether, statistically speaking, the Sophomore Surge has a significant effect on election outcomes for those candidates.  As a maths teacher, I hope I can also explain it well for the layperson & laypossums (laypossa?).

Don't be scared by the maths

The maths is actually at a Year 12 level, and I'll present the whole argument below logically, so anyone, regardless of ability with a pencil, should be able to follow along.  The actual tricky calculations this task requires can all be done using online calculators, so the main skill you need to follow this is logic.  If you're still reading at this point, you'll be fine with the maths.  If you're starting to think tl;dr, maybe this site would be more interesting.

Setting up the problem - the data

The data I have chosen to use in the analysis below is from the last Federal Election, held in September 2013.  Why?  It's fairly recent, and the data is easy to get.  I did start looking at the Victorian State election, but there have been a significant number of redistributions since the previous election, making it difficult to really identify any true Sophomore candidates.  By contrast, very few changes in electoral boundaries between the 2010 and 2013 federal elections had an impact on our Sophomore candidates.  There are also more seats federally, so a larger data set.  (As it turns out, this data set is still not large enough, but it will do for the purpose of explaining how this process works - other, more serious, psephologists may wish to attack this problem with bigger data sets if they wish.)

Setting up the problem - definitions

We also need to define which candidates we're interested in, so we're all on the same page.
We're interested only in politicians facing their second consecutive election: those elected for the first time in 2010, who faced re-election in 2013 for the first time.  It doesn't matter whether they won in 2013 or not.  These are the Sophomore candidates for 2013.
What we want to test is whether their new incumbency gives them a better result than the general election results across the whole country.

The way I'm going to measure this is to compare the swing each Sophomore candidate had at the 2013 election to the overall national swing.  Now, in 2013, the national swing was 3.65% towards the Coalition, so for a Sophomore candidate from a Coalition Party (Libs, Nats, LNP, CLP), a swing of greater than 3.65% would count as doing better, while a swing of 3.65% or less would result in not doing better.  For an ALP Sophomore candidate, it would be the opposite (ie swing of less than 3.65% towards the Coalition is good, 3.65% or more towards the Coalition is bad).

Binomial Statistics - simple as flipping a coin

Measuring the outcome in terms of only two possible results (better than average or not better than average) is an important part of the problem.  It allows us to treat this as a binomial problem (bi meaning two). A bit like flipping a coin - the result is either heads or tails. 

If you flip a coin 20 times, the most likely result will be 10 heads and 10 tails, but let's be honest, we wouldn't be surprised if it was 11 to 9, or even 12 to 8.  What about 13 to 7? 15 to 5?  At what point do we start getting suspicious that the coin is rigged?

Fortunately, binomial statistics allows us to calculate exactly how likely a specific outcome is, and there are plenty of online calculators that do the hard work for us.  We just have to know how to put the data in.  All you need is three values:
  • The number of trials, n (how many coin tosses)
  • The probability of success as a fraction of 1, p (in the case of a fair coin, 1/2 or 0.5)
  • The number of successes, k (how many heads)
Using the calculator linked above, you can see that the chance of getting exactly ten heads in twenty tosses (written as P(X=10)) is about 17.6% (multiply the Binomial Probability decimal value of 0.1762... from the table below by 100 to turn it into a percentage).  We'll look at the Cumulative Probabilities and what they mean a little later.
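If you'd rather check that number yourself than trust a website, the same calculation is only a few lines of Python (a sketch using just the standard library; the function name is mine):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Chance of exactly ten heads in twenty fair coin tosses, P(X=10)
print(round(binom_pmf(10, 20, 0.5), 4))  # → 0.1762
```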

Here's how it looks on the website
Figure 2: Probability of flipping 20 coins and getting 10 heads

If you graph the probability of each possible outcome from zero heads to twenty heads out of twenty tosses, you get the well-known bell curve below (Figure 3, calculated from a different online calculator).  You can see the most likely outcome is 10 heads, although 9 or 11 would not be unusual.

Figure 3: Binomial distribution for flipping a coin 20 times
Using the same online calculator, we can calculate the probability of getting 12 heads in 20 coin flips.
By looking at the Binomial Probability P(X=12), we can see that the probability of getting 12 heads in 20 flips is approximately 12%.

Testing a Hypothesis

The normal way mathematicians do this is to set up a hypothesis that can be tested.  In this case, we could define the hypothesis H0 as The Coin is Fair, and the alternative hypothesis H1 as The Coin is Not Fair.  We then look at the data to determine the probability of getting our result, or something even more unusual, assuming that H0 is in fact true.  If that probability is less than some arbitrary value, often 5%, then we say it is unlikely that our hypothesis H0 is true, and we accept H1 instead.

Testing our coin hypothesis, the probability of getting 12 heads or more is identified in the above table by the symbol P(X>=12), which has a value of approximately 25%.  What this means is that we could expect to get 12 or more heads in 20 coin flips approximately 25% of the time.  This is reasonably likely, or at least higher than our 5% standard, and so we retain H0, that is, that the coin is fair.

(If you play around with the online calculator, you can easily see that in order to get the Cumulative Probability down below 5%, you would need to get 15 heads or more in 20 flips.)

Counter-intuitively, these results don't scale.  What this means is that while 12 heads out of 20 is consistent with a fair coin, 120 heads out of 200 isn't.  In fact, the probability of getting 120 or more heads out of 200 flips is less than 0.3%, well below our standard of 5%.
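You can see the scaling effect for yourself with a brute-force sketch of the cumulative (tail) probability, P(X >= k), again using only the Python standard library:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k): the chance of k or more successes in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binom_tail(12, 20))    # ≈ 0.25: 12+ heads in 20 flips is unremarkable
print(binom_tail(120, 200))  # ≈ 0.003: 120+ heads in 200 flips is suspicious
```

Same 60/40 split of heads to tails in both cases, but ten times the flips takes the result from "shrug" to "check that coin".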

The Actual Data

In 2013 there were 26 candidates who won in 2010 as new candidates and ran for re-election.  Of these 26, I am going to exclude six as not being simple cases.  These six are:
  • Warren Entsch in Leichhardt, Qld
  • Ross Vasta in Bonner, Qld
  • Teresa Gambaro in Brisbane, Qld
  • Rob Mitchell in McEwen, Vic
  • Louise Markus in Macquarie, NSW
  • Laurie Ferguson in Werriwa, NSW
The reason for excluding the first three is that they were all members prior to 2007, so by and large the conditions for a Sophomore Surge won't necessarily apply, ie, the electorate already knows them.  Rob Mitchell had never been the local member, but had been the candidate several times, making it difficult to determine whether the Sophomore effect would be important or not.  The last two were affected by redistributions between 2010 and 2013, so I excluded them as well.  They may get half a Sophomore Surge. Best to leave them out.

This leaves us with 20 ridgy-didge Sophomore candidates to play around with. (I've lost the original table, but can redo if anyone is particularly interested)
Of the 20 candidates, 12 had a swing that was better than the national average. If there is no Sophomore Surge, then we would expect a 50% chance that any particular candidate would receive an above-average swing, and a 50% chance they wouldn't, and the most likely outcome would be 10 with a better-than-average result and 10 with a worse one.  But is 12 with an above-average swing really that unusual or unexpected?

Funnily enough(!), the maths here works out to be exactly the same as in our coin example above.  Of the 20 Sophomore candidates (n=20), 12 gained swings better than the national average (k=12), where it is expected that there is a 50% chance (p=0.5) of an above-average swing.  As in the example above, let's set our hypothesis, H0, to be There is no Sophomore Effect, with the alternative hypothesis, H1, The Sophomore Effect Holds.

Assuming that H0 is true (there is no Sophomore effect), there is a 25% probability that 12 or more of our 20 Sophomore candidates would receive better than average swings.  As this is a reasonably high probability (higher than 5%), our data indicates that H0, There is no Sophomore Surge, is an acceptable explanation for the result given.

More data needed!

As we saw above, though, more data can easily give a better picture (and a sample of 20 is really quite small).  The technique used here can easily be applied to data aggregated across a large number of elections.  The process to do so is remarkably easy.
  • For n, add the total number of Sophomore Candidates across a set of elections
  • For k, add the total number of Sophomore Candidates who achieved a better than average swing in the relevant election
  • For p, always use 0.5 (a probability, not a percentage)
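As a sketch of that recipe in Python (the function name is mine, and n and k are whatever your aggregated totals come to), the whole test boils down to one tail probability:

```python
from math import comb

def surge_test(n, k, p=0.5):
    """P(X >= k): the chance that k or more of n Sophomore candidates
    beat the national swing, if new incumbency confers no advantage."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The 2013 figures from above: 12 of 20 Sophomores beat the national swing
print(round(surge_test(20, 12), 3))  # → 0.252, well above the 5% threshold
```

Feed in aggregated totals from several elections and the same function tells you whether the combined result clears the 5% bar.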

Hasn't anyone already thought of doing this?

Well, yes.  A quick Google search on the term Sophomore Surge will provide links to a number of research papers like this one, using much more sophisticated maths than I have, that tend to support the existence and significance of the Sophomore Surge.

PS This blog was mostly written before the Qld state election, and was intended to be published before then.  I got busy :)  With the NSW state election coming up this weekend, it seemed appropriate to publish it now.  Perhaps after this weekend, I'll gather data from these two state elections and beef up the stats.

Sunday, 1 February 2015

What chance for an LNP government in Qld?

As of the end of counting on election night, there were still about 600,000 pre-poll votes to count, out of a total of 2.7 million (no reference - just something I heard a couple of times).  That's more than 22% still to go, plus any postal votes not yet in (as of 8pm Sunday night, the ABC states that 75.1% are counted).  Is that significant?  Could they make a difference to the overall outcome?

And while the result has been awful for the LNP relative to their last outing, the actual voting so far is pretty even, effectively a 2PP of 50%.  In theory, this should mean that either party should be in with a chance to form government, given the support of the independents.  In reality, all the independents took a No Asset Sales position into the election, and so it seems unlikely that they would support the LNP without some serious policy backflips (which, from politicians, we couldn't rule out).

Assumptions

All that aside, I'm a little curious as to what chance the LNP still have of forming government in their own right.  So let's have a bit of a look at it mathematically.  First, some assumptions.
  • No independent supports the LNP, so they need 45 seats in their own right.
  • The LNP keeps any seat in which they currently lead (might look at this later)
  • To win each extra seat in which the LNP is currently just behind, LNP needs 50.01% of the total vote (votes already counted plus pre poll votes)
  • Each electorate is treated as a 2 candidate competition, so the voter's choice is simply LNP or ALP.  This is effectively the case in most seats as the 2PP is LNP v ALP in almost all cases.
  • There are still 6,742 (600,000 divided by 89 seats) votes to count in each electorate, which could make a difference to the outcome. This is probably the weakest assumption, as the number will vary from seat to seat, but it will do for a first attempt.
Again, as of 8pm tonight, the ABC election website gives the LNP 39 seats, and the ALP 43.  So this means that the LNP need another 6 seats in their own right in order to keep government.

Methodology

What I intend to do is to look at the 6 most marginal seats currently trending to the ALP, and determine the likelihood of the remaining 6,742 flowing adequately to the LNP to allow them to win each of those seats, given the assumptions above.

The Maths

I'm going to use a little bit of binomial statistics to determine the likelihood of the LNP getting enough votes in the prepolls to make a difference.  I'll explain how this works in more detail in another post, but binomial statistics is really just high school stats.  Nothing too fancy.  In fact, we can use an online calculator to do the grunt work for us.  We just need to know three values first.

  • The first is n, the number of trials in each test.  In this case, we'll make n = 6,742, the number of extra votes that we're going to count.
  • The second is p, the likelihood of success in each trial, or in this case, the likelihood of a vote for the LNP.  In our assumption we stated that we need a success of 50.01%, or in decimal, p = 0.5001
  • The third is x, the number of successes needed for an LNP win.  This varies from seat to seat.  I will calculate it as 3,371 (half of the remaining 6,742 votes) plus half the Vote Deficit, because every extra vote the LNP gains is also a vote the ALP misses out on.
Plugging these three numbers into the online calculator mentioned above will give us a calculation for the likelihood of the LNP winning the seat from the current situation.
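For anyone who would rather script it than use the online calculator, here's a sketch in Python (the function and its log-space workings are mine, to avoid numerical underflow at n = 6,742; the win threshold is just over half the remaining votes plus half the deficit, since each vote the LNP gains is one the ALP doesn't get):

```python
from math import lgamma, log, exp

def seat_win_prob(remaining, deficit, p=0.5001):
    """Chance the trailing LNP candidate overtakes the ALP, given
    `remaining` uncounted votes and a current `deficit` in votes."""
    need = (remaining + deficit) // 2 + 1  # votes required for a strict majority

    def log_pmf(k):  # log of the binomial probability P(X = k)
        return (lgamma(remaining + 1) - lgamma(k + 1) - lgamma(remaining - k + 1)
                + k * log(p) + (remaining - k) * log(1 - p))

    return sum(exp(log_pmf(k)) for k in range(need, remaining + 1))

# Ferny Grove: 6,742 votes left to count, LNP 578 votes behind
print(seat_win_prob(6742, 578))  # a vanishingly small number, < 0.000001
```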

The Seats

It shouldn't come as a surprise that there are actually quite a few seats currently given to the ALP that are reasonably marginal.  I've also calculated the Vote Deficit, ie, how many extra votes the LNP candidate would need on a 2PP basis to overtake the ALP candidate. The eight most marginal seats are, in order:

Seat             Margin (to ALP)   Vote Deficit (from ABC election website)
Ferny Grove      1.3%              578
Springwood       1.6%              785
Bundaberg        1.9%              816
Pumicestone      2.0%              1063
Mount Coot-tha   2.9%              1149
Mundingburra     2.9%              1170
Maryborough      3.0%              1200
Barron River     3.2%              1757

Throwing the n, p & x values for the most marginal seat at the online calculator gives:

Seat             x (3,371 + half Vote Deficit)   Cumulative Probability P(X >= x)
Ferny Grove      3660                            <0.000001, or <0.0001%

Obviously, for the other seats there is even less chance than this.  So unless there is some mechanism that makes pre-poll voters more likely to vote LNP than other voters, it's pretty clear the LNP is unlikely to gain any more seats.

Using the same mathematics, you can show that Whitsundays, where the LNP lead is only 84 votes, is still reasonably safe for the LNP: with the votes left to count, there is only around a 15% chance that it could swing to the ALP.

Monday, 7 July 2014

What the research says - Plain Packaging

Below is a short list of the current research (as of 2014) on the success or otherwise of Plain Packaging of tobacco in Australia.  I do not intend in this blog to make any sort of value judgement or analysis of the articles below, other than to include only reports that are publicly available for all to read.  The data used in each report is not generally publicly available, but the methods by which that data was obtained are described in each case.

The order in which they are listed is simply the order in which I came across the reports.  In some cases, reports were found through references from previous reports.  If you come across other reports that you feel should validly be included here, feel free to let me know in the comments below.

Ashok Kaul and Michael Wolf (2014), The (Possible) Effect of Plain Packaging on the Smoking Prevalence of Minors in Australia: A Trend Analysis

Ashok Kaul and Michael Wolf (2014), The (Possible) Effect of Plain Packaging on Smoking Prevalence in Australia: A Trend Analysis

Laverty, Watt, Arnott & Hopkinson (2014), Standardised packaging and tobacco-industry-funded research

Chantler (2014), Plain Packaging Review: Independent Review into standardised packaging of tobacco

Jane M Young, Ingrid Stacey, Timothy A Dobbins, Sally Dunlop, Anita L Dessaix and David C Currow (2014), Association between tobacco plain packaging and Quitline calls: a population-based, interrupted time-series analysis

Tuesday, 17 June 2014

Khaki is the New Black

There's been a lot of chatter on Twitter and the net in the past week regarding the effectiveness (or not) of Plain Packaging for cigarettes in Australia.  As best as I can tell, it started with the publication of a report in The Australian by Christian Kerr purporting to show that not only was plain packaging ineffective in reducing smoking rates, they had in fact gone up!  The report claims an increase of 0.3% in the number of cigarettes sold in 2013 compared to 2012 (when PP started).  Not much, and certainly less than population growth, but this is against a 15.6% slide in quantity sold over the previous four years.

Whoops.  Is it possible that plain packaging has actually reversed the long term trend in reduced smoking rates?

Of course, one of the main sources for the article is an unreleased report from InfoView, compiled on behalf of the tobacco industry, which makes it a little hard to verify.  There are other (unreferenced) sources listed as well, although most of them could be said to have an interest in continued healthy sales of tobacco.  So what are we to make of all this?

The report was jumped on straight away by "Australia's Leading Economist & Speaker" (his own words), Stephen Koukoulas, in a blog entry in which he explains that there is no possible way cigarette sales could have gone up since plain packaging was introduced.  To back this up, he wasn't using dodgy, unpublished statistics from the tobacco industry. No.  Nothing short of the ABS (Australian Bureau of Statistics) for him.  Unimpeachable.  And he's right on that score.  The ABS cannot be said to be carrying a torch for anyone - impartial and independent.

Here's his helpful tweet referencing the document (he doesn't reference it in his blog), and here's a link to the relevant table from the ABS.  (The actual page linked to is here)
It's actually row 12 of the Excel spreadsheet you want.  Click the link in row 12 and it takes you to a set of data (in Column C) that measures the 'volume' of tobacco sales in Australia quarterly, from September 1959 to the present.  You can see that it rises steadily to a peak in the early 80s, and then starts to drop off.

Seems an open and shut case.  Tobacco volumes are down!  And Stephen Koukoulas self-righteously jeers at The Australian for being so transparent in just making up shit to make the Gillard government look bad (it might be worth pointing out here that The Kouk was, at one stage, economic advisor to PM Gillard).

At some point, Judith Sloan from the Australian weighs in, TheKouk then has a go at her, Jack the Insider has his say, and even the dearly departed Dr Craig Emerson, PhD (Economics) has to put his two cents in (on twitter, 16 June 2014)



Except the ABS data doesn't actually show quantities of tobacco sold.  It uses a measure called Chain Volume Measures (the ABS explains it here - how's your maths?).

This is where the analysis gets interesting.  

Chain Volume attempts to measure the volume of a good bought by Australians each year (roughly, the quantity sold valued at a constant set of prices), taking out distorting factors like inflation and changes in price from year to year.  For most products, where the prices of all the individual items in a category rise or fall at roughly the same rate, Chain Volume does a pretty good job of measuring the 'volume' of a product sold in a quarter.  The ABS article shows a series of calculations, and I have recreated Table 5 in Excel*, using the formulae in the article linked above.

It clearly shows the Chain Volume Estimate increasing as time goes on, as both quantity sold and price increase.  This is a pretty normal situation for most goods in a growing economy.  Note in the above table that there is a clear preference for Apples over Oranges, probably because they're cheaper.

But cigarettes are different.  Brand is everything!  People tend to buy not so much on price, but on 'flavour' and other unquantifiable properties.  Cheap cigarettes sell in fairly low quantities (ok, I haven't got a reference for this, but hang in there).  Let's be honest, this argument was a big selling point for Plain Packaging in the first place: take away the brand and people won't want to keep smoking.  Apparently cigarettes in Plain Packaging taste worse anyway.

But what if instead of giving up or reducing, they switch to cheaper ciggies?  I mean, if there's no Brand allegiance, and they're going to taste worse anyway, why not save a few quid?  And with a price saving of up to $7 a pack for the cheapies, what's to lose?

So I fiddled with the Excel table above and plugged in some values** modelling two cigarette brands, where the premium brand had the most sales in Period 0, despite quite a large price difference.  Suddenly, in Period 2, this switches over, with the bulk of purchases now coming from the cheaper brand.  I have also built in first a decrease in total quantity sold from Period 0 to Period 1, and then a slight overall increase, in line with the data from Christian Kerr and Jack the Insider.  It's interesting to see the results. (I've also extended the model by an extra Period.)

The model shows, as per the ABS data, a drop in the Chain Volume Estimate (in yellow) across all three years, while at the same time an increase in actual consumption (in green).
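The same effect can be sketched in a few lines of Python with made-up numbers (these are not the ABS's figures or the ones in my spreadsheet; they're chosen purely to show the mechanism):

```python
# Prices and quantities for two hypothetical cigarette brands.
periods = [
    # (premium_price, premium_qty, cheap_price, cheap_qty)
    (20.0, 100, 10.0, 20),   # Period 0: brand loyalty, premium dominates
    (21.0, 30, 10.5, 95),    # Period 1: smokers switch to the cheap brand
]

def laspeyres_volume(prev, curr):
    """Quantity index between two periods: current quantities valued at the
    previous period's prices (the building block that gets chained)."""
    p_prem, q_prem0, p_cheap, q_cheap0 = prev
    _, q_prem1, _, q_cheap1 = curr
    return ((p_prem * q_prem1 + p_cheap * q_cheap1)
            / (p_prem * q_prem0 + p_cheap * q_cheap0))

vol_index = laspeyres_volume(periods[0], periods[1])
qty_change = (30 + 95) / (100 + 20)

print(f"volume index:   {vol_index:.2f}")   # ≈ 0.70, measured 'volume' falls
print(f"quantity ratio: {qty_change:.2f}")  # ≈ 1.04, sticks sold actually rise
```

Because the switch is towards the cheap brand, the fixed-price weighting drags the volume index down even though total quantity goes up.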

Now, I'm not suggesting that this is actually what has happened, merely showing that there are circumstances in which it can happen.  However, with various bits of data around the country showing increases in quantities sold, increases in the number of smokers, and a decrease in the Chain Volume measure, this model would go a long way towards explaining all the disparate and seemingly contradictory data surrounding Plain Packaging and smoking rates.

In Summary

  • Smoking rates have been dropping for a long time, primarily due to price signals and public health campaigns.
  • Plain Packaging was introduced in Dec 2012 in order to reduce smoking rates further.
  • There appears to be some evidence that since PP was introduced, smoking rates have not fallen at the same rate as previously, and may even have increased slightly, despite Australians spending less on tobacco.
It seems at this point in time, the best case scenario for PP is that it has had no effect on smoking rates.  We really need more time and data to tell for sure.






*Here's a screen shot showing the formulae I used, for anyone that wants to play around with it or verify my maths.


**The values have not come from any research on actual sales in Australia.  They were chosen to model a situation where the price profile of a group of products made a significant and unusual change, to show a weakness in the Chain Volume Measure model used by the ABS.  I believe that the way I have used the model reflects the behaviour of smokers following the introduction of Plain Packaging.

Sunday, 1 June 2014

Are you worried about Fukushima?

I was asked a question on twitter today by a rare breed - an articulate tweeter, interested in actual debate.

The question was in relation to alternative energy, after I had suggested nuclear power as a more environmentally friendly alternative to both fossil fuels and off-grid solar PV cells.  The question was:

You're concerned about battery leaks with solar yet ok with nuclear?

And my response was: absolutely! Let's think about the environmental risk posed by off-grid storage in terms of the Fukushima nuclear disaster. Obviously Fukushima was, and remains, an environmental catastrophe. There is no denying that the effects of the breakdown can be measured across the breadth of the Pacific Ocean, and radiation levels remain high in Fukushima itself.

But let's run a hypothetical - what if, instead of the nuclear power plants at Fukushima, we had a distributed network of off-grid PV cells? Off-grid, because the suggestion was to use batteries as storage to allow for base-load delivery. Yes, there are other ways to store power for base-load, but the question related to batteries, so bear with me.

So let's do some sums.

The Fukushima power plants were rated at a total of 6.7GW of power. Let's assume this was replaced with the equivalent amount of solar cells on houses, plus battery storage to provide a week's energy (for times when there is significant cloud cover and so on). Providing 6.7GW of power for a week requires 6.7x10^9 x 24 hours/day x 7 days, or about 1.1x10^12 Wh of energy - at a nominal 12V, roughly 94x10^9 Ah of battery storage.

In our distributed network, this could be provided by a number of 110Ah batteries in each residence, around 850 million batteries in total! The particular batteries I've used for this calculation each contain approximately 28kg of lead and 7L of acid (there's a little bit of guesstimation from me on this).

In total then, our earthquake and tsunami would have left about 24 million tonnes of waste lead and 6,000 million litres of acid potentially spread over the land and in the ocean. These, too, are catastrophic figures, and would represent a pretty severe environmental disaster in their own right.
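Here are those sums as a Python sketch (the 12V nominal battery voltage is my assumption; the 110Ah, 28kg and 7L figures are the battery specs mentioned above):

```python
# Back-of-envelope sums for replacing Fukushima's 6.7 GW with a week of
# battery-backed rooftop solar. 12 V nominal voltage is an assumption.
power_w = 6.7e9                        # total rated power, watts
hours = 24 * 7                         # one week of storage
volts = 12                             # assumed nominal battery voltage

energy_wh = power_w * hours            # ≈ 1.1e12 Wh of stored energy
capacity_ah = energy_wh / volts        # ≈ 9.4e10 Ah of battery capacity
batteries = capacity_ah / 110          # ≈ 850 million 110 Ah batteries
lead_tonnes = batteries * 28 / 1000    # ≈ 24 million tonnes of lead
acid_litres = batteries * 7            # ≈ 6 billion litres of acid
```

Change the voltage or battery-size assumptions and the totals shift, but the order of magnitude doesn't.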

Is it possible to figure out which one is worse? The radioactive waste, or the massive amounts of lead or acid? Possibly. Both would certainly leave long-lasting environmental impacts for years to come, and require a mammoth clean up effort to make the land liveable again. The analysis of which one is 'better' could take considerable time.

One point to take from this, though, is that no form of energy is really clean and safe. There are risks for any source of energy. The above is a catastrophic event, but imagine a distributed installation of hundreds of millions of lead-acid batteries, with the responsibility for maintenance shared by millions of people. There is bound to be some leakage, irresponsible disposal and environmental damage somewhere. By comparison, the nuclear power station is maintained by a team of highly trained people with government oversight. Which is better - a low likelihood of a huge catastrophe, or almost certain localised environmental degradation that goes unnoticed and unmeasured?

Rather than deciding 'Nuclear is bad' or 'Nuclear will save us', a more sober analysis and understanding of the risks of each type of energy would make for a more realistic debate. All energy sources have their risks, and all will have an impact on the environment. Which will have the least, which can be most effectively managed, and how it is managed, are much better questions to ask.

Saturday, 4 May 2013

Bliss is a $10k Mortgage

There's been a lot of chatter on Twitter recently about net government debt being quite small internationally, and at around 10% of GDP, that is indeed correct.

Judith Sloan is keen to point out that it's not just the quantity of debt that matters, but the quality.  That is, what that debt has bought, which is also correct (think credit card debt vs a mortgage).  I won't go into that debate here.

While the debt debate continues, a much more interesting meme has appeared: the comparison of government debt to an average householder's mortgage.  It is that comparison I would like to dig into a bit, as it seems to have gained more traction than either of the ideas above.

Here's a typical example of an infographic showing this (courtesy of the ACTU)
 
 
As far as analogies are concerned, it's got some real flaws.  Let's dig into it.
 

1. Net Debt vs Gross Debt

The most obvious thing here is that someone who has a $10,000 mortgage also owns a house. That is, their assets are going to be much larger than their debts, by a significant amount. The government debt is net debt; that is, it is not offset by assets. Perhaps a better analogy would be a fellow on $100k a year with a $10k credit card debt (but see below).
 
On top of that, the government isn't even spending within its income.  A further improvement of the analogy would be a fellow on $100k, with a $10k credit card debt, who spends more than he earns each month.  Even then, I'm not happy with this analogy, as the government is not a person.

2. Government Income

GDP is NOT the government's income.  It is the estimated total value of goods and services produced by the nation over a year.  The government's income is much less than that, something in the order of 24% of GDP.  A better comparison for our debt, then, is to measure it against the government's income: debt of 10% of GDP, against income of 24% of GDP, is about 42% of the government's income each year.


3. Household Income

There's a bit of a fudge here, too.  Unlike the government, the average household has to pay tax, which reduces the money available to pay back loans, cover living expenses and so on.  It's hard to determine exactly how much tax without knowing the mix of incomes that makes up the $100k, but a single person earning $100k would pay approximately $26,500 in income tax and Medicare levy, leaving $73,500.


A Nice Little Mortgage

Still, a mortgage equal to 42% of $73,500 (about $31,000) would be the envy of most people.  But is even this analogy reasonable?
 
The underlying assumption in the argument presented by the ACTU (and others) is that mortgages and government debt can actually be compared on a like-for-like basis.  How well does that assumption stack up?
 

An Average Mortgage

The typical new Aussie household mortgage these days can be anything up to 7 times the (pre-tax) income of a household, which is historically quite large and can be difficult to manage.  Let's consider a more typical 3.5 times (pre-tax) income.
 
By implying that a mortgage and government debt are comparable, those making the comparison are implying that a net government debt of 3.5 times the government's income of 24% of GDP is quite an acceptable level.
 
Putting this calculation in terms of debt as a % of GDP, we come to 84% of GDP, close to the net public debt of the UK government, and more than that of the US government!
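For anyone who wants to check the arithmetic, here's a sketch in Python (the tax brackets are the approximate 2012-13 resident scales plus the 1.5% Medicare levy; treat the figures as rough):

```python
# Take-home pay for a single person on $100k (approx. 2012-13 scales)
income = 100_000
tax = (0.19 * (37_000 - 18_200)       # 19c bracket
       + 0.325 * (80_000 - 37_000)    # 32.5c bracket
       + 0.37 * (income - 80_000))    # 37c bracket
medicare = 0.015 * income
after_tax = income - tax - medicare
print(round(after_tax))               # → 73553, roughly the $73,500 above

# Scaling a 3.5x-income mortgage to government terms
govt_income_share = 0.24              # government revenue as a share of GDP
print(round(3.5 * govt_income_share, 2))  # → 0.84, i.e. debt of 84% of GDP
```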
 
So, we know what the ACTU's thoughts are on this, happy to misinform for purely political reasons.
 
What about the thoughts of a senior government minister, who also happens to have a PhD in Economics?  Surely we could trust them to be more honest, yes?
 
 
No.


Saturday, 16 March 2013

Don't forget that power-hungry copper

A small debate on Twitter tonight prompted discussions on the relative power consumption of the ALP's FTTH NBN and the Liberals' proposed FTTN model.  My initial thought was that this was a doable comparison, but certainly not as simple as some make out.  The most common argument from the FTTH mob is that the local distribution cabinets use passive splitter technology (PON - passive optical network), and so don't need power, whereas the FTTN cabinets do.  Of course, with FTTH, you still need to convert the optical signal to an electrical signal at the customer's premises, which isn't needed under FTTN.

A big thanks to Tristan (@blondgecko), who posted this link (Baliga, 2011), an academic paper comparing the relative power usage of different broadband network topologies.


Hungry, Hungry Copper

As you could probably guess, copper is a hungry beast, and the FTTN network topology uses twice the power that FTTH uses.

To put figures on it, per user, FTTN uses a whole 14W of power compared to only 7W for FTTH.  That might not sound like much, but remember that these systems run 24 hours a day, 7 days a week.

Do the sums, and the total dollar value of the difference in electricity usage is about $12 per household.  Per year.  That certainly doesn't sound like much, but electricity prices have had a habit of increasing at a considerable rate in recent years.

A 10% increase in power prices each year over 25 years (the life of the fibre) gives a value of $737 per household, in 2013 dollars, for the extra power used by an FTTN system over that period.
Table 1: Annual per household power cost difference between FTTN and FTTH

Compared to a per-house rollout cost of approximately $4000, this is a significant extra cost, but this is also not a particularly well-refined model.  While power prices are currently increasing at a high rate, that's no reason to suspect they will continue doing so for the next 25 years.  Already we are seeing alternative power sources becoming more competitive with fossil fuels, so if we assume that after another ten years the average price increase in power is closer to the CPI, at 3%, the total cost difference reduces to a smidge under $500.
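Here's a sketch of those escalation sums in Python (the conventions - when the 10% growth stops, and deflating by CPI to express everything in 2013 dollars - are my reconstruction, but they land close to the figures quoted above):

```python
# Cumulative extra power cost of FTTN over FTTH, per household,
# expressed in 2013 dollars by deflating nominal growth by CPI.
base = 12.0                        # extra cost per household per year, 2013 $
growth, cpi, years = 0.10, 0.03, 25
g = (1 + growth) / (1 + cpi)       # real (inflation-adjusted) growth factor

# Scenario 1: prices rise 10% a year for the whole 25 years
total_10pc = sum(base * g**t for t in range(years))
print(round(total_10pc))           # → 737

# Scenario 2: 10% rises for roughly the first decade, CPI-only after that
total_taper = sum(base * g**min(t, 9) for t in range(years))
print(round(total_taper))          # just under $500 on this convention
```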

One further refinement is to remove the costs of power that the user pays rather than the NBN (remembering that we are comparing costs between the two opposing models here).  According to Baliga, at least 65% of the power costs are borne by the end user, which leaves 35% of the $500, or about $175, in power costs to NBNco over 25 years.

What does this mean in practical terms?  To get the best value for money for the taxpayer: if it is going to cost less than $175 to install fibre from the node to the premises, then it's a no-brainer.  If the copper is still in good nick and the fibre will cost more than $175 to install, stick with the copper and wear the cost of the extra electricity.

It also means that the government's (and Turnbull's) original FTTN plan, which was going to cost $4.7B, is still quite a bit cheaper than the new and improved FTTP NBN plan, even with power costs factored in.

Where to from here?

If nothing else, I hope this shows that any real economic comparison of the two models is quite complex, and simply yelling that one technology is somehow 'better' than another isn't going to change anybody's mind.