2011 on Brave New Climate

So the year 2011 draws to a close. What a tumultuous year it was, particularly for nuclear energy! For climate change, alas, the freight train just keeps gathering steam.

For 2012, I will expect the unexpected, but also hope to see better signs of progress towards the downfall of fossil fuels. But really, let’s be honest, that is a decadal rather than a yearly prospect.

Anyway, to the BNC year in review. Below I list some of the most read, most commented and most stimulating or controversial subjects of the past BNC year.

1. Fukushima nuclear crisis: This was the biggest story of the year for the blog. Read about the early diagnosis and explanation, ongoing reports, some technical speculation, an essay on what we can and can’t design for, preliminary and considered lessons learned, what the INES 7 rating means, and the need to avoid radiophobia with some common sense (and data). Another highlight is Ben Heard in his pre-decarbonisesa.com days.

2. Renewables in the context of effective CO2 abatement. Some useful analyses on CO2 avoidance cost with wind, climatologist James Hansen admonishes us to get real about how effective (or ineffective) green energy has been to date at displacing fossil fuels, an adventure into energy debates in wonderland, a look at geographical smoothing, an argument that an energy strategy without nuclear does not have history on its side, Geoff Russell deconstructs the situation for India and Switzerland, and I do so for Germany.

3. More depressing climate trends. Sea ice declines and emissions rise, the cost of climate extremes, complications and realities, a plea to clean up the climate ‘debate’, why the argument of ‘no recent warming’ is statistically invalid, and a graphical review of the grim numbers.

Read more »

Global Energy Prize and Breakthrough Institute

Russian President Dmitry Medvedev at the 2008 International Global Energy Prize award ceremony

The Christmas to New Year period is traditionally ‘hibernation mode’ for blogs, when page views and comment counts plummet (my hits have dropped about 70% compared to early December!).

I suppose this is a time when people find better things to do than sit in front of a computer screen (family time, good food, beach/snow [depending on hemisphere], travel, reading, new games and toys, whatever). So during this activity lull, it’s as good a time as any to announce a few little personal triumphs.

Within the last month or so I received two tokens of recognition for my work in the sustainable energy space. To explain, I reproduce below a short write-up done by the University of Adelaide’s media office. I’ve added a few relevant hyperlinks and cites, for further information.

———————–

International recognition for Environment Professor

The University of Adelaide’s Professor Barry Brook — an environmental scientist who holds strong pro-nuclear energy views — has received recognition from two prominent international bodies.

Professor Brook, who is Director of Climate Science at the University’s Environment Institute, has become the first Australian appointed to the international award committee of the $1.2 million Global Energy Prize.

Known as the “Nobel Prize of Energy”, this is the most prestigious international award granted for outstanding scientific achievements in the field of energy that have benefited the human race. From Wikipedia:

The Global Energy Prize is an independent award for outstanding scientific research and technological development in energy, which contribute to efficiency and environmentally friendly energy sources for the benefit of humanity.

The award was established in Russia, through the non-commercial Global Energy partnership and with the support of leading Russian energy companies Gazprom, FGC UES and Surgutneftegaz. Laureates are presented with their award by the President of Russia.

The Global Energy Prize promotes energy development as a science and demonstrates the importance of international energy cooperation, as well as public and private investment in energy supply, energy efficiency and energy security. It stands for the belief that advances in science and technology should serve the long-term interests of human development, improving social security and living standards of people in all countries.

Barry Brook

Professor Brook has also been made a 2012 Senior Fellow at the California-based think tank, The Breakthrough Institute.

The Institute is dedicated to “modernizing liberal thought for the 21st Century” and creating “secure, free, prosperous, and fulfilling lives on an ecologically vibrant planet”.

Both appointments are in recognition of Professor Brook’s work on energy policy. He holds strong views on the use of nuclear energy and alternative energy systems from an economic, environmental and scientific point of view.

“I’m honoured to have been chosen for the international selection committee of the Global Energy Prize and as a fellow of The Breakthrough Institute within a short space of each other,” Professor Brook says.

“Although many environmentalists consider nuclear power to be somehow anti-environment, it’s my firm belief that nuclear energy actually offers a viable low-carbon, low-impact alternative that cannot be matched by other low-carbon solutions.

Read more »

Feeding 10 billion in 2050’s sauna (Part III)

What future for agriculture on a hotter planet?

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer and is a member of Animal Liberation SA. His recently published book is CSIRO Perfidy. His previous article on BNC was: Feeding the billions on a hotter planet (Part II)

——————

Welcome to Part III of my still presumptuously titled series on feeding the world in 2050.

I spent the first two parts of this series looking at global authorities like the FAO (United Nations Food and Agriculture Organisation) with its predictive obsession and its policy associate IFPRI (International Food Policy Research Institute) with its meat obsession. Writing in a similarly obsessed country with far more cattle than people, I felt compelled to add a special section on protein and to also quantify the place of meat, particularly sheep and cattle meat, on the world food stage. Cattle are a major player in climate change, biodiversity loss and general environmental destruction but both they and sheep are globally irrelevant to food security. But worse than being irrelevant, their net contribution may well be negative. Here are some of the negative impacts:

  1. Reductions in the productivity of the land that produces real food. These reductions come via physical soil damage, consumption of the crop residues which protect the soil, and the deliberate burning of croppable areas to maintain them as pasture.
  2. Fouling water. Lack of clean water is the second biggest cause of malnutrition.
  3. Acting as disease generators. I mentioned Cryptosporidium in the last post, but livestock are also major generators of novel rotavirus strains. Rotavirus kills a million children annually, with vaccination not always available in the developing world. We don’t need new strains.
  4. The direct sickening and killing of children and women via the use of animal dung as a fuel.
  5. The reduction in the global food supply by making feed production more profitable than food production. The last impact doesn’t always apply to sheep and cattle but is more general. People with the perspicacity to easily recognise this problem in the context of biofuels are almost universally blind to its existence elsewhere.

Today, in the last of the series, I want to look at some standout scientific work that breaks the predictive, meat-obsessed mould. This is work by Jonathan Foley and Navin Ramankutty and a sizeable group of associated researchers. I’ll call this the “FR” work, but keep in mind that there are many other researchers involved.

This work breaks the mould because it isn’t concerned with mere prediction, like that of the FAO. Nor is it obsessed with meat as a food; rather, it recognises meat’s central role in reducing global food Calories.

Read more »

Fukushima and nuclear power, 9 months on

As many BNC readers already know, I was invited to write an opinion essay for ABC Environment and The Drum: Unleashed on the Fukushima situation as we approach the end of 2011. On the latter site, the essay was entitled “Fukushima, nuclear and the rational approach to energy” and drew >300 comments (many rather heated) before the post was closed after 24 hours. Anyway, here’s a chance for you to continue the conversation, and perhaps to provide a correction to some of the more… unenlightened… comments that appeared over on the ABC stream.

—————————-

It’s been quite a year for nuclear power. The dramatic events at the Fukushima Daiichi nuclear plant in north-east Japan in March and April 2011, following the Great Tōhoku Earthquake and tsunami, made headlines around the world. The crisis constituted the most significant nuclear emergency in 25 years.

Nine months on, engineers continue to work to secure the plant and transition it to a state termed ‘cold shutdown’, whereby the radioactively decaying reactor fuel is consistently cooled to below 100°C. The mangled reactor buildings now have new protective shells to keep out the weather, and an elaborate water purification system on site is working steadily to treat the large volume of contaminated cooling water that accumulated in holding tanks during the months following the accident.

The evacuation zone of 20 km around the plant remains in place, with more than 100,000 people displaced. There are medium-term plans to scrape away the topsoil in those ‘hotspots’ where radioactive cesium-137 was deposited (somewhat randomly) by the winds, following steam venting and the hydrogen explosions that occurred in the first week of the crisis. Once this is done, it is probable that residents will be allowed to return to the tsunami- and earthquake-ravaged area, to rebuild their lives.

Read more »

Draft Energy White Paper – Discussion Thread

———————

Guest post by John Morgan. John runs R&D programmes at a Sydney startup company. He has a PhD in physical chemistry, and research experience in chemical engineering in the US and at CSIRO. He is a regular commenter on BNC.

Energy minister Martin Ferguson has today released the Draft Energy White Paper 2011 (The Australian, ABC). The Government is soliciting submissions, so after a quick review I’d like to open some discussion on possible material for a submission.

So what’s in the white paper? In short, lots of new gas development, energy market privatization, and “…the Gillard Government unambiguously does not support the use of nuclear energy in Australia”.

But Ferguson does seem to be determined to inject some ambiguity into the matter. Elaborating on this unambiguous position he explained:

Nuclear for Australia is always there as an option. We don’t have to invest in R and D and innovation on that front. Other nations are the specialists. But if we get to the end of this debate some years in the future and we haven’t made the necessary breakthrough on clean energy at a low cost outcome, then nuclear is there for Australia to blow off the shelf after a debate in Australia.

His Opposition counterpart Ian Macfarlane is singing from the same song sheet:

We haven’t had any active consideration of nuclear energy in Australia but the fact remains that nuclear energy is the one base load technology that is clean energy and until we find a better alternatives to clean, zero-emission energy than nuclear, then it’s going to remain on the agenda of other countries.

And of course the Greens are furious.

The white paper itself expresses this unambiguous position in remarkably equivocal terms. The full position on nuclear power is buried right at the back of the document, on page 223, in a text box set apart from the main text, where it is offered as a ‘contingency’ consideration. I will quote it in full:

• Australia’s plentiful natural endowment of a range of low‐cost energy resources has played a major role in shaping our energy generation base around coal and gas.

• Other countries have chosen to adopt nuclear power often as a way of diversifying their energy mix. As one of the world’s largest uranium exporters, Australia has respected and supported this right through trade under strict safety and security safeguards. Nuclear‐powered electricity generation currently produces around 16 per cent of the world’s electricity – around 10 times Australia’s total annual electricity generation. Undeniably this results in lower global carbon emissions.

Read more »

The Guardian questions: thorium, shale gas, off-grid renewables, and much more…

The Guardian newspaper’s Environment Facebook page recently put the following to their readers:

Ask the Global Energy Prize‘s expert panel your toughest energy questions and they’ll be back here on Friday with their answers. What should power our cities, homes and industry in the future — renewable energy, nuclear power, or fossil fuels? How significant will shale gas be? And what role will oil play in our energy future? Just post your energy Qs here. 5 experts will answer the 10 best questions: Harry Fair (US), Tom Blees (US), Thorsteinn Sigfusson (Iceland), Barry Brook (Australia) and Klaus Riedle (Germany).

Below are the six questions put to me (Barry Brook) and Tom Blees — and our answers, of course! The original answers were not hyperlinked, but if you are curious about anything we mention here, try searching for the keywords on this website (e.g. type bravenewclimate.com/?s=thorium in your browser address bar), or on Google (e.g. type “ammonia site:bravenewclimate.com” in your search box).

———————-

BARRY W. BROOK

Q1. Do you agree that Thorium power is a safe, plentiful, and viable energy source that should be investigated as a matter of urgency?

Yes, thorium power is an attractive prospect for the next generation of nuclear reactors, but then surprisingly enough, so is uranium.

For today’s reactors, it takes about 150 tonnes of natural uranium to fuel a 1 gigawatt (GW) power plant for an entire year (the total energy produced is called a gigawatt year, or GWyr).  One GWe of power (the ‘e’ stands for electrical power rather than ‘t’ for thermal power, or heat) is a huge amount. It’s enough to run 65 million desk lamps (assuming they used 15 W compact fluorescent globes), or more practically, to satisfy today’s electricity demand of a typical UK city of more than half a million people. For comparison, to deliver a GWyr of energy using a coal-fired power station, about 4 million tonnes of coal must be burned (the amount can vary depending on the grade of coal).
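For readers who like to check such numbers, here is a quick back-of-the-envelope version of that arithmetic. The thermal efficiency and coal energy density are my own assumed round figures (not official ones), and both vary in practice:

```python
# Rough check of the figures quoted above. Efficiency and coal energy
# density are assumed round numbers; results vary with coal grade.
GWYR_E_JOULES = 1e9 * 365.25 * 24 * 3600   # one gigawatt-year (electric), in joules

lamps = 1e9 / 15                           # 15 W CFLs run by 1 GWe
print(f"Desk lamps: {lamps/1e6:.0f} million")   # ~67 million, same ballpark as quoted

thermal_efficiency = 0.35                  # assumed coal plant efficiency
coal_mj_per_kg = 24                        # assumed black coal energy content
coal_kg = GWYR_E_JOULES / thermal_efficiency / (coal_mj_per_kg * 1e6)
print(f"Coal per GWyr(e): {coal_kg/1e9:.1f} million tonnes")   # ~3.8 Mt, i.e. 'about 4 Mt'
```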

Most of the nuclear power stations in use today are called ‘thermal reactors’, or ‘light water reactors’ (LWR). They use ordinary (‘light’) water as a coolant, which takes heat away from the reactor core. The water also acts as a ‘moderator’, slowing down subatomic particles called neutrons, which shoot out of the atom’s nucleus when a nuclear chain reaction is underway. These neutrons are responsible for causing unstable heavy atomic nuclei to split apart and release energy. Other reactor designs use heavy water (enriched in ‘heavy hydrogen’: deuterium) or graphite (a form of carbon found in pencils) to moderate the neutrons (the latter is used in the UK’s gas-cooled Magnox reactors, for instance), but the effect is similar. These nuclear power plants need, as fuel, a form (isotope) of uranium that has 143 neutrons in its nucleus, called 235U (or ‘uranium 235’). Yet natural uranium contains just 0.7% 235U; the other 99.3% is composed of an isotope that has 3 additional neutrons, called 238U (or ‘uranium 238’). As a result, today’s LWRs are able to extract less than 1% of the atomic energy content of uranium. The rest is discarded, unused, either as spent fuel (‘nuclear waste’) or as depleted tails (the leftovers, composed mostly of 238U, after the fuel has been ‘enriched’ to raise the concentration of 235U to 3 – 5%).

Read more »

Open Thread 20

The previous Open Thread has run off the recent posts list and is getting tough to find, so it’s time for a fresh palette.

The Open Thread is a general discussion forum, where you can talk about whatever you like — there is nothing ‘off topic’ here — within reason. So get up on your soap box! The standard commenting rules of courtesy apply, and at the very least your chat should relate to the general content of this blog.

The sort of things that belong on this thread include general enquiries, soapbox philosophy, meandering trains of argument that move dynamically from one point of contention to another, and so on — as long as the comments adhere to the broad BNC themes of sustainable energy, climate change mitigation and policy, energy security, climate impacts, etc.

You can also find this thread by clicking on the Open Thread category on the cascading menu under the “Home” tab.

———————

A new temperature reconstruction by Foster & Rahmstorf (Env. Res. Lett.), which removes ENSO signals, volcanic eruptions and solar cycles, and standardises the baseline.

I’m currently in Auckland, New Zealand, attending the 25th annual International Congress on Conservation Biology. A 4-day event, it’s a great chance to network and catch up with my colleagues, hear the latest goings-on in the field of conservation research, and also give a few presentations (both mine and my students’). I’m talking tomorrow on the impacts of climate change in Oceania — this covers a co-authored paper I have coming out in an upcoming special issue of Pacific Conservation Biology (which was actually the first journal I ever published in, back in 1997), entitled: “Climate change, variability and adaptation options for Australia”.

A conversation starter: George Monbiot has written a superb piece on nuclear power and the integral fast reactor over at The Guardian. It is titled “We need to talk about Sellafield, and a nuclear solution that ticks all our boxes” (subtitle: There are reactors which can convert radioactive waste to energy. Greens should look to science, rather than superstition). My favourite quote:

Anti-nuclear campaigners have generated as much mumbo jumbo as creationists, anti-vaccine scaremongers, homeopaths and climate change deniers. In all cases, the scientific process has been thrown into reverse: people have begun with their conclusions, then frantically sought evidence to support them.

The temptation, when a great mistake has been made, is to seek ever more desperate excuses to sustain the mistake, rather than admit the terrible consequences of what you have done. But now, in the UK at least, we have an opportunity to make amends. Our movement can abandon this drivel with a clear conscience, for the technology I am about to describe ticks all the green boxes: reduce, reuse, recycle.

George’s essay includes details on the integral fast reactor and the S-PRISM modules that GEH hope to build in the UK (to, as a first priority, denature the separated plutonium stocks, and thereafter generate lots of carbon-free electricity). The fully referenced version is here.

Read more »

Feeding 10 billion on a hotter planet (Part II)

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer and is a member of Animal Liberation SA. His recently published book is CSIRO Perfidy. His previous article on BNC was: Feeding the billions in 2050’s sauna (Part I)

——————

Welcome to Part II of my presumptuously titled series on feeding the world in 2050. Before concluding where we left off with the analysis of the foods which the International Food Policy Research Institute (IFPRI) thinks are globally important, we need a short prologue on protein.

Protein prologue

Any suggestion based on Calorie counts that the net contribution of beef or other meats to global food security may be trifling or even negative brings instant feedback about protein. The presumption is that it is adequate protein, particularly animal protein, which is the key requirement for beating malnutrition. This is inevitable for two reasons: first, the absence of medical malnutrition literature from the best seller list, and second, we have all spent our entire lifetime swimming in meat industry propaganda … much of it focused on protein.

We need some historical perspective on protein.

There’s nothing quite like being the first, and protein can lay good claim to being the first critical nutrient discovered in the early days of modern chemistry. Nitrogen is protein’s key chemical component and one of the first to be accurately measured. Consequently, quite precise measurements of protein utilisation in people have been around for almost 200 years.

Early investigators fed dogs pure sugar diets and watched them die. Absence of protein was the explanation they eventually settled on. What else could it have been? In 1815, vitamins (in any measurable sense) were well beyond the knowledge horizon, so there was really only one candidate. By 1842, protein was pronounced the only true nutrient and the sole provider of energy to the muscles. It mattered not that measurements on prison work gangs showed no differences in protein utilisation between rest days and hard treadmill days. The history of protein spin is a picturesque tale of arrogant, opinionated people holding fast to beliefs in the face of overwhelming data. Not everyone was fooled. In 1907, researchers at Yale University in the US took athletes and halved their protein intake during a mammoth 5-month piece of live-in research. Over the 5 months, far from fading away, the subjects got 35 percent stronger. The protein myth charged on regardless, pushed by the then head of the US Agriculture Department, who thought (seriously) that when people could choose food without regard for cost or availability, they would choose an optimal diet — i.e., the rich must know best.

Read more »

Solar combined with wind power: a way to get rid of fossil fuels?

Guest Post by Jani-Petri Martikainen. Jani-Petri is a theoretical physicist doing fundamental research in the field of ultracold quantum gases. Most of his current research activities are computational and involve bosonic or fermionic atoms in optical lattices. He has a lively interest in environmental, climate, and energy issues. He runs the blog PassiiviIdentiteetti, which is mostly written in Finnish.

Jani’s previous post, Geographical wind smoothing, supergrids and energy storage, focused on distributed wind alone. In this follow-up, he turns his attention to solar combined with wind.

————
Earlier, I wrote on how heavily unreliable sources of power such as wind depend on fossil fuels. Based on real-world production data from around the world, I noted that even with massively distributed production, wind power is very variable and necessitates a reliable backup power source (typically fossil fuels) which must be able to produce essentially all the power society consumes. A way around this problem would be massive energy storage, but I found the size of the required storage to be unreasonably large.

One typical response to findings such as these is to brush them aside by claiming that, even if true, the results will not matter, since we will have many different renewable energy sources acting together (as if there were some “harmony” in two essentially random signals). Most importantly, quite a few people base their vision of future energy production on a mixture of wind and solar power. For this reason I felt the need to return to this problem so that solar power is also considered. Unfortunately, I have yet to find a good source of real-world production data for solar power. The best I have come up with are images (typically of daily production), but raw data is better hidden.

However, since solar power production (without storage) is proportional to insolation, we can use meteorological data as a reasonable starting point. The US has a National Solar Radiation Database with a large collection of insolation modelling data from around the USA. From this data, “typical meteorological year 3” (TMY3) datasets have also been formed. (There are some quirks in the construction of TMY3 that I frown upon. For example, after the El Chichón and Mount Pinatubo eruptions insolation was reduced, but these periods were apparently excluded from TMY3 as atypical. Of course they were atypical, but they are still things that do happen and whose effects must be considered. However, I suspect that the effect of the eruptions was still minor in the US.) As my insolation data I take the average of TMY3 data from six different class I sites (class I has the best data) in three different states: Prescott Love and Tucson Airport in Arizona, Arcata Airport and Fresno Yosemite Airport in California, and Denver Airport and Limon in Colorado. These sites have an average latitude similar to southern Spain. (Why did I choose these sites? Well, being lazy, I started from the entries listed in alphabetical order by state and picked the first southern states I encountered.)

Somewhat annoyingly, only hourly data is provided. We know from BNC, among others, that solar power (especially PV) can have large swings on shorter timescales. Therefore, this limitation may have important consequences. Nevertheless, let us ignore the torpedoes, with the understanding that the solar power we talk about here is such that sufficient storage has already been implemented to smooth out hourly variation in production. So keep in mind that the starting assumptions for solar production have a bias towards the optimistic side. Since the production data for wind power is given every 5 minutes, I will linearly interpolate the solar insolation data to deduce the production of solar power every 5 minutes (link to the data here). As in the earlier study, the data corresponds to one year starting July 1st, and the consumption data corresponds to the Bonneville Power Administration load, with possible scale factors to suit my needs.
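To make the interpolation step concrete, here is a minimal sketch of how hourly insolation can be resampled onto a 5-minute grid. The arrays are illustrative placeholders, not the TMY3 data itself:

```python
import numpy as np

# Hourly insolation for one illustrative day (W/m^2): a crude daylight curve,
# standing in for a TMY3 global-horizontal-irradiance column.
hours = np.arange(24.0)
ghi_hourly = np.maximum(0.0, np.sin((hours - 6.0) / 12.0 * np.pi)) * 900.0

# Linear interpolation onto a 5-minute grid, matching the wind data's resolution.
grid_5min = np.arange(0.0, 23.0 + 1e-9, 5.0 / 60.0)
ghi_5min = np.interp(grid_5min, hours, ghi_hourly)

# Averaging the six sites is then just a mean across columns, e.g.:
# ghi_mean = np.column_stack([site_a, site_b, ...]).mean(axis=1)
```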

Read more »

Summary of China’s fast reactor programme

China is looking seriously at a range of nuclear options. From the commercial side of things, the country is building over 25 light water reactors, including four of the new US-designed AP1000. The Chinese are also pursuing a range of advanced reactor programmes, including gas-cooled pebble-bed modular reactors (the 210 MWe HTR-PM), a thorium-focused research initiative based on the molten-salt reactor, and an ambitious fast spectrum reactor research, demonstration and deployment (RD&D) plan. It is the latter that I wish to discuss here.

Some of you would already know that the Chinese are in the late stages of planning the construction of two Russian-designed BN-800 sodium-cooled fast reactors, to be located at a site on China’s east coast. These are scaled-up (880 MWe) versions of the BN-600, which has run successfully in Russia for decades. There is also the Chinese Experimental Fast Reactor (CEFR), a 20 MWe demonstration unit near Beijing.

Before I get to the main point of this post, it is worth reproducing this WNA summary of the current Chinese builds:

In China, R&D on fast neutron reactors started in 1964. A 65 MWt fast neutron reactor – the Chinese Experimental Fast Reactor (CEFR) – was designed by 2003 and built near Beijing by Russia’s OKBM Afrikantov in collaboration with OKB Gidropress, NIKIET and Kurchatov Institute. It achieved first criticality in July 2010, can generate 20 MWe and was grid connected in July 2011 at 40% of power, to ramp up to 20 MWe by December. Core height is 45 cm, and it has 150 kg Pu (98 kg Pu-239). Temperature reactivity and power reactivity are both negative.

A 1000 MWe Chinese prototype fast reactor (CDFR) based on CEFR is envisaged with construction start in 2017 and commissioning as the next step in CIAE’s program. This will be a 3-loop 2500 MWt pool-type, use MOX fuel with average 66 GWd/t burn-up, run at 544°C, have breeding ratio 1.2, with 316 core fuel assemblies and 255 blanket ones, and a 40-year life. This is CIAE’s “project one” CDFR. It will have active and passive shutdown systems and passive decay heat removal. This may be developed into a CCFR of about the same size by 2030, using MOX + actinide or metal + actinide fuel. MOX is seen only as an interim fuel, the target arrangement is metal fuel in closed cycle.

However, in October 2009 an agreement was signed with Russia’s Atomstroyexport to start pre-project and design works for a commercial nuclear power plant with two BN-800 reactors in China, referred to by CIAE as ‘project 2’ Chinese Demonstration Fast Reactors (CDFR) – in China, with construction to start in 2013 and commissioning 2018-19. These would be similar to the OKBM Afrikantov design being built at Beloyarsk 4 and due to start up in 2012. In contrast to the intention in Russia, these will use ceramic MOX fuel pellets. The project is expected to lead to bilateral cooperation of fuel cycles for fast reactors.

Read more »

CO2 is a trace gas, but what does that mean?

Carbon dioxide, methane, nitrous oxide and most other long-lived greenhouse gases (i.e., barring short-lived water vapour), are considered ‘trace gases’ because their concentration in the atmosphere is so low. For instance, at a current level of 389 parts per million, CO2 represents just 0.0389% of the air, by volume. Tiny isn’t it? How could such a small amount of gas possibly be important?

This issue is often raised by media commentators like Alan Jones, Howard Sattler, Gary Hardgrave and others, when arguing that fossil fuel emissions are irrelevant for climate change. For instance, check out the Media Watch ABC TV story (11 minute video and transcript) called “Balancing a hot debate”.

I’ve seen lots of analogies drawn, in an attempt to explain the importance of trace greenhouse gases. One common one is to point out that a tiny amount of cyanide, if ingested, will kill you. Sometimes a little of a substance can have a big impact. But actually, there’s a better way to get people to understand, and that’s to follow one of the guiding principles of this blog: “Show me the numbers!”.

In response to a recent post by John Cook on George Pell, religion and climate change, commenter Glenn Tamblyn pointed out an interesting fact: Every cubic metre of air contains roughly 10,000,000,000,000,000,000,000 molecules of CO2. In scientific notation, this is 10²² — a rather large number.
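That figure is easy to verify with the ideal gas law — here assuming surface pressure and 15°C, though the order of magnitude is insensitive to those choices:

```python
# Molecules of CO2 in a cubic metre of air, via the ideal gas law (n = PV/RT).
N_AVOGADRO = 6.022e23                      # molecules per mole
R = 8.314                                  # gas constant, J/(mol K)
pressure, temperature = 101325.0, 288.15   # assumed: 1 atm, 15 degrees C

mol_air = pressure / (R * temperature)     # ~42 mol of air per cubic metre
co2_molecules = mol_air * 389e-6 * N_AVOGADRO   # 389 ppmv, as quoted above
print(f"{co2_molecules:.1e} CO2 molecules per m^3")   # ~1.0e22
```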

Read more »

Feeding the billions in 2050’s sauna (Part I)

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer and is a member of Animal Liberation SA. His recently published book is CSIRO Perfidy. His previous article on BNC was: The Swiss army nuclear knife

——————

During the past few years, all the world’s major science journals have had a steady stream of papers on the challenge of feeding 9 to 10 billion people on a warming planet in 2050. They have been joined by reports from bodies of varying prestige and influence, like the International Food Policy Research Institute (IFPRI), the World Bank and the Royal Society. CSIRO has a long history of interest in the issue, and even billionaire packager Anthony Pratt is getting in on the act, telling Australia that since it can produce food for 200 million people, it has a responsibility to do so.

All these reports pay swollen lip service to the food security issues of the poor. All rightly regard the current global levels of stunting and malnutrition … running at 30 percent or more in many poor populations … as unconscionable.

Do we simply need more of the same?

Most of these papers and reports fall into two groups. The first looks at population and food intake trends and guesstimates that adding 2 to 3 billion people by 2050 will require between 70 percent and 100 percent more food. They typically then suggest places where large buckets of money might be deposited to fund research directed at meeting these projections.

Read more »

The IFR vs the LFTR: An Exchange of Emails

With regards to Generation IV nuclear fission technology, most of the attention on BNC has been on the Integral Fast Reactor (IFR), for reasons explained in this post, which I quote:

The focus of this series (IFR FaD) is aimed squarely at the Integral Fast Reactor (IFR) rather than other Gen IV designs, such as the Liquid Fluoride Thorium Reactor (LFTR) or Advanced High Temperature Reactor (AHTR). The reason for this is twofold: (i) I’m more familiar with the IFR technology (and I am in regular email exchange with the world experts on this technology, via SCGI and other links), and (ii) LFTR has a strong and welcoming advocacy group elsewhere, and I’d encourage people to go there to ask more questions about that technology … However, I should make it quite clear that I’m not “for IFR and against LFTR” — both 4th generation nuclear designs hold great appeal to me, and I will sometimes consider IFR vs LFTR comparisons in the IFR FaD series, as a point of comparison or contrast.

I think we need to be pursuing the final stages of research, development and commercial-scale deployment of all of these next-generation fission technologies, since it would require such a trivial input compared to the huge investment that will be required anyway in energy infrastructure over the next few decades (>$26 trillion globally by 2030). However, it is nevertheless useful to consider the relative merits of the individual technologies, and I hope to look at this from a number of angles in blog posts during 2012.

For some initial ideas and to initiate discussion, below I reproduce an email exchange on this matter, including aspects of commercial readiness, that was recently posted on the Science Council for Global Initiatives website. The conversation is between three highly experienced nuclear physicists/engineers: Dr George Stanford, Dr Dan Meneley, and Prof. Per Peterson. I’m sure this will stir some debate! (And, as I said, I will have more to post on this in the new year.)

I have also added a few hyperlinks to clarify terms that may be unfamiliar to the general reader; please note that the links and pictures were added by me (Barry Brook), not the original correspondents.

—————-

G. Stanford wrote (11-29-10):

We’ll see what others on this list have to say, but in my opinion, Carlsen’s enthusiasm for thorium is premature, to say the least.  The ONLY significant advantage a thorium cycle would have over fast reactors with metallic fuel (IFR/PRISM) is its lower requirement for start-up fissile.  That advantage is offset by the fact that the thorium reactor is at a stage of development roughly equivalent to where the IFR was in 1975 — a promising idea with a lot of R&D needed before it’s ready for a commercial demonstration — which puts its deployment about 20 years behind what could be the IFR’s schedule.  The thorium community has not yet even agreed on what will be the optimum thorium technology to pursue.

Read more »

Energy Storage Discussion Thread

For high-penetration utility-scale wind, we'll need much bigger batteries than these...

Debate over large-scale energy storage is a regular theme in the comments on this blog. This post is intended to be a place to centralise that discussion. Some questions that might be considered in the comment thread:

1. What is the cost (per watt-hour, kWh, MWh, GWh)? How does this cost scale up, and how does it change as higher levels of reliability are required (e.g. energy delivered on demand 90% vs 99% vs 99.9% of the time)? A worked sketch of this arithmetic follows the list.

2. What is the energy density of the proposed storage technology currently, and what are its physical limits? (i.e., how good can it get, with perfect engineering, and how long can the energy store be held?)

3. If the storage technology becomes cheap, what is to stop baseload plants like coal and nuclear from undercutting renewables, given that they can charge large batteries in low-demand times and then sell the power during peak (high-price) periods?

4. What are the material inputs for the storage system, and how does this affect the energy returned on energy invested of the paired energy technology (e.g., what are the EROEI and life-cycle CO2 emissions of, say, a 2 kW solar PV system with no storage vs the same system with 10 hours of battery storage to cover nights [ignoring winter and long cloudy periods])?

5. Lifetime: how many cycles can the storage technology handle (100, 10,000, near-indefinite [e.g. conversion to hydrogen])?

6. Does the storage technology need its own power-generation system, or can it be paired to the original generating technology (e.g., a molten salt heat storage can create steam for use in the same turbine set as the solar thermal plant itself, whereas compressed air energy storage for wind requires a different generation system to the wind itself)?

(If people can propose some other general questions, I’ll add them to this list.)
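As a starting point for question 1 (and its interaction with question 5), here is the simple arithmetic that turns a capital cost per kWh of capacity into a cost per kWh actually delivered over the storage system’s life. All the input numbers are placeholders for discussion, not quotes for any real product:

```python
# Cost of each kWh delivered from storage, given capital cost per kWh of
# capacity, cycle life, usable depth of discharge and round-trip efficiency.
# Input values below are illustrative placeholders only.
def cost_per_delivered_kwh(capital_per_kwh, cycles, depth_of_discharge, round_trip_eff):
    kwh_delivered = cycles * depth_of_discharge * round_trip_eff  # per kWh of capacity
    return capital_per_kwh / kwh_delivered

# e.g. a hypothetical $500/kWh battery, 3,000 cycles, 80% DoD, 85% efficient:
print(f"${cost_per_delivered_kwh(500, 3000, 0.80, 0.85):.2f}/kWh")   # ~$0.25/kWh
```

This ignores financing and maintenance costs, which only push the figure higher.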

Anyway, to kick the discussion off, here is something sent to me by George Stanford, in response to the following missive:
Read more »

CEDA report on Australia’s nuclear energy options

Today I was in Melbourne, joining a panel of five who are the chapter authors of a new policy monograph called “Australia’s Nuclear Options“. This event was to formally launch the 61-page report, which was commissioned and published by CEDA (Committee for Economic Development of Australia), edited by CEDA Chief Economist Nathan Taylor (who also writes a blog, The Naked Ape,  and provided a terrific lead-in essay to introduce the report), with the chapters written by five independent Australia-based experts.

It was a very interesting event, with over an hour of questions and commentary after some opening remarks from each of the five panelists (me [Barry Brook], Tony Irwin [Visiting Lecturer in Nuclear Science, Australian National University and University of Sydney, and Chairman, Engineers Australia Nuclear Engineering Panel], Professor Tony Owen [Academic Director and Santos Chair of Energy Resources, UCL School of Energy and Resources], Tom Quirk [Ex-Oxford Don, Physicist and Director, Institute of Public Affairs] and Tony Wood [Director - Clean Energy Program, Clinton Foundation and Grattan Institute]). There will be a similar launch in Adelaide on 29 November.

Here is the summary:

Australia is at a critical moment in determining its energy future. Energy demand is forecast to rise substantially with continued economic and population growth, while policy makers grapple with how to decarbonise the economy. Meanwhile, global growth in energy demand is causing ongoing price rises in commodities. Given the long lifecycle of energy investments, policy decisions made to address these challenges will determine Australia’s economic competitiveness for decades to come.

The need to decarbonise the economy, and technological changes, have the potential to fundamentally alter the economic and engineering issues of nuclear power deployment, making it far more relevant for consideration in Australia.

This policy perspective is part of CEDA’s major research project on ‘Australia’s Energy Options‘ which examines a range of issues associated with Australia’s energy sector that will be released throughout 2011/12.

Read more »

Strange bedfellows? Techno-fixes and conservation

I have a new paper out in the peer-reviewed journal Biological Conservation that will be of interest to BNC readers.

It is called “Strange bedfellows? Techno-fixes to solve the big conservation issues in southern Asia“, by Barry W. Brook & Corey J.A. Bradshaw. Here are some details:

Abstract

The conservation challenges facing mega-biodiverse South and Southeast Asia in the 21st century are enormous. For millennia, much of the habitat of these regions was only lightly modified by human endeavour, yet now it is experiencing rampant deforestation, logging, biofuel cropping, invasive species expansion, and the synergies of climate change, drought, fire and sea-level rise. Although small-scale conservation management might assist some species and habitats, the broader sweep of problems requires big thinking and some radical solutions. Given the long expected lead times between progressive economic development and stabilization of human population size and consumption rates, we argue that ‘technological fixes’ cannot be ignored if we are to address social and fiscal drivers of environmental degradation and associated species extinctions in rapidly developing regions like southern Asia.

The pursuit of cheap and abundant ‘clean’ energy from an economically rational mix of nuclear power, geothermal, solar, wind, and hydrogen-derivative ‘synfuels’, is fundamental to this goal. This will permit pathways of high-tech economic development that include intensified (high energy-input) agriculture over small land areas, full recycling of material goods, a transition from fossil-fuel use for transport and electricity generation, a rejection of tropical biofuels that require vast areas of arable land for production, and a viable alternative to the damming of major waterways like the Mekong, Murum and northern tributaries of the Ganges and Brahmaputra Rivers for hydroelectricity. Rational approaches that work at large scales must be used to deal with the ultimate, rather than just proximate, drivers of biodiversity loss in the rapidly developing regions of southern Asia.

Depressing climate-related trends – but who gets it?

I saw two particularly depressing trend lines this week. Both were confronting enough to make me stop, sit back and just contemplate. It was not as though these came as a great surprise — I’d been following these data for years. But for some reason, the seriousness of them really struck home like never before.

The first was a report on Arctic sea ice volume. Here is the graph that shocked me:

It shows the minimum northern hemisphere sea ice volume each year from 1979 to 2011, and a simple time-series forecast based on a fit of an exponential-decline model. You can read about the details here: PIOMAS September 2011 (volume record lower still), where various related charts are also shown. One can argue about the precision of the projection line, but the general fit is remarkably robust and, on this basis, it is reasonable to conclude that unless some remarkable turnaround occurs, the Arctic summer ice volume will be near-zero by 2020.
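For those who want to play with the method, a minimal sketch of an exponential-decline fit of this kind is below. The data array is synthetic and illustrative, not the PIOMAS series, so follow the link above for the real numbers:

```python
import numpy as np

# Illustrative exponential-decline fit, V(t) = V0 * exp(-k * t).
# Synthetic stand-in data ONLY; see the PIOMAS link for the real series.
years = np.arange(1979, 2012)
t = years - 1979
volume = 17.0 * np.exp(-0.08 * t) * (1 + 0.05 * np.sin(t))  # fake 'observations'

slope, intercept = np.polyfit(t, np.log(volume), 1)   # fit log(V) = log(V0) - k*t
v0, k = np.exp(intercept), -slope

projected_2020 = v0 * np.exp(-k * (2020 - 1979))
print(f"Projected 2020 minimum volume: {projected_2020:.1f} (same units as input)")
```

Read more »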

CO2 abatement cost with electricity generation options in Australia

Guest Post by Peter Lang. Peter is a retired geologist and engineer with 40 years’ experience on a wide range of energy projects throughout the world, including managing energy R&D and providing policy advice for government and opposition. His experience includes: coal, oil, gas, hydro, geothermal, nuclear power plants, nuclear waste disposal, and a wide range of energy end use management projects.

A 10-page printable PDF version of this post can be downloaded here.

An Excel worksheet showing the calculations (allowing you to change inputs/assumptions) is also available.

Introduction

What is the cost of carbon dioxide (CO2) emissions abatement with the various electricity generation technologies being considered for Australia?

The abatement cost of a technology depends on many factors, such as the engineering characteristics of the electricity grid to which the new technology will be connected, the geographic location, and many others. One important factor often not mentioned is the reference case against which the abatement cost is calculated. The abatement cost for a new technology is only meaningful when compared with another new technology or with an existing generator it would ‘displace’; e.g. nuclear compared with a new coal power station, or nuclear compared with an existing power station.
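To make the reference-case point concrete, here is the basic formula in code form. The numbers in the example are placeholders, not EPRI values:

```python
# Abatement cost = extra cost per MWh divided by emissions avoided per MWh,
# relative to a chosen 'displaced' generator. Example inputs are placeholders.
def abatement_cost(lcoe_new, lcoe_displaced, emissions_displaced, emissions_new):
    """Dollars per tonne CO2 avoided; LCOEs in $/MWh, emissions in t CO2/MWh."""
    return (lcoe_new - lcoe_displaced) / (emissions_displaced - emissions_new)

# Hypothetical: new nuclear at $100/MWh vs existing coal at $40/MWh, 1.0 t/MWh:
print(abatement_cost(100, 40, emissions_displaced=1.0, emissions_new=0.02))  # ~$61/t

# The same plant measured against new coal at $55/MWh and 0.9 t/MWh differs:
print(abatement_cost(100, 55, emissions_displaced=0.9, emissions_new=0.02))  # ~$51/t
```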

The Electric Power Research Institute (EPRI, 2010) report http://www.ret.gov.au/energy/Documents/AEGTC%202010.pdf for the Australian Department of Resources, Energy and Tourism provides data that allows CO2 abatement costs to be estimated for a range of new technologies. Unfortunately, the report is complex and opaque in parts.

The purpose of this paper is twofold:

  1. to summarise in tabular form the relevant information from the EPRI report so others can access it easily and produce levelised cost of electricity (LCOE) figures under differing assumptions, particularly using the NREL LCOE calculator http://www.nrel.gov/analysis/tech_lcoe.html; and
  2. to calculate and compare the CO2 abatement costs for a range of new technologies for each of three ‘displaced’ technologies.

This paper does not attempt to calculate the effects of carbon price on the LCOE or CO2 abatement costs, because:

1) the EPRI report does not include the effects of carbon price — nor feed-in tariffs, renewable energy certificates and other subsidies — so incorporating the effects of CO2 pricing and other incentives and disincentives in the analysis would require many additional assumptions; and

2) the purpose of this paper is to show the abatement costs for the various technologies, so options can be compared, and so the cost of incentives and disincentives (including carbon pricing) needed to make each technology viable can be made visible.

Read more »

Geographical wind smoothing, supergrids and energy storage

Guest Post by Jani-Petri Martikainen. Jani-Petri is a theoretical physicist doing fundamental research in the field of ultracold quantum gases. Most of his current research activities are computational and involve bosonic or fermionic atoms in optical lattices. He has a lively interest in environmental, climate, and energy issues. He runs the blog PassiiviIdentiteetti, which is mostly written in Finnish.

————

For quite some time I have been troubled by the difficulty of finding open and sensible discussions of energy scenarios in which erratic energy sources such as wind and (somewhat less erratic) solar provide the bulk of the power produced. Proponents of such alternatives routinely talk as if scaling these energy sources up to significant levels poses no insurmountable challenges or unaffordable costs. One can often read claims such as:

By aggregating power generation from wind farms spread across the whole (North Sea) area, periods of very low or very high power flows would be reduced to a negligible amount. A dip in wind power generation in one area would be balanced by higher production in another area. — European Renewable Energy Council and Greenpeace (page 34).

Strangely, proponents feel comfortable making such statements, but show a noticeable lack of interest in actually demonstrating whether they are true. Why is this? In science the burden of proof falls upon the claimant, and it would be desirable if the same principle were to apply to discussions about energy policy. (Notice, by the way, that EREC+GP are not even satisfied with claiming that wind speeds in different parts of the North Sea are uncorrelated, but actually claim that speeds are anti-correlated.) Why is it that an amateur like me [in energy analysis] feels the need to do his own computations to figure out such issues, rather than just being able to read proper studies online?

Since it appears difficult (certainly outside academic journals) to find detailed numbers on how strongly, for example, wind power actually relies on fossil fuels, I decided to do some estimates myself. I am not primarily interested in cosmetic amounts of wind power production, but will take the ambitious renewable visions seriously and study scenarios where wind power would be enough to power our entire society. I want to understand to what extent electricity production in such scenarios still relies on reliable energy sources, and what kind of energy storage is required to enable wind power to stand on its own feet. Since hydropower capacity at a global level is limited, I will mostly use the term “reliable energy source” as a euphemism for fossil fuels. Not to be too parochial, and to allow for massively distributed generation, I will assume a “super(duper?)grid” coupling wind power sources from three different continents together.

As a starting point I want to create a production profile based on real wind power production data. As sources I choose south-eastern Australia, Ireland, and the Bonneville Power Administration in Oregon, US. Each has roughly comparable amounts of wind power installed, but I will scale the capacity of each to 3333 MWe so that the combined capacity will end up being 10 GWe (peak). Data for BPA and Australia is given every 5 minutes while the Irish data is every 15 min.
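In code, the aggregation step looks something like the sketch below. Array names and installed capacities are placeholders; the Irish 15-minute series also has to be brought onto the 5-minute grid, here by simply holding each value for three steps:

```python
import numpy as np

def scale_capacity(series_mw, installed_mw, target_mw=3333.0):
    """Rescale a production series as if installed capacity were target_mw."""
    return series_mw * (target_mw / installed_mw)

def hold_15min_to_5min(series_15min):
    """Upsample 15-minute data by holding each value for three 5-minute steps."""
    return np.repeat(series_15min, 3)

# With bpa_mw, aus_mw (5-minute) and irl_mw (15-minute) as the raw series:
# combined = (scale_capacity(bpa_mw, bpa_installed)
#             + scale_capacity(aus_mw, aus_installed)
#             + scale_capacity(hold_15min_to_5min(irl_mw), irl_installed))
# giving a 10 GWe (peak) three-continent production profile.
```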

Read more »

Fuel use for Gen III+ nuclear power

In one of the entries in my series of posts on the Integral Fast Reactor, I pointed out that a next-generation nuclear-power-plus-full-fuel-recycling plant would require only 1 tonne of natural uranium fuel (or thorium, or nuclear waste, or depleted uranium) per year, for a 1,000 MWe plant. However, I recently got asked this related question:

Do you know of any sources where I can find what the fuel requirements would be for a typical 1 GW Gen 3 plant running for a year?

This is an interesting question. Two obviously modern plants to consider are the Westinghouse AP1000 (four are currently under construction in China) and the AREVA EPR (two are being built in Europe).

The AP1000 uses 4.25% enriched fuel and achieves a burnup of 60 GWd/t (details here). The EPR uses 5% enriched fuel to get 62 GWd/t (details here). The following Excel table illustrates my calculations (blue = inputs, green = calculations, bold = results) — click on the table to download the .xlsx file and play around with it yourself.

This estimates a natural uranium metal use of 108 to 117 tonnes U per GWe per year, using an enriched fuel loading of 21 to 25 t for the two designs (1,115 and 1,650 MWe respectively, running at about 92% capacity factor). The EPR appears to be slightly more efficient than the AP1000 when levelised on a 1 GWe basis.

Note: If 0.2% U-235 tails were left over after enrichment (rather than 0% assumed above), then the value in row 9 (% nat) would become 0.51, and the corresponding U/GWe/yr for the AP1000 would be 163 t, and for EPR it would be 150 t.
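The tails sensitivity follows from the standard enrichment mass balance, feed = product × (xp − xt)/(xf − xt). Here is a sketch with my own illustrative inputs; the post’s exact tonnages come from the linked spreadsheet:

```python
# Natural uranium feed required per tonne of enriched product:
#   feed = product * (x_product - x_tails) / (x_feed - x_tails)
def natural_u_feed(product_t, x_product, x_tails, x_feed=0.711):
    """Tonnes of natural U per product_t of enriched fuel; assays in % U-235."""
    return product_t * (x_product - x_tails) / (x_feed - x_tails)

fuel_t = 18.8  # assumed enriched fuel per GWe-yr (~21 t over 1.115 GWe, AP1000)
print(natural_u_feed(fuel_t, 4.25, 0.0))   # ~112 t/GWe/yr with 0% tails
print(natural_u_feed(fuel_t, 4.25, 0.2))   # ~149 t/GWe/yr with 0.2% tails (~33% more)
```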

My calculations, based on the performance documentation, are similar to the generalised calculations provided by the WNA, as given below:

Read more »
