Reply to Patrick Brown’s response to my article commenting on his Nature paper

Introduction

I thank Patrick Brown for his detailed response (also here) to statistical issues that I raised in my critique “Brown and Caldeira: A closer look shows global warming will not be greater than we thought” of his and Ken Caldeira’s recent paper (BC17).[1] The provision of more detailed information than was given in BC17, and in particular the results of testing using synthetic data, is welcome. I would reply as follows.

Brown comments that I suggested that rather than focusing on the simultaneous use of all predictor fields, BC17 should have focused on the results associated with the single predictor field that showed the most skill: the magnitude of the seasonal cycle in OLR. He goes on to say: “Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.”

To clarify, I argued that BC17 undersold the statistical strength of the relationships involved, in the RCP8.5 2090 case focussed on in their Abstract, for which the signal-to-noise ratio is highest. But I went on to say that I did not think the stronger relationships would really provide a guide to how much global warming there would actually be late this century on the RCP8.5 scenario, or any other scenario. That is because, as I stated, I disagree with BC17’s fundamental assumption that the relationship of future warming to certain aspects of the recent climate that holds in climate models necessarily also applies in the real climate system. I will return to that point later. But first I will discuss the statistical issues.


Polar Bears, Inadequate Data and Statistical Lipstick


A recent paper, Internet Blogs, Polar Bears, and Climate-Change Denial by Proxy, by Jeffrey A. Harvey and 13 others, has been creating something of a stir in the blogosphere. The paper’s abstract purports to achieve the following:

Increasing surface temperatures, Arctic sea-ice loss, and other evidence of anthropogenic global warming (AGW) are acknowledged by every major scientific organization in the world. However, there is a wide gap between this broad scientific consensus and public opinion. Internet blogs have strongly contributed to this consensus gap by fomenting misunderstandings of AGW causes and consequences. Polar bears (Ursus maritimus) have become a “poster species” for AGW, making them a target of those denying AGW evidence. *Here, focusing on Arctic sea ice and polar bears, we show that blogs that deny or downplay AGW disregard the overwhelming scientific evidence of Arctic sea-ice loss and polar bear vulnerability.* By denying the impacts of AGW on polar bears, bloggers aim to cast doubt on other established ecological consequences of AGW, aggravating the consensus gap. To counter misinformation and reduce this gap, scientists should directly engage the public in the media and blogosphere.

Reading further into the paper, we find that this seems to be yet another piece of propaganda to push a Climate Change agenda. In line with the high standards of climate science “communication”, there are over 50 occurrences of various forms of the derogatory labels “denier” or “deny” in a mere five pages of text and two pages of references. Such derogatory language has become commonplace in the climate change academic world and reflects badly on the authors who use it.


Brown and Caldeira: A closer look shows global warming will not be greater than we thought

A guest post by Nic Lewis

Introduction

Last week a paper predicting greater than expected global warming, by scientists Patrick Brown and Ken Caldeira, was published by Nature.[1] The paper (henceforth referred to as BC17) says in its abstract:

“Across-model relationships between currently observable attributes of the climate system and the simulated magnitude of future warming have the potential to inform projections. Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming. When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general. In particular, we find that the observationally informed warming projection for the end of the twenty-first century for the steepest radiative forcing scenario is about 15 per cent warmer (+0.5 degrees Celsius) with a reduction of about a third in the two-standard-deviation spread (−1.2 degrees Celsius) relative to the raw model projections reported by the Intergovernmental Panel on Climate Change.”

Patrick Brown’s very informative blog post about the paper gives a good idea of how they reached these conclusions. As he writes, the central premise underlying the study is that climate models that are going to be the most skilful in their projections of future warming should also be the most skilful in other contexts like simulating the recent past. It thus falls within the “emergent constraint” paradigm. Personally, I’m doubtful that emergent constraint approaches generally tell one much about the relationship to the real world of aspects of model behaviour other than those which are closely related to the comparison with observations. However, they are quite widely used.

In BC17’s case, the simulated aspects of the recent past (the “predictor variables”) involve spatial fields of top-of-the-atmosphere (TOA) radiative fluxes. As the authors state, these fluxes reflect fundamental characteristics of the climate system and have been well measured by satellite instrumentation in the recent past – although (multi) decadal internal variability in them could be a confounding factor. BC17 derive a relationship in current generation (CMIP5) global climate models between predictors consisting of three basic aspects of each of these simulated fluxes in the recent past, and simulated increases in global mean surface temperature (GMST) under IPCC scenarios (ΔT). Those relationships are then applied to the observed values of the predictor variables to derive an observationally-constrained prediction of future warming.[2]
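Stripped to its essentials, this prediction step can be sketched as a toy across-model regression. To be clear, this is my simplification, not BC17's actual method (which works with full spatial predictor fields rather than a single scalar), and every number below, including the "observed" predictor value, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for CMIP5 models: each model has one scalar "predictor"
# (e.g. a feature of its simulated TOA flux climatology) and a projected
# warming dT. The numbers are invented for illustration only.
n_models = 36
predictor = rng.normal(0.0, 1.0, n_models)
dT = 4.0 + 0.6 * predictor + rng.normal(0.0, 0.3, n_models)

# Across-model linear fit: dT ~ a + b * predictor
b, a = np.polyfit(predictor, dT, 1)

# "Observationally informed" projection: evaluate the fit at the
# observed value of the predictor (here an assumed value of 0.8).
observed_predictor = 0.8
constrained_dT = a + b * observed_predictor

# The spread of the constrained projection reflects the residual
# scatter about the fit, which is narrower than the raw across-model
# spread whenever the relationship has skill.
residual_sd = np.std(dT - (a + b * predictor))
raw_sd = np.std(dT)
print(round(constrained_dT, 2), residual_sd < raw_sd)
```

The higher the signal-to-noise ratio of the across-model relationship, the larger the reduction from `raw_sd` to `residual_sd` – which is the sense in which a strong relationship yields a "narrower range" of future warming.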

The paper is well written, the method used is clearly explained in some detail, and the authors have archived both pre-processed data and their code.[3] On the face of it, this is an exemplary study, and given its potential relevance to the extent of future global warming I can see why Nature decided to publish it. I am writing an article commenting on it for two reasons. First, because I think BC17’s conclusions are wrong. And secondly, to help bring to the attention of more people the statistical methodology that BC17 employed, which is not widely used in climate science.

US East Coast Sea Level Rise: An Adjustocene Hockey Stick

In 2011, Andy Revkin wrote an article (archive) entitled “Straight Talk on Rising Seas in a Warming World” (among other articles on the topic), in which he optimistically sought guidance on the topic from a then recent study of U.S. East Coast sea level coauthored by Mann (Kemp et al, 2011).  Joshua Willis told Revkin “that, using patterns in layered salt marsh sediment, [they] found a sharp recent uptick in the rate of sea-level rise after 2,000 years of fairly stable conditions — a pattern Willis refers to as a “sea-level hockey stick” — an allusion to the suite of studies finding a similar pattern for global surface temperatures (albeit a hockey stick with a warped shaft)”.

However, as so often, the supposed “hockey stick” appeared only after the data had been severely adjusted. The difference is shown in the figure at right. Unadjusted (raw) relative sea level (i.e. how sea level appears locally – the concern of state planners and policy-makers) in North Carolina increased steadily through the last two millennia, with somewhat of an upward inflection in the 19th century; it is only after heavy adjustment that an HS shape appears.

In this case, the relevant data for local and regional planners is the data prior to adjustment by climate warriors, as I’ll discuss below: this is not a hockey stick but an ongoing increase through the Holocene.


New Antarctic Temperature Reconstruction

Stenni et al (2017), Antarctic climate variability on regional and continental scales over the last 2000 years, was published this week by Climate of the Past (pdf). It includes multiple variations of a new Antarctic temperature reconstruction, in which 112 d18O and dD isotope series are combined into regional and continental reconstructions. Its abstract warns that “projected warming of the Antarctic continent during the 21st century may soon see significant and unusual warming develop across other parts of the Antarctic continent [besides the peninsula]”, but there are no Steigian red spots of supposedly unprecedented warming.

Long-time CA readers will be aware of my long-standing interest in Antarctic ice core proxies, in particular the highly resolved Law Dome d18O series. One of my first appearances in the Climategate emails was a request for Law Dome data to Tas van Ommen in Australia, who immediately notified Phil Jones in Sauron’s Tower of this disturbance in the equilibrium of Middle-earth. Jones promptly consulted the fiercest of his orcs, who urged that the data be withheld as follows: “HI Phil, Personally, I wouldn’t send him [McIntyre] anything. I have no idea what he’s up to, but you can be sure it falls into the ‘no good’ category.” I’ve discussed incidents involving Law Dome data on several occasions in the past. This is what the data looked like as of 2004: elevated values in the early first millennium, declining up to and including the 20th century.


Law Dome – Holocene Perspective

Recently, I’ve commented on many occasions on the benefits of looking at proxy data in a Holocene (10,000-year) context rather than just the last 2000 years. A longer perspective permits one to see Milankovitch factors at work, and this is true for Law Dome d18O as well. Although Law Dome d18O analyses were carried out nearly 20 years ago, results have been archived only for the deglacial period (~20,000–9,000 BP) and for the last 2000 years – shown in the graphic below. The inset shows (unarchived) Law Dome dD values over the Holocene, available only in a panel in a 2000 survey of Antarctic cores (Masson et al 2000). Though the data is frustratingly (and pointlessly) incomplete, the story is clear: d18O values were very low in the Last Glacial Maximum, then increased fairly steadily for 10,000 years, reaching a maximum ~9,000–10,000 BP (in the early Holocene), then declined over the past 9000 years. Modern values are neither as high as in the early Holocene nor as low as in the Last Glacial Maximum. Variation over the past two millennia is relatively modest.

Accumulation during the Holocene is more than four times greater than in the glacial period. The elevation of Law Dome has decreased over the Holocene – an important factor which needs to be accounted for in temperature estimation. Vinther et al 2008 made a really excellent effort at disentangling elevation changes in Greenland d18O data, but no one seems to have made a corresponding effort in Antarctica (including Stenni et al 2017).

Stenni et al 2017 Reconstruction

Stenni et al 2017 calculated a variety of composites from the 112 series considered in their reconstruction, featuring reconstructions weighted by positive correlation to “target” temperature series (which had strong increases in West Antarctica and weak increases in East Antarctica), with negatively correlated isotope series screened out (given zero weight). This is disclosed in their SI.

The problem with this recipe is that, when the target has an upward trend (as the key target instrumental series do), the methodology enhances the blade-ness of the resulting composite. The blade bias arises because the series are intrinsically very noisy – series with a sufficiently “big” blade are left in, while series which go down are left out. The defective procedure is made worse when there are a lot of short series, as here. At least this methodology doesn’t turn series upside down (Manng-nam style).
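The blade bias from correlation screening is easy to demonstrate with synthetic data. The sketch below is my own, not Stenni et al's code: 112 pure-noise "proxies" are screened by positive correlation with a rising "instrumental" target over a late calibration window, and the screened composite acquires an upward late-period slope even though no input series contains any signal at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# 112 pure-noise "proxy" series, 200 "years" long: no climate signal at all.
n_series, n_years = 112, 200
proxies = rng.normal(0.0, 1.0, (n_series, n_years))

# A rising instrumental "target" over the final 50 years.
calib = slice(n_years - 50, n_years)
target = np.linspace(0.0, 1.0, 50)

# The screening/weighting recipe: keep only series positively correlated
# with the target over the calibration window (all others get weight 0).
corrs = np.array([np.corrcoef(p[calib], target)[0, 1] for p in proxies])
kept = proxies[corrs > 0]
composite = kept.mean(axis=0)

# The screened composite acquires a spurious upward slope in the
# calibration window, while the pre-calibration portion stays flat:
# a manufactured "blade" from trendless noise.
slope_calib = np.polyfit(np.arange(50), composite[calib], 1)[0]
slope_pre = np.polyfit(np.arange(calib.start), composite[:calib.start], 1)[0]
print(len(kept), round(slope_calib, 4), round(slope_pre, 4))
```

Roughly half the noise series survive the screen, and the composite's calibration-window slope is sharply positive while its earlier slope is near zero – which is the sense in which the recipe "manufactures" a blade.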

Stenni et al 2017 are somewhat evasive about their results, and their graphics contribute to the evasion. I’ve re-plotted their Antarctic continent reconstruction (decadal version) from archived data in the figure below. Like the Law Dome series, the composite shows elevated values in the first millennium, declining through the last millennium, with the decline continuing well into the 20th century. Values in 1950 and 1960 were among the coldest in the past two millennia, with a very late uptick (1980–2000). Stenni et al show this series as the dashed orange series in their Figure 8, which has negligible vertical resolution (see inset below). The very modest blade at the end of this series is almost certainly exaggerated by the defective screening and weighting procedures noted above. But even with their fingers on the scales (so to speak), the main message of the series is that values in the first millennium are consistently elevated above modern values.

Their main reconstruction graphic (their Figure 7) is, if anything, much worse than the panel shown in the above inset, as shown below. It too shows elevated first millennium values, though you’d barely know it from looking at the figure. Its 10:1 horizontal-to-vertical panel size disguises rather than highlights the difference between the first millennium and modern values.


By now, we’re all familiar with the fevered prose of abstracts when climate reconstructions supposedly show “unprecedented” modern results. Needless to say, Stenni et al does not contain colorful and excited descriptions of high first millennium values. The lede to their abstract is relentlessly flat:

“Climate trends in the Antarctic region remain poorly characterized, owing to the brevity and scarcity of direct climate observations and the large magnitude of interannual to decadal-scale climate variability. Here, within the framework of the PAGES Antarctica2k working group, we build an enlarged database of ice core water stable isotope records from Antarctica, consisting of 112 records.”

Continuing the abstract, they report “a significant cooling trend” to 1900 CE, followed by “significant warming trends” after 1900 CE in three regions which are “robust” to something or other and which are “significant” in the weighted reconstructions.

Our new reconstructions confirm a significant cooling trend from 0 to 1900 CE across all Antarctic regions where records extend back into the 1st millennium, with the exception of the Wilkes Land coast and Weddell Sea coast regions. Since 1900 CE, significant warming trends are identified for the West Antarctic Ice Sheet, the Dronning Maud Land coast and the Antarctic Peninsula regions, and these trends are robust across the distribution of records that contribute to the unweighted isotopic composites and also significant in the weighted temperature reconstructions.

This is a pretty outrageous spin, given that the continental Antarctic reconstruction continues the downward trend to 1950-60 – despite the use of a defective method which will enhance even the most meager blade. Despite these adverse results, they close with the obligatory warning of “significant and unusual warming” – none of which is evident in their data.

However, projected warming of the Antarctic continent during the 21st century may soon see significant and unusual warming develop across other parts of the Antarctic continent.

Discussion

As noted above, Law Dome has been a long-standing issue at Climate Audit.

It astonishes me that there is no technical journal article on Law Dome d18O data, either for the Holocene or for the past 2000 years. According to my earliest correspondence with him (2004), van Ommen planned to publish the data. It’s disquieting that the longer Holocene data for such an important site remains unpublished.

The characterization of Antarctic ice cores in the 2006 NAS report (discussed at CA here, especially at the press conference) was integral to their attempt to distinguish past warming from modern warming:

This [additional] evidence [of the unique nature of recent warmth in the context of the last one or two millennia] includes …the fact that ice cores from both Greenland and coastal Antarctica show evidence of 20th century warming (whereas only Greenland shows warming during medieval times).

However, this assertion in respect of Antarctica was not supported by their data or analysis. I tried unsuccessfully at the time to obtain a source. The Law Dome series, which was in circulation at the time, showed the opposite: warmth in the late first and very early second millennia, and no evidence of 20th century warming.

Drafts of IPCC AR4 showed a panel diagram of Southern Hemisphere proxies, but conspicuously omitted the Law Dome series. As an AR4 reviewer, I asked that it be included in the diagram (knowing, of course, that it showed a result opposite to what they were claiming). The IPCC AR4 lead authors knew this as well and refused to show it in their diagram, concocting a ludicrous excuse. There was a revealing discussion in the Climategate emails (discussed at CA here).

The Law Dome proxy series was important in the Gergis reconstruction as well. It met the ex ante criteria for inclusion in her reconstruction. It was one of only three Gergis proxies with values in the Medieval period; had it been included in the network, medieval values would have been raised significantly. Rather than let this happen, Gergis concocted ex post screening criteria which excluded Law Dome from her network – see CA discussion here.


Reconciling Model-Observation Reconciliations

Two very different representations of consistency between models and observations are popularly circulated. On the one hand, John Christy and Roy Spencer have frequently shown a graphic which purports to show a marked discrepancy between models and observations in the tropical mid-troposphere, while, on the other hand, Zeke Hausfather, among others, has shown graphics which purport to show no discrepancy whatever between models and observations. I’ve commented on this topic on a number of occasions over the years, including two posts discussing AR5 graphics (here, here), with an update comparison in 2016 (here) and in 2017 (tweet).

There are several moving parts in such comparisons: troposphere or surface, tropical or global. The choice of reference period affects the rhetorical impression of time series plots. Boxplot comparisons of trends avoid this problem. I’ve presented such boxplots in the past and have updated them for today’s post.
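The point about reference periods can be made concrete with a small sketch. The ensemble below is synthetic (invented trend and noise values, loosely echoing the figures quoted later in the post), but it shows why an OLS trend comparison is immune to the choice of reference period: adding any constant offset to a series leaves its slope unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1979, 2018)

def make_series(trend_per_decade_true):
    # Linear trend plus white-noise "internal variability" (toy model).
    return trend_per_decade_true / 10.0 * (years - years[0]) \
        + rng.normal(0.0, 0.12, years.size)

# 102 synthetic "model runs" trending at ~0.28 deg C/decade, and one
# synthetic "observed" series at 0.13 deg C/decade -- invented numbers.
runs = [make_series(rng.normal(0.28, 0.05)) for _ in range(102)]
obs = make_series(0.13)

def trend_per_decade(y):
    # OLS slope in deg C/decade. Adding any constant to y leaves the
    # slope unchanged, so this comparison is independent of the choice
    # of reference period.
    return np.polyfit(years, y, 1)[0] * 10.0

model_trends = np.array([trend_per_decade(r) for r in runs])
obs_trend = trend_per_decade(obs)

# Rank the observed trend within the model-run distribution.
n_below = int((model_trends < obs_trend).sum())
print(round(obs_trend, 2), n_below, "of", len(model_trends))
```

A boxplot of `model_trends` with `obs_trend` marked on it is exactly the kind of comparison presented below; the observed trend falls near or below the bottom of the model distribution.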

I’ll also comment on another issue. Cowtan and Way argued several years ago that much of the apparent discrepancy in trends at the surface arose because the most common temperature series (HadCRUT4, GISS, etc.) splice air temperature over land with sea surface temperatures. This is only a problem because, within CMIP5 models, trends in air temperature (TAS) over the ocean diverge from trends in sea surface temperature (TOS). They proposed that the relevant comparandum for HadCRUT4 ought to be a splice as well: of TOS over ocean areas and TAS over land. When this was done, the discrepancy between HadCRUT4 and CMIP5 models was apparently resolved.

While their comparison was well worth doing, there was an equally logical approach which they either didn’t consider or didn’t report: splicing observations rather than models. There is an independent and long-standing dataset for night marine air temperatures (ICOADS). Combining this data with surface air temperature over land would avoid the problem identified by Cowtan and Way. Further, NMAT data is relied upon to correct/adjust inhomogeneity in SST series arising from changes in observation techniques, e.g. Karl et al 2015:

previous version of ERSST assumed that no ship corrections were necessary after this time, but recently improved metadata (18) reveal that some ships continued to take bucket observations even up to the present day. Therefore, one of the improvements to ERSST version 4 is extending the ship-bias correction to the present, based on information derived from comparisons with night marine air temperatures.

Thus, there seem to be multiple reasons to look just as closely at a comparison resulting from this approach as at one from splicing model data, as proposed by Cowtan and Way. I’ll show the resulting comparisons without prejudging.

Troposphere

Spencer and Christy’s comparisons are for satellite data (lower troposphere). They typically show the tropical troposphere, for which the discrepancy is somewhat larger than for the global (GLB) troposphere (shown below). The median value from models is 0.28 deg C/decade, slightly more than double the observed trends in UAH (0.13 deg C/decade) or RSS version 3.3 (0.14 deg C/decade). RSS recently adjusted their methodology, resulting in a 37% increase in trend (now 0.19 deg C/decade). The UAH and RSS3.3 trends are below all but one model-run combination. Even the adjusted RSS4 trend is less than all but two (of 102) model-run combinations.

The obvious visual differences in this diagram illustrate the statistically significant difference between models and observations. Many climate scientists, e.g. Gavin Schmidt, are deniers of mainstream statistics and argue that there is no statistically significant difference between models and observations. (See CA discussion here.)

CMIP5 and HadCRUT4

IPCC AR5 compared CMIP5 projections of air temperature (TAS) to HadCRUT4 and corresponding surface temperature indices (all obtained by a weighted average of air temperatures over land and SST over ocean). In this case, the discrepancy is not as marked, but is still significant. The median model trend was 0.241 deg C/decade (less than for the troposphere), while the HadCRUT4 trend was 0.181 deg C/decade (Berkeley: 0.163). Berkeley was lower than all but six runs, HadCRUT4 lower than all but ten. Both were outside the range of the major models. As noted above, the basis of this comparison was criticized by Cowtan and Way, with the criticism re-iterated by Hausfather.

Cowtan and Way Variation

As noted above, Cowtan and Way (followed by Hausfather) combined CMIP5 model TAS over land with TOS over ocean for their comparison to HadCRUT4 and similar temperature data. This had the effect of lowering the median model trend from 0.241 deg C/decade to 0.189 deg C/decade, indicating a reconciliation with observations (0.181 deg C/decade for HadCRUT4) for surface temperatures (though not for tropospheric temperatures, which they didn’t discuss).

ICOADS NMAT and “MATCRU”

The ICOADS air temperature series is closely related to the SST series. There is certainly no facial discrepancy which disqualifies one versus the other as a valid index. There are major and obvious differences in trends between the ocean series and the land series. The difference is larger than in models, but models do project an increasing difference over the next century.

One wonders why the standard indices (HadCRUT4) combine the unlike series for SST and land air temperature rather than combining two air temperature series.  As an experiment, I constructed “MATCRU” as a weighted average (by area) of ICOADS and CRUTEM.  Rather than the consistency reported by Cowtan-Way and Hausfather, this showed a dramatic inconsistency – not unlike the inconsistency in tropospheric series prior to the recent bodge of RSS data.
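The "MATCRU" construction is just an area-weighted average of the two air-temperature series. A minimal sketch with invented anomaly values follows; the 0.71/0.29 ocean/land weights approximate the global ocean fraction, whereas the actual construction would weight by each dataset's grid-cell areas and coverage.

```python
import numpy as np

# Toy annual anomaly series (deg C): a land air-temperature series
# ("CRUTEM"-like) and a marine air-temperature series ("ICOADS"-like).
# The values are invented for illustration.
land = np.array([0.20, 0.30, 0.50, 0.60])
marine = np.array([0.10, 0.15, 0.20, 0.25])

# Area weights: the ocean covers roughly 71% of the Earth's surface.
w_land, w_ocean = 0.29, 0.71

# Weighted combination of two *air* temperature series, rather than
# the HadCRUT4-style splice of land air temperature with SST.
matcru = w_land * land + w_ocean * marine
print(matcru)
```

Because both inputs are air temperatures, this combination sidesteps the TAS/TOS divergence issue raised by Cowtan and Way, at the cost of inheriting whatever inhomogeneities the marine air temperature record contains.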


Conclusion

What does this all mean? Are models consistent with observations or not? Up to the recent very large El Nino, it seemed that even climate scientists were on the verge of conceding that models were running too hot, but the El Nino has given them a reprieve. After the very large 1998 El Nino, there was about 15 years of apparent “pause”. Will there be a similar pattern after the very large 2015–16 El Nino?

When one looks closely at the patterns as patterns, rather than to prove an argument, there are interesting inconsistencies between models and observations that do not necessarily show that the models are WRONG!!!, but neither are they very satisfying in proving that the models are RIGHT!!!!

  • According to models, tropospheric trends should be greater than surface trends. This is true over ocean, but not over land. Does this indicate that the surface series over land may have baked in non-climatic factors, as commonly argued by “skeptics”, such that the increase, while real, is exaggerated?
  • According to models, marine air temperature trends should be greater than SST trends, but the opposite is the case. Does this indicate that SST series may have baked in some non-climatic factors, such that the increase, while real, is exaggerated?

From a policy perspective, I’m not convinced that any of these issues – though much beloved by climate warriors and climate skeptics – matter much to policy.  Whenever I hear that 2016 (or 2017) is the warmest year EVER, I can’t help but recall that human civilization is flourishing as never before. So we’ve taken these “blows” and not only survived, but prospered. Even the occasional weather disaster has not changed this trajectory.


Part 2 – The TV5 Monde Hack and APT28

In his attribution of the DNC hack, Dmitri Alperovitch, of Crowdstrike and the Atlantic Council, linked APT28 (Fancy Bear) to previous hacks of TV5 Monde in France and of the Bundestag in Germany:

FANCY BEAR (also known as Sofacy or APT 28) is a separate Russian-based threat actor, which has been active since mid 2000s … FANCY BEAR has also been linked publicly to intrusions into the German Bundestag and France’s TV5 Monde TV station in April 2015.

Alperovitch’s identification of these two incidents ought to make them of particular interest for re-examination (CA readers will recall that the mention of Peter Gleick in the forged Heartland memo proved important). In each case – the TV5 Monde and Bundestag hacks, as well as the DNC hack – attribution resulted in a serious deterioration of relations between Russia and the impacted nation, arguably the major result of each incident.

In today’s post, I’ll re-visit the TV5 Monde hack, which took place in April 2015, almost exactly contemporaneous with the root9B article discussed in Part 1. There proved to be a very interesting backstory.

From Nigerian Scams to DNC Hack Attribution – Part 1

In Crowdstrike’s original announcement that “Russia” had hacked the DNC, Dmitri Alperovitch said, on the one hand, that the “tradecraft” of the hackers was “superb” and their “operational security second to none” and, on the other hand, that Crowdstrike had “immediately identified” the “sophisticated adversaries”. In contrast, after three years of investigating Climategate, UK counter-intelligence had been unable to pin down even whether the hacker was a lone motivated individual or an organized foreign intelligence service. Mr FOIA of Climategate subsequently emailed several bloggers, including myself, stating that he was a lone individual outside the UK who was a keen reader of Climate Audit and WUWT – a claim that I accept and which is consistent with my own prior interpretation of the Climategate data and metadata.

I draw the contrast to highlight the facial absurdity of Crowdstrike’s claim that the tradecraft of the DNC hackers was “superb” – how could it be “superb” if Crowdstrike was able to attribute them immediately?

In fact, when one looks more deeply into the issue, it would be more accurate to say that the clues left by the DNC hackers to their “Russian” identity were so obvious as to qualify for inclusion in the rogues’ gallery of America’s Dumbest Criminals, alongside the bank robber who signed his own name to the robbery demand.

To make matters even more puzzling, an identically stupid and equally provocative hack, using an identical piece of software, had been carried out against the German Bundestag in 2015.  A further common theme to the incidents is that both resulted in a dramatic deterioration of relations with Russia – between Germany and Russia in 2015 and USA and Russia in 2016-2017. Perhaps it’s time to ask “Cui bono?” and re-examine the supposedly “superb tradecraft”. I’ll begin today’s story, perhaps appropriately, with a Nigerian phishing scam.

Guccifer 2: From January to May, 2016

Within the small community conducting technical analysis of the DNC hack, there has been ongoing controversy over whether Guccifer 2 (G2) was a false flag for the Russians, whether G2 was located in the US rather than Russia, whether the G2 files were copied locally rather than hacked, and whether G2 was a false flag for the DNC (and didn’t hack any documents at all).

In today’s post, I’ll try to shed a little light on the puzzle by presenting a case that metadata from G2’s cf.7z dossier shows that, between at least January 7, 2016 and May 4, 2016, Guccifer 2 copied numerous documents (primarily from the Democratic Party of Virginia – DPVA) within a few minutes of the documents being saved. This strongly suggests to me that Guccifer 2 was a genuine hacker who had indeed installed malware on a Democrat computer, which was then used to automatically exfiltrate documents.
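The kind of test involved can be sketched as follows. The timestamps below are invented, not taken from cf.7z; the sketch only shows the shape of the argument: short, consistent save-to-copy lags spread over months look like automated exfiltration rather than a single manual copy session.

```python
from datetime import datetime, timedelta

# Hypothetical (saved, copied) timestamp pairs of the kind one can read
# out of archive metadata: a document's internal last-saved timestamp
# versus the file-modified timestamp recorded when it was copied.
pairs = [
    ("2016-01-07 14:02:10", "2016-01-07 14:05:33"),
    ("2016-03-19 09:41:00", "2016-03-19 09:43:12"),
    ("2016-05-04 16:20:45", "2016-05-04 16:22:01"),
]

fmt = "%Y-%m-%d %H:%M:%S"
lags = [
    datetime.strptime(copied, fmt) - datetime.strptime(saved, fmt)
    for saved, copied in pairs
]

# Short, consistent save-to-copy lags across a span of months are the
# signature of automated copying rather than a one-time manual grab.
automated_like = all(timedelta(0) < lag < timedelta(minutes=10) for lag in lags)
print(automated_like)
```

A one-time manual copy would instead produce lags ranging from minutes to months, clustered around a single copying date.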

Unlike the ngpvan.7z previously analysed by Forensicator, the copying structure of cf.7z is formidably complex, with evidence of both Unix-type and Windows-type copying, possibly in multiple stages.

Guccifer 2 and “Russian” Metadata

The DHS-FBI intel assessment of the DNC hack concluded with “high confidence” that Guccifer 2 was a Russian operation, but provided (literally) zero evidence in support of their attribution. Ever since Guccifer 2’s surprise appearance on June 15, 2016 (one day after Crowdstrike’s announcement of the DNC hack by “Russia”), there has been a widespread consensus that Guccifer 2 was a Russian deception operation, with only a few skeptics (e.g. Jeffrey Carr, questioning the evidence but not necessarily the conclusion; Adam Carter, challenging the attribution).

Perhaps the most prevalent argument in the attribution has been the presence of “Russian” metadata in documents included in Guccifer 2’s original post – the theory being that the “Russian” metadata was left by mistake. I’ve looked at lots of metadata, both in connection with Climategate and more recently in connection with the DNC hack, and, in my opinion, the chances of this metadata having been left by mistake are zero. Precisely what it means is a big puzzle, though.
