February 10, 2020 JV

Mercury Rising?

In the 1998 film Mercury Rising, Art Jeffries (Bruce Willis), an undercover FBI agent, risks his life to protect Simon Lynch, a young autistic savant with a gift for deciphering patterns in numbers.  The nine-year-old is targeted by government assassins after he cracks a top-secret government code.  If he’s still around, perhaps Simon could decipher the USHCN temperature data set.

In 2010, Fall et al. concluded their Surface Stations Project, surveying over 80% of the U.S. Historical Climatology Network (USHCN) stations.

Each station was rated using criteria based on its exposure conditions.  The rating system was the one NOAA employed to develop the U.S. Climate Reference Network.  The pie chart above summarises the survey results: about 70% of all stations are rated Class 4 or 5, measuring temperatures with uncertainties of 2 °C or more.

Ross McKitrick, a Canadian professor of economics, and his co-authors have conducted significant research and published several peer-reviewed articles on the integrity of the temperature data managed by the Global Historical Climatology Network (GHCN).  Three main global temperature histories draw on the GHCN: the combined CRU-Hadley record (HADCRU), the NASA-GISS record (GISTEMP), and the NOAA record.  CRU has stated that about 98 percent of its input data come from the GHCN.  GISS also relies on the GHCN, with some additional US data from the USHCN network and some additional Antarctic data sources.  NOAA relies entirely on the GHCN network.  Because of this reliance, any quality deficiencies in the GHCN affect the integrity of all three analyses.

One of their earlier reports (McKitrick and Michaels, 2007) summarises numerous sampling problems with the GHCN, including:

  • After 1990 and again after 2005 there were massive drops in the number of GHCN sampling locations around the world. The GHCN sample for 2010 is smaller than it was for 1919.
  • The sample has become heavily skewed towards airports (rising from 40% to 50% of station locations in the US network over the past two decades).
  • Alongside the loss of sampling locations, there have been similar sharp, step-like reductions in higher-altitude and higher-latitude stations.  Stations have also moved inland, away from coasts.

Solid line: Mean temperature records only; dashed line: Max and Min temperature records.

McKitrick notes that “the sample collapse in 1990 is clearly visible as a drop not only in numbers but also in altitude. … Since low-altitude sites tend to be more influenced by agriculture, urbanization and other land surface modification, the failure to maintain consistent altitude of the sample detracts from its statistical continuity.”

  • Homogeneity adjustments intended to fix discontinuities due to equipment change or station relocations have had the effect of increasing the trend, at least up to 1990. After 1990 the homogeneity adjustments became large and seemingly chaotic with year-by-year volatility dwarfing the magnitude of the decadal global warming signal.
  • This points to a severe deterioration in the continuity and quality of the underlying temperature data.

The following graph shows the result of all adjustments made to historical temperature records.

There are two notable features of this graph.  The first is that the adjustments trend upward. They are mainly negative prior to 1980 and positive most of the time thereafter, effectively “cooling” the early part of the record and “warming” the later record. In other words, a portion of the warming trend shown in global records derived from the GHCN-adj archive results from the adjustments, not from the underlying data.

The second is the increase in the volatility of the adjustments after 1990. The instability in the record dwarfs the size of the decadal global warming signal, with fluctuations routinely going to ±0.1 degrees C from one year to the next.
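As a toy illustration of the first point (an adjustment series drifting from negative to positive adds to the apparent trend), the following sketch uses invented numbers, not actual GHCN values:

```python
# Toy illustration (invented numbers, not actual GHCN data): a flat
# "raw" anomaly series plus adjustments that drift from negative to
# positive acquires a warming trend that comes entirely from the
# adjustments.

def slope_per_decade(years, values):
    """Ordinary least-squares slope, scaled to degrees per decade."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return 10 * num / den

years = list(range(1900, 2011))
raw = [0.0 for _ in years]                             # flat underlying record
adj = [-0.2 + 0.4 * (y - 1900) / 110 for y in years]   # -0.2 C drifting to +0.2 C
adjusted = [r + a for r, a in zip(raw, adj)]

print(round(slope_per_decade(years, raw), 3))       # 0.0: no real trend
print(round(slope_per_decade(years, adjusted), 3))  # 0.036 C/decade, all from adjustments
```

A drift of only 0.4 °C spread over a century is enough to manufacture a trend comparable in size to the reported decadal warming signal.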

McKitrick concludes that “after 1990 the GHCN archive became very problematic as a basis for computing precise global average temperatures over land and comparing them to earlier decades.”  Recall that in 1990 many urban and high-altitude station sites disappeared and that many, if not most, of the remaining sites are now located at airports.

Buoy, Oh Buoy!

Developing a consistent set of surface temperature figures, free of measurement error and bias, over the past century is important for land, but far more so for the sea (which covers approximately 70% of the earth’s surface).  Unfortunately, sea surface temperature (SST) measurement is plagued by its own set of issues (more on this later from McKitrick).  SST has been measured by ships (using buckets) since the 1800s, by buoys since the 1900s, and by ships (via engine inlet water temperature sensors) since about 1950, all now managed by the US National Oceanic and Atmospheric Administration.  Since the 1970s, satellites have been measuring radiative emissions from oxygen in the atmosphere and converting them to temperature using a variety of algorithms.  These have their own issues, including cloud cover, atmospheric dust, aerosols and calibration.

All historical SST products are derived from ICOADS, the International Comprehensive Ocean-Atmosphere Data Set (http://icoads.noaa.gov/), or one of its predecessors.  McKitrick comments that “ICOADS draws upon a massive collection of input data, but it should be noted that there are serious problems arising from changes in spatial coverage, observational instruments and measurement times, ship size and speed, and so forth. ICOADS is, in effect, a very large collection of problematic data.”  For example:

  • How can measurements from a British man-of-war in 1820 be compared with those from a Japanese fishing vessel in 1920, or a U.S. Navy vessel in 1950?
  • What kind of buckets were used?  How much were they warmed by sunshine or cooled by evaporation while being sampled, and how often (if ever) were the thermometers and sensors calibrated?  For example, a canvas bucket left on a deck for three minutes under typical weather conditions can cool by 0.5 degrees Celsius more than a wooden bucket measured under the same conditions.
  • Are vessel engine inlet water temperature sensors subject to routine calibration checks?

McKitrick continues: “Up to the 1930s, global coverage was limited to shipping areas.  This meant that most locations in the Southern Pacific region…had fewer than 99 observations per decade, with many areas completely blank.  Up until recently the conventional thinking on the sources of SST data from ships was that there was an ‘abrupt transition from the use of uninsulated or partially insulated buckets to the use of engine inlets’ in December 1941, coinciding with the entry of the US into WWII (Folland and Parker 1995).  Consequently, the UK Hadley Centre adjusts the SST record using the Folland-Parker estimated factor, which ends abruptly in 1941, on the assumption that use of bucket-measurements also ended at that point.”

More recently, WMO ship metadata has been used to provide a clearer picture of the measurement procedures associated with ICOADS records.  The following graph, taken from Kent et al (2007), shows the changing mix of techniques used in shipping data since 1970.  It is immediately apparent that the Folland-Parker analysis was incorrect to assume that bucket measurements gave way to engine-intake measurements ‘abruptly’ in 1941, since as of 1980 they still accounted for about half the known methods for ship-derived ICOADS data.

It is also apparent that from 1970 to 1990 there was a steady increase in the fraction coming from engine intakes.  Since engine-intake readings are biased warm relative to the actual temperature, assuming the engine-intake fraction was constant when it was actually growing introduces a possible source of upward bias in the trend over this interval.
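The mechanism can be sketched with a toy calculation; all numbers here are invented for illustration, and the assumed +0.3 °C intake bias is not a measured value:

```python
# Toy illustration (invented numbers): if engine-intake readings run
# warm by a fixed amount and their share of the sample grows, the
# blended average warms even when the true sea surface temperature
# is constant.

TRUE_SST = 15.0      # assumed constant true temperature, deg C
INTAKE_BIAS = 0.3    # assumed warm bias of engine-intake readings, deg C

def blended_mean(intake_fraction):
    """Average of bucket (unbiased) and engine-intake (warm-biased) readings."""
    return (1 - intake_fraction) * TRUE_SST + intake_fraction * (TRUE_SST + INTAKE_BIAS)

for year, frac in [(1970, 0.3), (1980, 0.5), (1990, 0.7)]:
    print(year, round(blended_mean(frac), 2))
# 1970 15.09, 1980 15.15, 1990 15.21: a spurious +0.12 C rise
```

Nothing about the ocean changed in this sketch; only the composition of the sample did, yet a warming trend appears in the blended record.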

The efforts of Kent et al. (2007) to digitize the shipping metadata had another payoff shortly thereafter.  In 2008 it was reported in Nature (Thompson et al. 2008), based on the Kent data, that a further problem with SST data had been noted at the 1945-46 transition.

As with the 1941 transition, it was an odd, abrupt step in the ICOADS data.  From 1940 to 1945, the fraction of data coming from US ships rose sharply to over 80%, but with the end of WWII there was a jump in UK data and a drop in US data, with UK contributions going from about 0% to about 50% of the total within one year.  At the same time the ICOADS average fell by about 0.5 degrees C.

The climatological community appears to be uncertain about what to make of this oceanic ‘blip’ and, possibly, a similar one in the land record.  In the Climategate archive, email 1254108338.txt, dated September 27, 2009, from Tom Wigley to Phil Jones and Ben Santer presents a debate on how best to eliminate the ‘blip.’

McKitrick concludes this section of his report with the comment: “as of last September [2009], large and arbitrary changes to global SST series were being debated in response to the Thompson et al. paper, either of which could fundamentally change the picture of mid-century warming and possibly create new discrepancies with climate models.”

Through a series of reports, McKitrick and his co-authors cast significant doubt on the integrity of both the LST and SST records, with the potential to call into question many of the UN IPCC’s recommendations.  So what did the lead authors do with McKitrick’s findings?  In a Climategate email between IPCC Lead Author Phil Jones and his colleague Michael Mann, dated July 8, 2004, Jones confided that he and co-author Kevin Trenberth were determined to keep McKitrick’s evidence out of the IPCC report.

“I can’t see either of these papers being in the next IPCC report.  Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is” wrote Jones.

Dr. Reto Ruedy of GISS also conceded in a Climategate e-mail that GISS had inflated its temperature data since 2000 on a questionable basis:

“NASA’s assumption that the adjustments made the older data consistent with future data . . . may not have been correct.”

