Ryan E. Day

Metrology

Metrology in the Hot Seat

Quality tools for measuring climate change?

Published: Friday, September 30, 2011 - 11:49

Now I don’t mean to brag, but I make a mean filet mignon... usually. The preparation always involves a good soaking in my secret marinade recipe (McCormick’s and red wine), then grilling on the BBQ turned up to its “ludicrous” setting. So why the occasional extra char in the char-broil?

Two reasons. One, because the tool I use to measure cooking time is none other than my own seat-o’-the-pants internal clock, which seems to have an intermittent calibration issue not covered by warranty; and two, I don’t use a thermometer at all.

If I were a professional chef, then a top-grade, instant-read, digital thermometer and the know-how to interpret the temperature readings would be imperative. As a weekend warrior, I should at least use a food thermometer and a decent cookbook. But I don’t. And who cares, other than my poor wife, who is then forced to forage for her evening’s nutrition?

This casual attitude does not work so well when applied to more serious metrological concerns, where temperature measurements and their interpretation carry far greater consequences. One such concern is climate change.

It’s obvious that the Earth has been through many drastic climatological changes throughout history (ice age/no ice age), and there’s no reason to believe that these cycles will ever end. Homo sapiens, however, are imbued with more curiosity than nine cats in a burlap sack, endless opinions, and an unexplainable compulsion to measure everything in the known universe. Measuring the Earth’s temperature is a natural target.

So what tools do we use to measure the Earth’s temperature, and more important, who takes the readings, who keeps the records, and who analyzes the data? Before we can form an informed opinion on the subject of climate change, we must vet the data first. Certainly no one would dispute that a forecast is only as good as the data behind it, and that any analysis depends entirely on which data are included.

This brief but documented synopsis will allow us to understand where the main body of climate change information comes from and how it is handled on its way to being disseminated to the general public. It will also allow us to compare the methodology behind climate change research and private-sector temperature metrology and see how they stack up next to each other.

Data-gathering tools on the ground

Stevenson screen
In 1890 the U.S. Congress established the U.S. Weather Bureau, which formed the Cooperative Observer Program (COOP). Scattered across the United States and manned by volunteers, some 5,200 COOP stations are set up to record and report surface weather conditions to the weather bureau. COOP stations employ a Stevenson screen to gather raw data (figures 1 and 2). The screen is a tidy box protecting a cluster of instruments such as a thermometer and barometer. The COOP program, utilizing Stevenson screens, is still a major source of information today.

Figure 1: Some COOP stations are set up like this.

Figure 2: Others operate in less-than-ideal locations that may skew overall trend charts, an issue that was studied by the National Climatic Data Center.

By 1997, the National Climatic Data Center (NCDC), part of the National Oceanic and Atmospheric Administration (NOAA), had some serious questions concerning the adequacy and deterioration of the COOP data-collection efforts. Some of these concerns were a result of its own studies, and some came, apparently, as a result of NCDC investigating complaints made by some climatologists regarding instrument placement and data gathering. The NCDC addresses some of those questions on its frequently-asked-questions (FAQ) page.

Automated weather stations
A joint effort by the National Weather Service (NWS), the Federal Aviation Administration (FAA), and the Department of Defense (DOD) began in 1991 and eventually developed into the Automated Surface Observation System (ASOS). These state-of-the-art weather stations automatically relay up-to-the-minute data around the clock, every day of the year, adding valuable surface-weather information to the COOP data set (figure 3). As of 2000, NOAA counted 860 commissioned ASOS stations.

Figure 3: An Automated Surface Observation System (image courtesy of NOAA)

Data-gathering underground

Paleoclimatology
For climate information predating modern records, which are only available for about the last 150 years, scientists turn to paleoclimatology. Using a variety of sources such as tree rings, ice and Earth-core samples, coral reefs, and fossils, experts are able to form a picture of Earth’s climate history as far back as 5 million years.

Some organizations graph and report on climate trends back to 1990, others back to 1880; still others go back 1,000, 2,000, even 1 million or 5 million years, depending on their particular viewpoint of what is relevant. That last bit is important. If you take the time to look at each of the links provided, you can see that the methodologies and data sets chosen do tend to align with the viewpoint of the author... or is it the other way around?

Late in 1999, several scientists working for the United Nations’ Intergovernmental Panel on Climate Change (IPCC) attempted to reconcile certain paleoclimatic anomalies as they prepared the panel’s report for the U.N.’s World Meteorological Organization (WMO). Their database was hacked, and internal e-mails revealed a struggle to manipulate data to correspond with the IPCC’s conclusions. The resulting “climategate” scandal left an indelible stain on the subject of temperature measurement reports.

Data-gathering tools in the sky

Satellites
The launch of NASA’s first weather satellite in 1960 gave scientists a whole new data set to play with. Between 1970 and 1976, NOAA’s logo graced five of what were deemed the improved satellites. NOAA has continued to improve its satellites’ capabilities to detect cloud, land, and ocean temperatures, as well as monitor the sun’s activities. Weather satellites are key tools in forecasting weather, analyzing climate, and monitoring weather hazards.

Figure 4: Temperature information via satellite (image courtesy of Gary Strand, National Center for Atmospheric Research)

Data storage

We’ve looked at some of the data-gathering methods. Some seem fairly robust; some have been called into question. But all that data-gathering isn’t worth anything unless the data are handled properly. The main players in the data-gathering and -storing arena seem to be the Goddard Institute for Space Studies (GISS), NOAA, and the University of East Anglia’s Climatic Research Unit (CRU) in Great Britain. NOAA boasts one of the longest-running data records anywhere with NCDC’s U.S. Historical Climatology Network (USHCN). The USHCN is a data set of monthly averaged maximum, minimum, and mean temperatures, and total monthly precipitation. As NCDC describes on its FAQ page, raw data are adjusted to account for biases before they are entered into the USHCN database. For its part, as can be seen on its updates page, GISS adjusts which data are included in its calculations to reflect the institution’s ongoing changes in what it considers valid information.
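
To make that adjustment step concrete, here is a minimal sketch in Python of the kind of documented bias correction NCDC describes, applied to a raw monthly mean-temperature series. The station history, offset value, and temperatures below are invented for illustration; this is not actual USHCN data or NCDC’s actual algorithm.

```python
# Hypothetical illustration only: applying a documented bias correction to a
# raw monthly mean-temperature series (values in degrees C, entirely made up).
raw_monthly_mean = {
    "1987-06": 21.4, "1987-07": 23.1, "1987-08": 22.8,
    "1988-06": 21.9, "1988-07": 23.6, "1988-08": 23.0,
}

# Suppose station metadata records an instrument change in January 1988 that
# introduced a +0.3 C warm bias relative to the earlier instrument.
ADJUSTMENT_START = "1988-01"    # YYYY-MM strings compare correctly
DOCUMENTED_OFFSET_C = -0.3      # correction applied to post-change readings

adjusted = {
    month: (temp + DOCUMENTED_OFFSET_C) if month >= ADJUSTMENT_START else temp
    for month, temp in raw_monthly_mean.items()
}

# Keep the raw series alongside the adjusted one so the correction is traceable.
for month in sorted(raw_monthly_mean):
    print(f"{month}: raw={raw_monthly_mean[month]:.1f}  adjusted={adjusted[month]:.1f}")
```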

It is here that many cynics begin to drool, and rightly so. Often, sometimes for reasons unexplained, the data have been adjusted. Sometimes the adjustments make sense, such as some of the adjustments explained on the NCDC FAQ page. At other times, not so much. Sometimes, as pointed out by meteorologists Joseph D’Aleo and Anthony Watts in their paper, “Surface Temperature Records: Policy-Driven Deception,” no explanation has been given for the data manipulation.

No problem; just grab the data and look at them yourself. In many cases that is doable, and I would encourage doing so. However, when Roger Pielke Jr., a science professor at the University of Colorado, pressed the CRU for raw surface temperature records, the university could not provide them, stating (paragraph 4), “Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e., quality controlled and homogenized) data....”

To some, this might seem a minor thing. But imagine if your customer wanted you to supply the raw data you used as part of your quality assurance testing, and you had to tell them that it was lost.

Climate models

As a good cookbook is to a cook, an accurate atmosphere-ocean general circulation model (AOGCM) is to those attempting to forecast climate change. AOGCMs are computer simulations built on parametrized input including, but not limited to, clouds, convection, carbon cycling, ice thickness, and ocean convection.

A model’s accuracy should be verifiable by simulating historical climate periods: input the raw data from a past time period, and see whether the model produces output that matches the known outcomes.
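
As a rough illustration of that hindcast check, the sketch below (Python, with placeholder numbers rather than real climate output) scores a model’s simulated anomalies against an observed record for the same past period. The skill metric shown, root-mean-square error, is just one common choice.

```python
import math

def hindcast_rmse(modeled, observed):
    """Root-mean-square error between modeled and observed values for the
    same past period (lower means the hindcast tracks history more closely)."""
    assert len(modeled) == len(observed)
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(modeled, observed)) / len(observed))

# Hypothetical annual temperature anomalies (degrees C) for a past decade.
observed = [0.12, 0.18, 0.09, 0.21, 0.25, 0.19, 0.28, 0.31, 0.27, 0.33]
modeled  = [0.10, 0.20, 0.12, 0.18, 0.27, 0.22, 0.25, 0.34, 0.30, 0.31]

print(f"Hindcast RMSE: {hindcast_rmse(modeled, observed):.3f} C")
# If the error is large relative to the signal being predicted, forward
# projections from the same model deserve extra scrutiny.
```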

By 2008 there were obvious discrepancies between model predictions of the relationship between surface and tropospheric temperatures and the measured data. Robert J. Allen and Steven C. Sherwood of the Department of Geology and Geophysics at Yale addressed these discrepancies and suggested changing measuring methods (i.e., measure wind instead of temperature) rather than revising the models. Commenting on the discrepancies, NASA, whose satellites provide tropospheric temperature data, suggested instead, “A computer model is only as reliable as the physics that are built into the program. The physics that are currently in these computer programs are still insufficient to have much confidence in the predicted magnitude of global warming...”

Climate research report card

So what does all of the above mean? Well, if you work in an industry that depends on accurate measurements, where your customers hold you accountable for the quality of the product they receive based on your own testing, then you might see some problems with how climate data are handled. If we are going to take global warming seriously, the data have to be treated seriously, and we have to have confidence in the scientists and institutions that are producing the data analyses.

Unfortunately, if we compare some of the methods employed in climate change research with those used in current private-sector metrology, the report card is nothing short of dismal.

To make such a comparison, let’s use environment chamber mapping (ECM) vs. climate change research (CCR) as examples. Unlike the Earth’s open atmosphere, chambers are meant to be controlled enclosures; however, the information required and the equipment used for data gathering are very similar. In any case, measurement processes share many quality control aspects regardless of industry.

Data acquisition
ECM uses multiple sensors; so does CCR. If, however, an ECM company removed half of the sensors midtest and substituted data from the remaining sensors to fill a predetermined placement grid, as happened in CCR (according to D’Aleo and Watts), that company wouldn’t be in business very long. At a minimum, wouldn’t we want to know exactly why that occurred, and what potential impact such actions would have on the accuracy of our system, whether it’s an ECM or a CCR?
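
To see why that matters, here is a toy Python sketch, with entirely made-up readings, comparing a grid average computed from all sensors against one computed after half the sensors are dropped and their cells are infilled from the nearest remaining sensor.

```python
# Toy example: a one-dimensional "grid" of eight sensor readings (degrees C,
# invented numbers), compared with the same grid after half the sensors are
# dropped and each empty cell is infilled from its nearest surviving sensor.
full_readings = [20.1, 20.4, 19.8, 21.0, 22.5, 22.9, 23.2, 23.6]

def grid_mean(values):
    return sum(values) / len(values)

kept_indices = [i for i in range(len(full_readings)) if i % 2 == 0]  # drop every other sensor
infilled = []
for i in range(len(full_readings)):
    nearest = min(kept_indices, key=lambda k: abs(k - i))
    infilled.append(full_readings[nearest])

print(f"Mean from all sensors:           {grid_mean(full_readings):.2f} C")
print(f"Mean after dropping + infilling: {grid_mean(infilled):.2f} C")
# Any difference between the two means is an artifact of the infilling choice,
# which is why such steps need to be documented and justified.
```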

Data storage
If an ECM company lost or destroyed the original data from a test, as happened in CCR, it would negate any further testing for that application. Again, I doubt clients would be too willing to give repeat business. It’s a fact of life that data sometimes do get lost or destroyed, but if you can’t reproduce a model, scenario, or data analysis because the data are no longer available, then those data must be left out of any larger calculations, perhaps set aside for study purposes only. It may be a setback in your analysis, but I don’t believe that, in all honesty, you should be making claims that can no longer be substantiated due to loss of data.
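
One lightweight safeguard against the lost-raw-data problem is to archive the original files alongside any adjusted products and record their checksums, so every downstream analysis can be traced back to an unaltered source. A minimal Python sketch, with hypothetical file paths:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_dir: str, manifest_name: str = "MANIFEST.sha256") -> None:
    """Record a checksum for every raw data file so adjusted products can
    always be traced back to an unaltered original."""
    data_path = Path(data_dir)
    lines = [f"{sha256_of(p)}  {p.name}" for p in sorted(data_path.glob("*.csv"))]
    (data_path / manifest_name).write_text("\n".join(lines) + "\n")

# Hypothetical usage, assuming raw station files live in ./raw_station_data/:
# write_manifest("raw_station_data")
```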

Modeling
Reliable modeling is a cornerstone of chamber development. Without it, many resources are unnecessarily wasted. If our theoretical ECM company suggested changing the metrics to explain why its model did not coincide with newly observed data, its reputation would be diminished, to say the least. If it were discovered that an ECM company’s employees were collaborating to manipulate model reports to make its model results match previously known data (such as happened in climategate), I venture to say they would be fired.

Conclusion

Although climate change and global warming reports seem endless and appear to come from every institution imaginable, they all share one thing in common: Large chunks of their information come from data that have already been adjusted by the National Oceanic and Atmospheric Administration’s NCDC.

The reports referred to most often in the news and online come from:
• National Oceanic and Atmospheric Administration (NOAA)
• National Climatic Data Center (NCDC)
• United States Historical Climatology Network (USHCN)
• Global Historical Climatology Network (GHCN)
• Intergovernmental Panel on Climate Change (IPCC)
• World Meteorological Organization (WMO)
• Hadley Centre (HadCRUT)
• Goddard Institute for Space Studies (GISTEMP)

It may appear as if there are numerous independent sources of information, but many of the data crumbs lead back to NOAA again and again:

The NCDC is listed as one of the services operated by NOAA.

The USHCN has cited its principal investigators as NOAA personnel.

The HadCRUT report cites NOAA, WMO, and GHCN among its station sources.

The GISTEMP report lists the GHCN and USHCN as its first two sources for station data.

Even the IPCC and WMO use NCDC data, among others.

In other words, when you hear about a report from one of these organizations, you are really hearing about the same adjusted data over and over again. In many cases the raw data no longer exist, and yet these institutions continue to publish reports based on adjusted data. This appears to present a consensus among scientists, but if that consensus is in fact based on data served up by the multiple arms of one organization, then that organization’s reliability becomes essential to the research. We have to trust the data, how they were collected, and how they were analyzed.

The ramifications of climate change are too far-reaching to receive the short shrift of faulty processes. As professor Pielke puts it, “... because climate change is important and because there are opponents to action that will seize upon whatever they can to make their arguments, does not justify overlooking or defending this degree of scientific sloppiness and ineptitude. Implementing successful climate policy will have to overcome the missteps of the climate science community...”

Considering NCDC’s failing grades in data control and process management, it and its affiliated institutions may have corrupted the data and research badly enough to render the last 20-year data set worthless. Its handling of the data has, in many respects, hampered scientists from producing any kind of reliable long-range climate model.

And that’s no way to go about measuring anything.


About The Author


Ryan E. Day

Ryan E. Day is Quality Digest’s project manager and senior editor for solution-based reporting, which brings together those seeking business improvement solutions and solution providers. Day has spent the last decade researching and interviewing top business leaders and continuous improvement experts at companies like Sakor, Ford, Merchandize Liquidators, Olympus, 3D Systems, Hexagon, Intertek, InfinityQS, Johnson Controls, FARO, and Eckel Industries. Most of his reporting is done with the help of his 20 lb tabby cat at his side.

Comments

Skeptical scientists look at the data...

I came across this day-old news by chance today.

The Berkeley-based group of AGW-skeptical scientists, the Berkeley Earth Surface Temperature Project, is releasing the results of their two-year study. The Wall Street Journal published a summary by the study's leader:

http://is.gd/mg2iEV

“When we began our study, we felt that skeptics had raised legitimate issues, and we didn’t know what we’d find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections.”

 

Learning from best practices

I am sure that we would all like to see cross-fertilization of best practices in science and quality, particularly given that most such fertilization has historically flowed from science to quality, but not the other way around. Mr. Day's cautions and recommendations should be well-received.

Unfortunately, the critique based on best practices is couched in a framework of unsupported accusations (e.g., that data were deleted) and verifiably false statements (e.g., the gross mischaracterization of the actions leading up to "climategate"). I read Quality Digest to learn best practices that I can apply in my work and daily life. This article held great promise for me as an application of metrology best practices to a challenging real-world problem, but I am left dissatisfied with the limited takeaway and misinformation.

I look forward to the next installment, with the hope that there will be a greater focus on what is currently practiced in the climate science community, the gap between current practice and best practices, and the challenge of implementing best practices with so much data coming into so many different organizations from so many different sources.

Not being an expert in metrology, I would also be interested in a similar comparison of current practice at the LHC and best practices, given, for instance, that data there are necessarily deleted, and given the recent dust-up over pre-publication results indicating faster-than-light travel times for neutrinos.

Missing The Point?

Interesting responses, to say the least. My personal takeaway: research conducted on controversial subjects inevitably brings out detractors who will attempt to discount the analysis and conclusions of that research so as to have others disregard it. I did not sense that the author was taking a personal stance on global warming. To me, he was using this subject to point out that, regardless of what you are studying, if you want to use your research to effect change, your research needs to be as valid and bullet-proof as possible. Otherwise, you open the door for those who would discount it.


The use of questionable data-gathering methods, intentional data manipulation, predetermining what will be considered "value-added" information, and the tailoring of conclusions: as quality professionals, aren't we supposed to bring these things to light when we encounter them? No matter how just the cause, poor methods are poor methods. Great article!

Where's the evidence?

Are there any alarmists here who are brave enough to attempt to answer the central question:

What is the evidence that man has caused any of the warming since the Little Ice Age?

The IPCC has none. Perhaps an alarmist can help out?

So now I get political opinions in my Quality Digest email?

Why are the editors of Quality Digest taking sides on this political hot potato issue? Day's article is clearly one-sided with respect to climate change and veers far off the subject of metrology. Comparing measurements in a controlled lab environment chamber to uncontrolled field weather station measurements is silly. Even data collected in highly controlled conditions can have problems, but field data is particularly problematic. Since no measurement data is perfectly accurate, a more interesting and relevant lesson from the climate change debate would be how to cope with the data problems in objective ways. For instance, Richard Muller of BEST explained clearly in his March 31, 2011 Congressional testimony that the analysis of the "poor" quality weather station data showed it was "virtually indistinguishable from that seen in the 'good' stations."

It's about profit, not the climate

Excellent article.  Evaluating the measurement system is a cornerstone of data collection and analysis.  One of the first steps in any scientific investigation is GR&R of the measurement system.  If the measurement system is questionable, then so is all the data and analysis that follows.  Garbage in = garbage out.
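
For readers who haven't run one, the sketch below shows a deliberately simplified repeatability check in Python: one hypothetical operator measures five parts three times each, and the pooled repeat-measurement spread is compared with the total spread. A full AIAG-style gage R&R study would also include multiple operators (reproducibility) and tolerance-based ratios; the numbers here are invented.

```python
from statistics import mean, pstdev

# Hypothetical study: one operator measures each of five parts three times.
# Rows are parts, columns are repeat measurements (units arbitrary).
measurements = [
    [10.02, 10.01, 10.03],
    [10.11, 10.12, 10.10],
    [ 9.98,  9.97,  9.99],
    [10.21, 10.23, 10.22],
    [10.05, 10.04, 10.06],
]

# Repeatability (equipment variation): pooled standard deviation of the repeats.
repeatability_sd = mean(pstdev(trials) ** 2 for trials in measurements) ** 0.5

# Total variation: standard deviation of all readings taken together.
all_readings = [x for trials in measurements for x in trials]
total_sd = pstdev(all_readings)

print(f"Repeatability SD: {repeatability_sd:.4f}")
print(f"Total SD:         {total_sd:.4f}")
print(f"Rough %EV:        {100 * repeatability_sd / total_sd:.1f}%")
# A full gage R&R study would add reproducibility (multiple operators) and
# compare the result against the tolerance or process variation.
```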

 

I think everyone knows that the global warming debate is more about politics than it is about science. If you want to understand what's really driving the agenda, follow the money. Carbon trading in Europe is big business, and the Chicago carbon exchange almost got off the ground. There's a lot of money invested in global warming, and the investors are not going to let those pesky facts get in the way of cashing in.

"Follow the money"

It cuts both ways...

http://www.independent.co.uk/environment/climate-change/thinktanks-take-oil-money-and-use-it-to-fund-climate-deniers-1891747.html

ExxonMobil and other special interests who have deep pockets and who are deeply invested in the status quo are also known to be funding skeptics, deniers, and PR organizations in an effort to prevent action that could mitigate the impact of climate change.

Metrology in the Hot Seat

Just go back to sleep, children. There's nothing to be frightened about and there never will be.

Your Article

Great article. The presentation of global warming data is an excellent demonstration of the necessity of objectivity in science. Lack of objectivity results in poor science. Whenever scientists adjust the data with inadequate explanation, the data become questionable and the conclusions drawn from the data are even more questionable. Questionable conclusions are not only unacceptable, they may result in damage.

While I believe that we human beings impact our environment, I am still skeptical regarding the level of impact. I need objective science to help me understand topics such as global warming -- there seems to be less and less of the objective science available.

Keep up the good work!

Climategate myth

You quote old, preliminary press coverage of the so-called Climategate scandal.  Your readers might like to follow up after some analysis: http://ossfoundation.us/projects/environment/global-warming/myths/climategate.  Science should always allow for questioning, and should be able to answer objections or improve its methodology.  However, your commentary smacks of the stuff intended to create FUD in the face of overwhelming scientific consensus.  Reminds me of the articles that the tobacco companies funded to cast doubt on the overwhelming scientific evidence that smoking has many negative effects on health.

The world's greatest scam

What "consensus" ?  The consensus of a bunch of politicians ?  Yes, politics is based on concensus, science is not.

The temperature record for the past 100 years is very poor and has been subject to much manipulation. In reality, when the urban heat island effect, selective removal of stations, and data scamming have been removed, there has probably been around 0.4 degrees C of warming on average, if an average temperature has any meaning. Most importantly, there is not a shred of actual evidence that any of this has been caused by man. In fact, the rate of man's CO2 output increased 1,200% after 1945, but warming has not increased. Even Dr. Jones, the man responsible for the IPCC's data, admits there has been no warming for the past 15 years.

Scam? Not So Much...

Consensus is one of the cornerstones of science. This is not, however, the same consensus used in politics or everyday life. Instead, it is a consensus derived from data and reproduction of experiments that an idea--a hypothesis--is correct. It is the expression of scientists that a hypothesis is (a) scientifically testable and falsifiable, and (b) that it has not been falsified. Scientists no longer argue over the validity of Newton's hypothesis on the gravitational force because there is broad consensus that the hypothesis is correct (as far as it goes). Likewise, scientists no longer argue over the geocentric model of the universe because there is broad consensus--derived from data and mathematical analysis--that the hypothesis is false. This is a much weaker form of consensus than we are used to thinking about in politics and everyday life.

Dr. Jones's statement is rather more nuanced than you make out. He was asked if he agreed that there has been no warming since 1995, and he responded "yes, but only just." He went on to state that there has been a clear trend toward warming, that, given the variation since 1995, the difference does not exceed a 95% confidence level, and that from other lines of evidence he is "100% confident that the climate has warmed."
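
The statistical point behind that "yes, but only just" can be illustrated with a short Python sketch (synthetic numbers, not real temperature data, and assuming NumPy and SciPy are available): the same underlying trend may fail a 95% significance test over a short, noisy record yet pass easily over a longer one.

```python
import numpy as np
from scipy import stats

np.random.seed(0)

def trend_p_value(n_years, slope_per_year=0.01, noise_sd=0.1):
    """Fit a linear trend to a synthetic anomaly series and return its p-value."""
    years = np.arange(n_years)
    series = slope_per_year * years + np.random.normal(0, noise_sd, n_years)
    return stats.linregress(years, series).pvalue

# Same underlying warming rate, two different record lengths.
for n in (15, 50):
    print(f"{n}-year record: trend p-value = {trend_p_value(n):.3f}")
# A short record will often give p > 0.05 (not significant at the 95% level)
# even though a trend is present; the longer record usually detects it.
# Exact values depend on the random draw.
```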

Removal of stations is done to compensate mainly for excessive increases in recorded temperatures. Adding the removed stations back in would result in a larger estimate for mean temperature.

"Data scamming" is a rather vague charge, but you might read one of the half-dozen independent investigations into "climategate," all of which found no evidence of the manipulations that critics accused the CRU of. The only criticisms resulting from those investigations revolved around openness and the general failure to release raw data.

The urban heat island effect is fully accounted for in models; the scientists who you accuse of being so clever as to manipulate data are not so stupid as to ignore that. In fact, urban temperature trends and rural temperature trends show exactly the same pattern of increase, offset by a relatively small amount.