MEASURE FOR MEASURE
Understanding and communicating
by Elizabeth J. Gentry and Georgia L. Harris
Editor’s note: This is part one of a two-part series that explores accuracy in measurement results. Look for part two in July’s column.
Many decisions are based on accurate measurement results—answers to questions such as: "Should medicine be prescribed for high cholesterol or high glucose?" or "Should a measuring instrument or standard be adjusted to meet tolerances?"
The answers are based on measurement results. And as patients, scientists, citizens or policymakers, we make assumptions about the accuracy of the measurement results in reports and calibration certificates. We assume they're good or right, or, to say it more correctly, "They're accurate." But note that accuracy is often defined as hitting the center of a target or a true value.
One of our colleagues regularly says, "The only true value is on a sign above a hardware store." But people who use measurements often trust the accuracy of their measurement results—usually without question, believing the results are "good and right."
A measurement result alone is incomplete without some assessment and measure of reported uncertainty. People can estimate the temperature outside on a warm spring day within a few degrees based on their experience. But if we use a thermometer, our first hope is that it’s accurate and gives us the correct or right temperature. After this, we must consider the resolution of the standard: "Is the readability of the thermometer 1 degree Celsius, 0.1 degree or 0.01 degree?"
Our confidence that the results are right will depend on the readability or resolution of the standard or measuring instrument. It shouldn't be based on a calculator or spreadsheet giving us a calculated value to 15 decimal places when the resolution or uncertainty supports only a fraction of those digits.
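One practical consequence: a reported value should be rounded to no finer than the instrument's resolution before it is communicated, whatever the spreadsheet displays. A minimal sketch in Python (the readings and resolution are invented for illustration):

```python
# Round a calculated value to the resolution of the measuring instrument,
# rather than reporting every digit a spreadsheet happens to display.
resolution = 0.1  # hypothetical thermometer readable to 0.1 degree Celsius

raw_average = 21.733333333333334          # e.g., a calculated mean of readings
reported = round(raw_average / resolution) * resolution

print(f"calculated: {raw_average}")
print(f"reported:   {reported:.1f} \u00b0C")  # 21.7 °C
```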
Repeatability of an instrument or standard also is a variable of concern. Many people naturally repeat measurements to get a sense of whether multiple values agree. We use simple measurements in daily life, such as stepping on a scale to monitor our weight or checking a vehicle's mileage to calculate fuel efficiency.
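Repeatability is commonly quantified as the standard deviation of readings repeated under the same conditions. A minimal sketch, with made-up balance readings:

```python
import statistics

# Hypothetical repeated readings of the same mass, in grams
readings = [100.02, 100.01, 100.03, 100.02, 100.01, 100.02]

mean = statistics.mean(readings)
repeatability = statistics.stdev(readings)  # sample standard deviation

print(f"mean = {mean:.4f} g")
print(f"repeatability (1 standard deviation) = {repeatability:.4f} g")
```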
Assigning uncertainty to a measurement result is a rigorous, documented and validated process that is assessed nearly as often as the measurement results themselves. Measurement scientists often use internationally accepted procedures to obtain standardized measurement results. They also use the Guide to the Expression of Uncertainty in Measurement for evaluating and reporting associated uncertainties.1
The readability (or resolution) and repeatability of measurement results give a sense of confidence (or lack thereof), and these also have associated measures of uncertainty. It's a wise practice to ask for the measurement uncertainty and use it to assess the quality and precision of a measurement result. Uncertainty values provide confidence in the measurement result: They quantify the boundaries or limits within which a measurement result should agree with a true quantity value.
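As a rough illustration of the GUM approach: independent standard uncertainty components are combined in quadrature, then multiplied by a coverage factor (commonly k = 2 for a level of confidence of about 95%). The component values below are invented for the sketch:

```python
import math

# Hypothetical standard uncertainty components, all in the same unit
# and assumed independent (uncorrelated)
u_repeatability = 0.0075  # from repeated readings
u_resolution = 0.0029     # e.g., digital resolution d contributes d / sqrt(12)
u_reference = 0.0050      # from the reference standard's calibration certificate

# Combined standard uncertainty: root sum of squares
u_combined = math.sqrt(u_repeatability**2 + u_resolution**2 + u_reference**2)

# Expanded uncertainty with coverage factor k = 2 (about 95% confidence)
U_expanded = 2 * u_combined

print(f"u_c = {u_combined:.4f}")
print(f"U (k = 2) = {U_expanded:.4f}")
```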
Terms and communication
Accurate measurement results and associated uncertainties must be communicated. This could be in a newspaper, a scientific paper or on a calibration certificate. This also means it’s critical to have accuracy in our words and measurement results.
Guiding documents help standardize communication: The International Vocabulary of Metrology (VIM) provides guidance on terms used with measurement and calibration results.2 Terms such as "accuracy," "traceability," "uncertainty" and "reference standards" have specific meanings that every scientist should use consistently.
For example, the VIM defines "accuracy"—as it’s related to a measurement result—as the "closeness of agreement between a measured quantity value and a true quantity value of a measurand." According to the VIM, "measurand" is "the quantity intended to be measured." This definition of accuracy also includes three explanatory notes:
- "The concept ‘measurement accuracy’ is not a quantity and is not given a numerical quantity value. A measurement is said to be more accurate when it offers a smaller measurement error."
- "The term ‘measurement accuracy’ should not be used for measurement trueness, and the term ‘measurement precision’ should not be used for ‘measurement accuracy’, which, however, is related to both of these concepts."
- "‘Measurement accuracy’ is sometimes understood as closeness of agreement between measured quantity values that are being attributed to the measurand."3
If measurement results between or among laboratories are compared, scientists must be able to talk about the same things. This is why standardized definitions are essential: They can prevent confusion in communicating measurement results.
Units, symbols and results
Measurement results must communicate proper quantities, units and symbols. Many countries adopt the International System of Units (SI, also known as the metric system) as the reference basis for measurement results. There also is a reference document for presenting measurement units, symbols and results.4
The U.S. Metric Program of the National Institute of Standards and Technology (NIST), Office of Weights and Measures, helps implement the national policy to establish the SI as the preferred system of weights and measures for U.S. trade and commerce. It provides leadership and assistance on SI use and conversion to federal agencies, state and local governments, businesses, trade associations, standards-development organizations, educators and the general public.
NIST Special Publication (SP) 330 and NIST SP 811 provide the legal interpretation of and guidelines for SI use in the United States.5,6 These publications provide standardized guidance on how measurement units and results should be presented in writing.
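A few of those writing conventions, such as the space between a numerical value and its unit symbol, and the parenthesized form for reporting a value with its expanded uncertainty, can be encoded in a small helper. The function below is our own sketch, not part of the NIST publications:

```python
def format_result(value, expanded_u, unit, digits=4):
    """Format a measurement result with its expanded uncertainty.

    Encodes two common SI writing conventions: a space between the
    numerical value and the unit symbol, and the (value ± U) unit form
    so the uncertainty clearly shares the unit.
    """
    return f"({value:.{digits}f} \u00b1 {expanded_u:.{digits}f}) {unit}"

print(format_result(100.0214, 0.0189, "g"))  # (100.0214 ± 0.0189) g
```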
We like to ask, "If the measurement scientists don't get the communication of measurement results right, who will?" Regular reviews of measurement results and uncertainties on calibration certificates and in laboratory documents reveal numerous errors that can negatively affect users' interpretations of results.
Errors are often observed in the following situations:
- Measurement uncertainties are not included, are incomplete, are inaccurate, or are not properly rounded.
- Incorrect terminology is used.
- Typos are left uncorrected.
- Unit conversions are wrong.
- Incorrect units and symbols are presented, or correct units are inconsistently used.
We refer to these errors as "black dots." On an otherwise clean page, the black dot is what customers notice. This blemish is what they will remember, regardless of the other accurate information presented. Errors in reporting results can lead to confusion or bad decisions by users, often with critical effects. Black dots can destroy laboratory credibility.
There are examples of black dots in daily life and news headlines, such as:
- In 1999, NASA’s Mars Climate Orbiter was lost after a failure to communicate requirements and to convert measurement units between two measurement systems.7
- In 2003, a roller coaster accident on Tokyo Disneyland’s Space Mountain highlighted a scenario in which axle-and-bearing design specifications had been converted to metric units and implemented in the ride. When routine maintenance later called for bearing replacements, the bearings were replaced not with metric-designed parts, but with the incorrect size based on the original, nonmetric design. This created a gap between the axle and the bearing. Eventually, the extra vibration and stress caused the axle to fail, derailing the roller coaster. Luckily, no passengers were injured.8
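In the Mars orbiter case, thruster impulse data were produced in pound-force seconds but consumed as if they were newton-seconds. An explicit, labeled conversion step is cheap insurance against that class of error; a minimal sketch (the impulse value is invented):

```python
LBF_S_TO_N_S = 4.448222  # newton-seconds per pound-force second

def lbf_s_to_newton_s(impulse_lbf_s):
    """Convert an impulse from pound-force seconds to newton-seconds."""
    return impulse_lbf_s * LBF_S_TO_N_S

impulse = lbf_s_to_newton_s(10.0)  # hypothetical thruster impulse of 10 lbf*s
print(f"10.0 lbf*s = {impulse:.2f} N*s")  # 44.48 N*s
```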
Document control, version control and archiving records are essential tools for ensuring changes made over time are effectively communicated to all personnel affected by a change. Failure to adequately control laboratory documents, such as calibration certificate templates, can be the root cause of black dots that are released to customers.
Avoiding black dots is fundamental to ensuring communication of accurate measurement results. Reviewing for typos, grammatical errors, accurate terminology, completeness, and use of appropriate measurement units and symbols is essential. The second part of this article will offer suggestions to improve the quality, accuracy and communication of measurement results.
- Joint Committee for Guides on Metrology (JCGM), Evaluation of Measurement Data—Guide to the Expression of Uncertainty in Measurement, first edition, 2008, http://tinyurl.com/evaluationofmeasurement.
- JCGM, International Vocabulary of Metrology—Basic and General Concepts and Associated Terms (VIM), third edition, 2012, http://tinyurl.com/vimterms.
- The International Bureau of Weights and Measures, The International System of Units (SI), eighth edition, 2006, http://tinyurl.com/international-si.
- Barry N. Taylor and Ambler Thompson, eds., The International System of Units (SI)—Special Publication 330, 2008 edition, National Institute of Standards and Technology (NIST), http://tinyurl.com/nistsp330.
- Barry N. Taylor and Ambler Thompson, eds., Guide for the Use of the International System of Units (SI)—NIST Special Publication 811, 2008 edition, NIST, http://tinyurl.com/
- U.S. Metric Association, "Unit Mixups," www.us-metric.org/unit-mixups.
Elizabeth J. Gentry is metric coordinator for the National Institute of Standards and Technology (NIST), Office of Weights and Measures in Gaithersburg, MD. She earned a bachelor’s degree in biology from the University of Central Oklahoma in Edmond.
Georgia L. Harris is program leader at NIST, Office of Weights and Measures in Gaithersburg, MD. She earned a master’s degree in technical management from Johns Hopkins University in Baltimore. Harris is a senior member of ASQ and the committee secretary of the Measurement Quality Division.