METROLOGY AND CALIBRATION

1.0 INTRODUCTION

Metrology is the science of measurement. The process of inspecting and maintaining measuring equipment is referred to as a calibration program. Calibration includes the comparison of measuring equipment to a standard and any adjustment that the comparison shows to be necessary. The calibration program should ensure that measurements made in the design, manufacture, and testing of product meet the necessary accuracy and reliability requirements. Calibration of measuring and test equipment may also be referred to as measurement assurance. It is important that product quality tests are performed with measuring and test equipment capable of making accurate and reliable measurements.

The overall purpose of a measurement assurance program is to help ensure that product quality requirements are met. The goal of the manufacturer is to meet or exceed those requirements in the most cost-efficient manner possible. Measurement testing is a cost-intensive process, and it may take time before the benefits outweigh the costs. It would not be cost-efficient to calibrate each measurement device every day, nor would it be good practice to calibrate or test each device only once per decade. Equipment that has not been calibrated regularly may fall out of specification, and product quality will suffer. The key is finding the balance between the lowest operating cost and the highest quality product.

The intent of the following material is to provide an overview and insight into current practices, and highlight some of the requirements of quality standards such as the ISO 9000 series. It is not intended to provide complete and comprehensive information on the wide range of practices and applications in the fields of metrology and calibration.

 

2.0  ISO 9000 REQUIREMENTS FOR CALIBRATION

The ISO 9000 series standards require the identification of the measurements that are made, the level of accuracy required, and the instruments that are used. The calibration program is an integral part of the overall quality system. The requirements are stated in element 4.11, Control of Inspection, Measuring and Test Equipment.

The ISO 9000 series standards may be described as a framework for determining which measurements apply, which instruments are used, and whether these instruments are capable of making the required measurements. ISO 9000 also requires that measurement uncertainty is known and is consistent with the required measurement capability.

Documented procedures are required for the calibration program. The instruments must be calibrated against certified equipment having traceability to nationally recognized standards. Equipment calibration status shall be identified and records must be maintained. Environmental conditions, handling, preservation and storage must be suitable for all calibration activities.

The intent of all these requirements is to ensure consistent levels of product quality.

 

3.0  THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY (NIST)

The National Institute of Standards and Technology was established by Congress to assist industry in the development of technology. NIST focuses on product quality, modernization of manufacturing processes, product reliability, and facilitation of rapid commercialization of products based on new scientific discoveries. As an agency of the U.S. Department of Commerce's Technology Administration, NIST has the primary mission of promoting U.S. economic growth by working with industry to develop and apply technology, measurements, and standards. It carries out this mission through a portfolio of four major programs:

  • Advanced Technology Program
  • Manufacturing Extension Partnership
  • Laboratory Program
  • Malcolm Baldrige National Quality Award

 

4.0 THE CALIBRATION PROGRAM

4.1  Basic Elements of a Calibration Program

The purpose of a calibration program is to maintain, and in some cases improve, product quality and to meet customer requirements. Depending on the results of previous calibrations, the calibration interval may be shortened to ensure continued accuracy or lengthened when warranted.

Typical areas and applications that require calibrated instruments include:

  • Product quality
  • Designed experiments
  • Environmental testing
  • Testing against predetermined specifications
  • Safety
  • Failure mode analysis

 

4.2 Measuring Equipment Cycle

 

5.0 TRACEABILITY

Traceability is a term used to describe the unbroken chain of comparisons that qualifies measurement equipment to national or international standards. A quantity is measured with measurement equipment. Measurement equipment is calibrated with standards. A standard can be calibrated against a primary standard, or it may be calibrated against another standard, known as a transfer standard. Transfer standards are calibrated by NIST or other internationally recognized standards organizations. Traceability certificates should state which standards have been used to calibrate the instruments. Primary standards are standards of the highest accuracy available. The hierarchy of the traceability chain, from highest to lowest level, is:

  • Primary standards (maintained by NIST or other internationally recognized standards organizations)
  • Transfer standards
  • Standards used to calibrate measurement equipment
  • Measurement equipment
  • The measured quantity

 

6.0 MEASUREMENT ERROR and MEASUREMENT UNCERTAINTY

The measurement error is the estimated amount by which a measured value differs from its true value. Measurement errors can stem from equipment, operators, test design, and various other factors, and many are difficult to identify and quantify. To increase the accuracy of measurements, errors must be minimized. In order to develop uncertainty statements, suspected measurement errors are assigned estimated probability values.

One can never be certain that the measured value of a reading is the true value; measurement readings are estimates of true values. Measurement uncertainty may be defined as the probability that a reading will fall within the interval that contains the true value.

Uncertainty statements consist of two parts: an error value and a corresponding level of confidence. An uncertainty statement such as "0.1 units at a 2 sigma confidence level" means that the author of the statement is 95% sure that the true value does not differ from the measured reading by more than 0.1 units. The error value determines the range, or interval, that contains the true value. The error in this example is 0.1 units; if the nominal value is 2.0 units, the confidence interval is 1.9 to 2.1 units. This interval may also be referred to as the 95% confidence interval. Uncertainty values are also time dependent: if the confidence level is held constant, the magnitude of the error tends to increase with time. Uncertainty statements given for M&TE readings, unless otherwise stated, apply to the end of the manufacturer's recommended calibration interval.
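As a simple illustration (in Python, using the nominal value of 2.0 units and the error of 0.1 units from the paragraph above), the interval implied by an uncertainty statement can be computed directly from the nominal reading and the error value:

```python
# Minimal sketch: turning an uncertainty statement into a confidence interval.
# The nominal reading (2.0 units) and error value (0.1 units at 2 sigma / 95%)
# are the illustrative values used in the text above.

def confidence_interval(nominal, error):
    """Return the (lower, upper) bounds of the interval expected to contain the true value."""
    return nominal - error, nominal + error

lower, upper = confidence_interval(nominal=2.0, error=0.1)
print(f"95% (2 sigma) confidence interval: {lower} to {upper} units")
# -> 95% (2 sigma) confidence interval: 1.9 to 2.1 units
```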

 

7.0 DETERMINING INSTRUMENT CAPABILITY

By quantifying and combining possible sources of error throughout the chain of traceability, it is possible to estimate and compare instrument capability with test requirements. The uncertainty of a reading should be small enough to have a negligible effect on the result of the measurement. A common method is to calculate the test uncertainty ratio (TUR), sometimes referred to as the test accuracy ratio (TAR). The uncertainty of the standard(s) used for calibration is combined with the instrument uncertainty in order to derive a total measurement uncertainty statement for the instrument output. When an instrument is used to make a measurement, the total uncertainty, at a specified confidence level, is compared to the allowable tolerance of the measured value in order to obtain a TUR. NIST recommends a confidence level of 2 sigma, or 95%. TURs are used as a measure of instrument capability. Current industry practice usually accepts ratios of 4 to 1 or greater; however, 10 to 1 or greater is desirable.

 

Example 1

A company has set internal goals for achieving at least 4 to 1 test uncertainty ratios for all calibration tests.

The tolerance of a caliper is 0.1 inches (TMV = 0.1). The uncertainty of the gauge blocks used to test the caliper is 0.02 inches at a 95% confidence level (UTS = 0.02).

The test uncertainty ratio is TUR = TMV / UTS = 0.1 / 0.02 = 5, a 5 to 1 ratio. This ratio exceeds the company's 4 to 1 minimum, so the gauge blocks chosen for this test are acceptable for testing calipers.
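A minimal sketch of this calculation in Python, assuming the simple ratio TUR = TMV / UTS used in Example 1 and the company's 4 to 1 acceptance goal:

```python
# Minimal sketch of a test uncertainty ratio (TUR) check, using the values from Example 1.

def test_uncertainty_ratio(tolerance_of_measured_value, uncertainty_of_standard):
    """TUR = tolerance of the measured value / uncertainty of the test standard."""
    return tolerance_of_measured_value / uncertainty_of_standard

TMV = 0.1     # caliper tolerance, inches
UTS = 0.02    # gauge block uncertainty at a 95% confidence level, inches
MIN_TUR = 4   # internal goal: at least a 4 to 1 ratio

tur = test_uncertainty_ratio(TMV, UTS)
print(f"TUR = {tur:.0f} to 1 -> {'acceptable' if tur >= MIN_TUR else 'not acceptable'}")
# -> TUR = 5 to 1 -> acceptable
```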

 

8.0 MEETING TEST SPECIFICATIONS

Measurement acceptability is determined by comparing values of the unit under test (UUT) to standard values. A nominal value from the UUT is compared to an actual value determined from the standard. The measurement passes if the difference between the two values is less than that allowed by the tolerance. If the difference is greater than the tolerance, the measurement fails and some type of corrective action occurs, such as adjustment, repair, or replacement of the UUT. After corrections have been made, testing is again required to determine whether the corrections were effective. When testing a measurement device, several values throughout the range of operability or use are usually chosen to be tested, although single checkpoints can also be specified depending on the application. If any one of the specified test values fails to meet its tolerance requirements, the device is usually considered to have failed the calibration requirements.
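A minimal sketch of this pass/fail logic in Python; the checkpoint readings and tolerance below are hypothetical illustration values, not figures from the text:

```python
# Minimal sketch of calibration pass/fail testing at several checkpoints.
# Each checkpoint pairs a UUT reading with the actual value from the standard;
# the device fails if any single checkpoint exceeds the allowable tolerance.

def calibration_passes(checkpoints, tolerance):
    return all(abs(uut - standard) <= tolerance for uut, standard in checkpoints)

checkpoints = [
    (1.002, 1.000),   # (UUT reading, standard value) -- hypothetical values
    (5.004, 5.000),
    (9.991, 10.000),
]
tolerance = 0.005

if calibration_passes(checkpoints, tolerance):
    print("PASS")
else:
    print("FAIL -> adjust, repair, or replace the UUT, then retest")
```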

 

9.0 TEST PROCEDURES

There are many methods that can be used to verify or calibrate measuring and test equipment. Some have become industry standards through widespread or common use; however, no laws dictate which methods must be used. There are accepted practices for certain types of measurements (industry standards), and deviation from these methods may invite questions or suspicion from customers or auditors. In general, any logical, reproducible, documented process that uses proven standards is deemed acceptable. These documents can take the form of published industry practices, books, manufacturers' manuals, and internally or externally produced layouts. Software programs, on-line procedures, and hard-copy media are all acceptable.

Typical calibration procedures include information on calibration intervals, the standards to be used, the setup, the methodology, the measurements to be taken, the allowable tolerances, the data to be recorded, and corrective action such as repair and adjustment procedures.

 

Example 2

The essential aspects of a documented calibration procedure (calibration interval, standards, setup, methodology, measurements, tolerances, data to be recorded, and corrective action) are summarized below.
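One way to capture these aspects is as a structured record; the following Python sketch is illustrative only, and all of the field values are hypothetical:

```python
# Minimal sketch: the typical contents of a documented calibration procedure,
# represented as a data structure. All values are hypothetical.

from dataclasses import dataclass

@dataclass
class CalibrationProcedure:
    instrument: str
    calibration_interval_months: int   # calibration interval
    standards: list                    # certified standards to be used
    setup: str                         # fixturing and environmental setup
    methodology: str                   # method or reference document
    checkpoints: list                  # measurements to be taken
    tolerance: float                   # allowable tolerance at each checkpoint
    data_to_record: list               # data to be recorded
    corrective_action: str             # repair and adjustment procedure

procedure = CalibrationProcedure(
    instrument="0-6 inch caliper",
    calibration_interval_months=12,
    standards=["gauge block set traceable to NIST"],
    setup="20 C lab, clean granite surface plate",
    methodology="compare caliper readings to gauge blocks at each checkpoint",
    checkpoints=[1.0, 3.0, 6.0],
    tolerance=0.001,
    data_to_record=["as-found readings", "as-left readings", "technician", "date"],
    corrective_action="adjust per the manufacturer's manual, then retest all checkpoints",
)
```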


 

10.0 RELIABILITY AND PERFORMANCE METRICS

Measurement data can be used to determine various characteristics of equipment, methods, and processes, as well as test acceptance. Measurement reliability is the probability that all measurement attributes of a piece of equipment are in conformance with performance specifications and will remain so for a specified period. Systematic errors tend to increase with time, which leads to growth in measurement uncertainty. Setting a 95% reliability target means that test intervals have been defined so that 95% of equipment tested at the end of that interval is found to be within tolerance. Reliability targets are used to determine calibration intervals that coincide with cost and quality requirements.
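As a simple sketch, assuming reliability is estimated directly from end-of-interval calibration history (the counts below are hypothetical):

```python
# Minimal sketch: comparing observed measurement reliability against a target.
# The calibration history counts are hypothetical.

def observed_reliability(in_tolerance_count, total_calibrated):
    """Fraction of equipment found within tolerance at the end of its calibration interval."""
    return in_tolerance_count / total_calibrated

RELIABILITY_TARGET = 0.95

reliability = observed_reliability(in_tolerance_count=182, total_calibrated=200)
if reliability < RELIABILITY_TARGET:
    print(f"Observed reliability {reliability:.1%} is below target; consider shortening the interval.")
else:
    print(f"Observed reliability {reliability:.1%} meets the target; the interval may be kept or lengthened.")
# -> Observed reliability 91.0% is below target; consider shortening the interval.
```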

Measurement variability is commonly expressed in terms of stability, bias, linearity, repeatability and reproducibility (see glossary). The terms stability, bias and linearity are used to describe the location of measurement values in relation to stated values. The terms repeatability and reproducibility are used to describe the width or spread of measurements in relation to each other. Measurement variance is the basis for quantifying overall performance characteristics of measurement systems and allowing for comparisons between test methods and equipment. Measurement reliability and variability metrics can be used to determine the most efficient process available for testing or obtaining a measurement.
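As a simple illustration of the spread metrics, the following sketch estimates repeatability (within-operator spread) and reproducibility (between-operator spread) from a small set of hypothetical repeated readings of the same part; a formal gauge R&R study would use an average-and-range or ANOVA method.

```python
# Minimal sketch: repeatability and reproducibility from repeated readings of one part.
# Readings are hypothetical.

from statistics import mean, stdev

readings = {
    "operator_A": [2.001, 2.003, 1.999, 2.002],
    "operator_B": [2.005, 2.006, 2.004, 2.007],
}

# Repeatability: average spread of readings made by the same operator.
repeatability = mean(stdev(values) for values in readings.values())

# Reproducibility: spread between the operators' average readings.
reproducibility = stdev(mean(values) for values in readings.values())

print(f"repeatability (mean within-operator std dev): {repeatability:.4f}")
print(f"reproducibility (std dev of operator means):  {reproducibility:.4f}")
```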

 

11.0 GLOSSARY OF TERMS