HOW TO QUALIFY MEASUREMENT PROCESSES

How do you calculate a measurement uncertainty and which uncertainty is acceptable?

22 May 2015: Edgar Dietrich

A measurement result is always subject to measurement uncertainty. But how do you calculate this uncertainty? And which uncertainty is acceptable for which measuring task? Due to various influences, it is quite unlikely that the value displayed in a measurement is absolutely correct. To find the range in which the true value is expected to lie, you calculate the expanded measurement uncertainty. The measurement result is then the measured value plus or minus the expanded measurement uncertainty.
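As a minimal illustration of this idea, the sketch below estimates the expanded uncertainty of a series of repeated readings. It assumes a pure Type A evaluation (uncertainty from the scatter of the readings alone) and a coverage factor of k = 2, which corresponds to roughly 95 % coverage; the readings themselves are invented:

```python
import math

# Hypothetical repeated readings of the same characteristic (mm);
# the values are illustrative only.
readings = [10.02, 10.01, 10.03, 10.02, 10.00, 10.02]

n = len(readings)
mean = sum(readings) / n

# Type A standard uncertainty: experimental standard deviation of the mean
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
u = s / math.sqrt(n)

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage)
k = 2
U = k * u

# The measurement result is the mean plus or minus U
print(f"Result: {mean:.4f} mm \u00b1 {U:.4f} mm")
```

In a full uncertainty evaluation, further (Type B) contributions would be combined with this Type A component before applying the coverage factor.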

How do you calculate the expanded measurement uncertainty, and is the calculated uncertainty acceptable for the respective measuring task? Over the last decades, several standards, association guidelines and company guidelines were created whose main purpose was to answer these two questions. They use terms like measurement process qualification, gage capability, measurement system analysis, capability of measurement processes or calculation of the expanded measurement uncertainty. Why are there so many different procedures rather than a single method covering this topic? To answer that, we have to look at how these documents developed over time.

History of measurement process qualification

When SPC (statistical process control) was introduced into production in the mid-90s, this new type of control relied on self-inspection performed by operators. Nowadays, self-inspection goes without saying. Before its introduction, the “quality” department had been responsible for “quality inspections”; this department was dissolved, and companies placed the responsibility for the quality of the produced product characteristics on the operators instead.

So that operators could take these measurements immediately whenever required, they were provided with measuring instruments at SPC or measuring stations close to the respective machines. Hardly anybody asked whether these measuring instruments were actually able to carry out the measuring task with reasonable accuracy. However, many of those involved quickly realized that the deviations and variation of the measured values were caused not only by the manufacturing process but also by the measurement process. The variation caused by the measurement process often even exceeded the variation caused by the manufacturing process.

By then, at the latest, the time had come to deal with the “qualification of measurement processes”. This happened almost simultaneously in many quarters at the end of the 90s. ISO standards, manuals of the AIAG (Automotive Industry Action Group) and of the VDA (German Association of the Automotive Industry), and various company guidelines were created. Basically, there are two different approaches, depending on the procedure and way of thinking.

Guidelines and GUM

At first, the guidelines of the automotive industry – there was no GUM (Guide to the Expression of Uncertainty in Measurement) at that time – focused on the capability analysis of measurement systems. These guidelines thus answered the question of whether the measurement system was capable of performing the respective measuring task. [...]

From GUM to VDA’s capability of measurement processes

The first standard explaining how to calculate the expanded measurement uncertainty was the GUM, published in 1995. On the one hand, it became the benchmark; on the other hand, it was hard to apply in production because the measuring tasks there are highly complex. In measuring rooms, however, this was less of a problem. Still today, all calibration laboratories have to evaluate the expanded measurement uncertainty according to the GUM for ISO/IEC 17025 accreditation. Laboratories must apply the principles of the GUM to all units involved in the calibration of measuring instruments. [...]
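The core of the GUM approach can be shown in miniature: the combined standard uncertainty is the root sum of squares of the individual standard uncertainty contributions (assuming uncorrelated inputs with unit sensitivity coefficients), and the expanded uncertainty multiplies it by a coverage factor. The budget entries and their values below are purely assumed:

```python
import math

# Sketch of a simple GUM-style uncertainty budget.
# Each entry: (contribution, standard uncertainty in um); values are assumed.
budget = [
    ("calibration of the standard",  0.8),
    ("repeatability",                1.2),
    ("resolution of the instrument", 0.3),
    ("temperature difference",       0.5),
]

# Combined standard uncertainty: root sum of squares of the components
# (uncorrelated contributions, sensitivity coefficients taken as 1)
u_c = math.sqrt(sum(u ** 2 for _, u in budget))

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage)
U = 2 * u_c
print(f"u_c = {u_c:.2f} um, U = {U:.2f} um")
```

A real GUM evaluation additionally models the measurement function, derives sensitivity coefficients, and accounts for correlations and degrees of freedom; the sketch only shows the combination step.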

Differences between MSA, company guidelines and GUM

MSA focuses on the calculation of the GRR value (gage repeatability and reproducibility), examining the repeatability (equipment variation) and reproducibility (appraiser variation) of the measurement system under real conditions. In most cases, these are indeed the main influences leading to increased uncertainty. Company guidelines often refer to the procedure determining the GRR value as a type-2 study. There are, however, further influences affecting the measurement result, such as calibration uncertainty, deviation from linearity, form deviation, temperature fluctuations or different temperatures of the test part and the measuring instrument...
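A type-2 study of the kind described above can be sketched roughly as follows, using the average-and-range method with the usual MSA constants for 2 trials and 2 appraisers. The measurement data, the number of appraisers, parts and trials, and the tolerance are all invented for illustration:

```python
import math

# Minimal gage R&R sketch: 2 appraisers, 5 parts, 2 trials each.
# measurements[appraiser][part][trial], values in mm (assumed data).
measurements = {
    "A": [[10.01, 10.02], [10.05, 10.04], [9.99, 10.00],
          [10.03, 10.03], [10.00, 10.01]],
    "B": [[10.02, 10.02], [10.04, 10.06], [10.00, 9.99],
          [10.02, 10.04], [10.01, 10.00]],
}
n_parts, n_trials = 5, 2
K1 = 0.8862  # MSA constant for 2 trials (1 / d2*)
K2 = 0.7071  # MSA constant for 2 appraisers

# Repeatability (EV, equipment variation): average range per part/appraiser
ranges = [max(t) - min(t) for trials in measurements.values() for t in trials]
r_bar = sum(ranges) / len(ranges)
EV = r_bar * K1

# Reproducibility (AV, appraiser variation): spread of appraiser averages,
# corrected for the repeatability contained in those averages
appraiser_means = [sum(sum(t) for t in trials) / (n_parts * n_trials)
                   for trials in measurements.values()]
x_diff = max(appraiser_means) - min(appraiser_means)
av_sq = (x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials)
AV = math.sqrt(av_sq) if av_sq > 0 else 0.0  # negative -> set to 0 per MSA

GRR = math.sqrt(EV ** 2 + AV ** 2)

# %GRR against an assumed tolerance of 0.2 mm, using the 6-sigma spread
tolerance = 0.2
pct_grr = 100 * 6 * GRR / tolerance
print(f"EV={EV:.4f}, AV={AV:.4f}, GRR={GRR:.4f}, %GRR={pct_grr:.1f}%")
```

With this invented data the appraiser effect vanishes (the corrected AV term goes negative and is set to zero, as the MSA manual prescribes), so the GRR value is driven entirely by repeatability. Typical acceptance limits in company guidelines are %GRR below 10 % (capable) and 10–30 % (conditionally capable), but the limits vary by guideline.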

