*Posted by Sten Westgard, MS*

One of our members sent in a question about bias. This time, for once, it wasn't about finding the right source for bias (such as a peer group mean, or a reference method mean, etc.). This was much more basic: how do we calculate bias?

If we express the difference between the target mean and the observed lab mean as a bias, what is the denominator? Is it the target mean or the lab mean?

Perhaps it's helpful if we get more specific:

| Target mean | Observed mean | Unit bias | Bias% of target | Bias% of observed |
|-------------|---------------|-----------|-----------------|--------------------|
| 100 | 150 | 50 | 50% | 33.3% |
| 100 | 200 | 100 | 100% | 50% |
| 100 | 250 | 150 | 150% | 60% |
| 100 | 300 | 200 | 200% | 66.6% |
| 100 | 350 | 250 | 250% | 71.4% |

You can see that as the observed mean moves farther from the target mean, the bias% calculated with the target mean in the denominator diverges more and more from the bias% calculated with the observed mean in the denominator.
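To make the arithmetic explicit, here is a minimal sketch of the two competing calculations. The function names are our own for illustration; the only formula involved is (observed − target) divided by one mean or the other:

```python
def unit_bias(target, observed):
    """Bias expressed in the analyte's own units."""
    return observed - target

def bias_pct_of_target(target, observed):
    """Bias% with the target mean as the denominator."""
    return 100 * (observed - target) / target

def bias_pct_of_observed(target, observed):
    """Bias% with the observed lab mean as the denominator."""
    return 100 * (observed - target) / observed

# Reproduce the rows of the table above (target mean fixed at 100):
target = 100
for observed in (150, 200, 250, 300, 350):
    print(observed,
          unit_bias(target, observed),
          round(bias_pct_of_target(target, observed), 1),
          round(bias_pct_of_observed(target, observed), 1))
```

Running this prints the same values as the table, and shows why the two percentages drift apart: the numerator is identical, but the denominator in the second calculation grows along with the bias itself.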

So in practice, we prefer to use the target mean as the denominator when calculating bias%. Since the target is what we're comparing against, we're making at least a small presumption that it's the value we should be hitting, so any bias is expressed relative to that target.

In practice, we also rarely see biases get this large. For most of the analytes we deal with, a bias of even 30% becomes untenable, regardless of how it's calculated. And when biases are very small, the difference between a bias% calculated from the target mean and one calculated from the observed mean is usually negligible.

Nevertheless, this phenomenon has led some to recommend keeping bias in units and basing any Sigma-metric calculations on units rather than percentages. When handled carefully, however, the two calculations come out the same.
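A quick sketch of that equivalence, using the standard Sigma-metric formula, Sigma = (TEa − |bias|) / SD. The specific numbers here (target mean 100, TEa of 10 units, bias of 2 units, SD of 2 units) are hypothetical; the point is that when the percent versions of TEa, bias, and CV are all taken relative to the same target mean, the common denominator cancels and the unit-based and percent-based metrics agree:

```python
# Hypothetical example values, all in the analyte's units:
target = 100.0
tea_units = 10.0   # allowable total error
bias_units = 2.0
sd_units = 2.0

# Sigma-metric calculated in units:
sigma_units = (tea_units - abs(bias_units)) / sd_units

# Convert everything to percent of the SAME target mean:
tea_pct = 100 * tea_units / target
bias_pct = 100 * bias_units / target
cv_pct = 100 * sd_units / target

# Sigma-metric calculated in percent:
sigma_pct = (tea_pct - abs(bias_pct)) / cv_pct

print(sigma_units, sigma_pct)
```

Both come out to 4.0 here. The equivalence breaks only if the percentages are taken against different denominators, which is exactly the inconsistency the table above illustrates.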

Keep those questions coming!