MKT | DOCS | Blog Posts | MV09 Method Comparison v01

Before any patient result reaches a clinician, a laboratory must ensure that the data it releases are accurate and reliable. Even minor inaccuracies can alter a clinical diagnosis, so verifying a method's accuracy is critical before it is implemented for routine patient testing.

What is Accuracy?

Accuracy is defined as the closeness of agreement between results from the new method and an already established reference or true value. Think of accuracy as a dartboard, with the bullseye as the target value: the closer the dart lands to the bullseye, the more accurate your results are!

[Image: dartboard illustration of accuracy]

Quantifiable differences between the target value and the test result are called Systematic Errors, or Bias. These errors must be quantified and corrected through a Method Comparison experiment before a new method can be considered reliable enough for laboratory use.

💡
What are Systematic Errors and Bias? Systematic Error is defined as a constant deviation from the true value. In simpler terms, this type of error appears consistently throughout the results. The quantifiable deviation that arises from these errors is called Bias.

Common sources for Systematic Errors include:

  • Reagent Issues
  • Calibration Errors
  • Instrumental Errors

For instance, if glucose at a true concentration of 100 mg/dL is consistently read at 120 mg/dL because a degraded reagent was used, this is a systematic error with a bias of 20 mg/dL.
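Expressed as a percentage of the target value, this works out to Bias % = (120 - 100) / 100 × 100 = 20%, the same calculation formalized in Step 4 of the method comparison experiment below.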

🧪 Method Comparison Experiment

As a core part of Method Verification, the Method Comparison experiment evaluates a critical performance characteristic: accuracy.

Method Comparison Experiment Design

Following Clinical and Laboratory Standards Institute (CLSI) EP09-A3 recommendations, the minimum experiment design includes the following:

  • Number of Samples: at least 40 samples
  • Sample Range: must span the working range, including the lower and upper limits
  • Days of Testing: 5 days
  • Replicates: not required, but recommended to help identify random errors

5-Step Method Comparison Experiment

Step 1 - Define the Experiment Plan

Before running any samples, clearly define the experiment plan and objectives:

  • Test Method: the new method to be verified
  • Comparative Method: an established method whose results serve as the true value
  • Total Allowable Error (TEa): the maximum amount of error between the two results that the laboratory is willing to tolerate

Step 2 - Select and Prepare Samples

Following CLSI EP09, prepare at least 40 patient samples that represent the full analytical measuring range of the test. Use fresh, high-quality specimens whenever possible.

Step 3 - Conduct the Experiment and Document Results

Each selected sample should be analyzed once on both methods under identical routine conditions to ensure the results are comparable:

  • Begin by testing each sample using the comparative method.
  • Analyze the same samples on the new method, ideally within two hours, to minimize potential degradation or instability.

Record the test output from both methods in a side-by-side comparison table (see Table 1). This structured format allows clearer visualization and simplifies the calculations in the next steps.

[Image]

Table 1: Method Comparison Experiment from Cualia

Step 4 - Assess the Performance

According to Westgard, evaluating the performance of a method ultimately means quantifying the errors present. In a method comparison study, two key parameters are used for this assessment:

  • Bias: quantifies systematic errors present
  • Error Index (Ei): ratio of the observed error (bias) to the total allowable error of an analyte

Calculate the Bias % and Ei for each sample pair using the following formulas:

Bias %:

Bias % = ((New Method Result - Comparative Method Result) / Comparative Method Result) × 100

Error Index (Ei):

Ei = Bias % / Total Allowable Error (TEa)

Step 5: Judge Acceptability of New Method

✅ The accuracy of the new method is considered acceptable if the Error Index falls below 1.0 for at least 95% of the samples. For instance, if a total of 40 samples were tested, at least 38 should have an Ei below 1.0.
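To make Steps 4 and 5 concrete, here is a minimal Python sketch. The paired values and the 10% TEa are hypothetical illustrations, not part of the CLSI protocol, and the Ei here uses the magnitude of the bias:

```python
# Minimal sketch of the Step 4-5 calculations. The sample values and the
# 10% total allowable error (TEa) below are hypothetical illustrations.
TEA_PCT = 10.0  # Total Allowable Error (TEa), in percent

# Paired results: (comparative method result, new method result)
pairs = [(100, 120), (250, 252), (60, 61), (180, 178)]

def bias_pct(comparative, new):
    """Bias % = (new result - comparative result) / comparative result x 100."""
    return (new - comparative) / comparative * 100

def error_index(comparative, new):
    """Error Index (Ei): magnitude of the observed bias over the TEa."""
    return abs(bias_pct(comparative, new)) / TEA_PCT

ei_values = [error_index(c, n) for c, n in pairs]
fraction_ok = sum(ei < 1.0 for ei in ei_values) / len(ei_values)

print("Ei per sample:", [round(ei, 2) for ei in ei_values])
# Step 5: acceptable when at least 95% of samples have Ei below 1.0
print("Accuracy acceptable:", fraction_ok >= 0.95)
```

On these toy numbers, the glucose pair from the earlier example fails (Ei = 2.0), so the 95% criterion is not met.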

🧪 Method Comparison for Qualitative Methods

Verification for qualitative methods is formally conducted through a Clinical Agreement Study.

A Clinical Agreement Study follows the same experiment design and procedure as a classic quantitative method comparison experiment. The key difference lies in the type of results and the assessment of acceptability.

Following CLSI EP12 recommendations, a Clinical Agreement Study requires 40 samples equally divided between 20 known positives and 20 known negatives.

Each sample is tested on both methods, and the results are entered into a contingency table, where a side-by-side comparison categorizes each sample based on its results:

| Result | Comparative Method Result | New Method Result |
| --- | --- | --- |
| True Positive (TP) | Positive | Positive |
| True Negative (TN) | Negative | Negative |
| False Positive (FP) | Negative | Positive |
| False Negative (FN) | Positive | Negative |

📏 The accuracy of the method is then calculated with the formula:

Accuracy % = ((TP + TN) / (TP + TN + FP + FN)) × 100

✅ A qualitative method is considered accurate if the calculated accuracy is 95% or higher.
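As a rough illustration of how the contingency counts and the accuracy formula come together, here is a minimal Python sketch; the result strings and the 20/20 sample split are hypothetical, mirroring the EP12 design described above:

```python
# Minimal sketch of a Clinical Agreement Study assessment.
# Each pair is (comparative method result, new method result); the
# counts below are hypothetical: 20 known positives, 20 known negatives.
pairs = (
    [("pos", "pos")] * 19 + [("pos", "neg")] * 1 +  # known positives
    [("neg", "neg")] * 20                           # known negatives
)

tp = sum(1 for c, n in pairs if c == "pos" and n == "pos")  # True Positives
tn = sum(1 for c, n in pairs if c == "neg" and n == "neg")  # True Negatives
fp = sum(1 for c, n in pairs if c == "neg" and n == "pos")  # False Positives
fn = sum(1 for c, n in pairs if c == "pos" and n == "neg")  # False Negatives

# Accuracy % = (TP + TN) / (TP + TN + FP + FN) x 100
accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
print(f"TP={tp} TN={tn} FP={fp} FN={fn} -> Accuracy = {accuracy:.1f}%")
print("Acceptable:", accuracy >= 95.0)
```

With one false negative out of 40 samples, the accuracy is 97.5%, which meets the 95% criterion.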

📈 Alternate Method Comparison Experiment

While the classic method comparison approach is commonly used for verification because it is simpler and faster, CLSI EP09 also outlines an alternative approach to verifying accuracy through linear regression analysis.
💡
Linear Regression is a statistical method that evaluates the relationship between two methods. Values are plotted on a scatter plot where a regression line represents the best-fit relationship.

MC Linear Regression Experiment Procedure

Step 1 - Define the Experimental Plan

As with a classic method comparison, begin by developing a plan that follows the minimum experimental design recommended in CLSI EP09; this approach uses the same overall design as the classic experiment.

Step 2 - Conduct the Experiment and Plot Values

Analyze the samples once on both methods under routine, comparable conditions. Plot the values on a scatter plot where:

  • X-axis (horizontal): results obtained from the comparative method
  • Y-axis (vertical): results obtained from the new method

From the scatter plot, a visual assessment of outliers can already be performed. When the plotted values fall close to the regression line, the two methods show strong agreement; when they consistently deviate above or below the line, systematic errors are present in one of the methods (see the plotting sketch after Figure 1).

[Image]

Figure 1: Scatter plot obtained from a Cualia Method Comparison experiment
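For the plotting step, here is a minimal sketch using matplotlib (an assumption; any plotting tool works) with hypothetical paired results. The dashed line of identity (y = x) makes agreement or systematic deviation easy to spot:

```python
# Minimal scatter plot sketch for Step 2. Paired results are hypothetical.
import matplotlib.pyplot as plt

comparative = [60, 85, 100, 150, 210, 250]  # x: comparative method results
new_method = [62, 84, 103, 148, 215, 255]   # y: new method results

plt.scatter(comparative, new_method, label="Paired results")
lims = [min(comparative), max(comparative)]
plt.plot(lims, lims, linestyle="--", label="Line of identity (y = x)")
plt.xlabel("Comparative method result")
plt.ylabel("New method result")
plt.title("Method comparison scatter plot")
plt.legend()
plt.show()
```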

Step 3 - Calculate Regression Parameters

Compute the regression parameters using the formula below:

y = mx + b

where:

m = slope

b = y-intercept

y = result from the new (test) method

x = result from the comparative method
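Here is a minimal sketch of the parameter calculation, assuming NumPy and the same hypothetical paired results as the scatter plot above. It uses ordinary least squares for simplicity; since both methods carry measurement error, laboratories often prefer Deming or Passing-Bablok regression in practice:

```python
# Minimal sketch of Step 3 using ordinary least squares. NumPy and the
# paired results are assumptions for illustration.
import numpy as np

x = np.array([60, 85, 100, 150, 210, 250])  # comparative method results
y = np.array([62, 84, 103, 148, 215, 255])  # new method results

m, b = np.polyfit(x, y, deg=1)  # slope and intercept of y = mx + b
r = np.corrcoef(x, y)[0, 1]     # correlation coefficient

print(f"slope m = {m:.3f}, intercept b = {b:.2f}, r = {r:.4f}")
print("Strong agreement:", r >= 0.95)  # Step 4 acceptability check
```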

Step 4 - Determine Experiment Acceptability

Each regression parameter is compared against target values to determine acceptability:

| Parameter | Purpose | Ideal Value | Indication |
| --- | --- | --- | --- |
| Slope (m) | Quantifies proportional systematic error | 1.0 | <1.0 indicates the new method yields lower results at higher concentrations; >1.0 indicates it yields higher results as concentration increases |
| Intercept (b) | Quantifies constant systematic error | 0.0 | <0 indicates the new method consistently reports lower results; >0 indicates it consistently reports higher results |
| Correlation Coefficient (r) | Measures the linear relationship between the two methods | ≥0.95 | ≥0.95 indicates strong agreement between the two methods; <0.95 suggests poor agreement |

✅ A method is considered accurate when the correlation coefficient (r) is ≥ 0.95, implying strong linear agreement between the two methods.

🧪
Imagine cutting hours of setup and calculations down to minutes.

Try out Cualia and experience the power of:

📏 Flawless, automated classic and regression calculations

🛠️ Custom tables generated based on your preference

✅ Experiment acceptability determined automatically for immediate verification insights

Visit Cualia to start streamlining your Method Comparison Experiment!

References

Clinical and Laboratory Standards Institute. (2013). Measurement Procedure Comparison and Bias Estimation Using Patient Samples; Approved Guideline, Third Edition (CLSI document EP09-A3). Clinical and Laboratory Standards Institute.

Westgard, J. O. (2020). Basic Method Validation, 4th Edition. Wisconsin: Westgard QC, Inc.

Westgard, J. O. (2008). The Method Comparison Experiment. In Basic Method Validation. Retrieved from https://www.westgard.com/lesson22.htm

Jensen, A. L., & Kjelgaard-Hansen, M. (2006). Veterinary Clinical Pathology, 35(3). American Society for Veterinary Clinical Pathology. https://doi.org/10.1111/j.1939-165X.2006.tb00131.x

Napte, B. (2020). How to Perform Accuracy During Method Validation? Retrieved from https://www.linkedin.com/pulse/how-perform-accuracy-during-method-validation-bhaskar-napte/

Ungerer, J. P. J., & Pretorius, C. J. (2017). Method comparison – a practical approach based on error identification. Clinical Chemistry and Laboratory Medicine (CCLM), 56(1), 1–4. https://doi.org/10.1515/cclm-2017-0842