
Dealing with Systematic Error in LC-MS

Identifying the causes of inaccurate measurements

by Markus Schmitt

LC-MS is becoming routine, but using the data that LC-MS instruments produce is still challenging. An LC-MS system is like a symphony of physics: everything works together in harmony, but every part needs to be in tune. Unfortunately, this means errors can creep in at multiple places.

In this article, we’ll focus on one source of error in large LC-MS data batches: systematic measurement bias. We’ll explain how you can address biases through careful controls and normalization.

Compounding errors in LC-MS

In an error-free world, you would simply measure the metabolites for each patient, then look for the differences. But in reality, each stage of your data-gathering process can introduce small errors. This risks making the data incomparable.

These errors can look similar to the biological variance you’re looking for: they can trick you into believing you’re seeing true biological differences. Separating experimental error from interesting biological variance is difficult. It’s always better to minimize sources of error than to attempt to correct large errors afterwards, although this isn’t always possible.

A graph showing different sources of variance feeding into the same dataset
Errors and biological variance get mixed up in the same dataset

Errors can appear at every step from sample to spectrum

Let’s see how quickly errors can creep in. Here’s an overview of how errors can occur in dried blood samples analyzed with LC-MS:

A diagram showing how samples go from collection to storage to preparation to LC-MS to analysis.
The five stages of data gathering

Potential errors from collection

You put a small drop of a patient’s blood on a special sheet of paper. Then you leave it to dry for several hours.

  • The patient adds a drop of blood that is too large or too small (human error).
  • The patient is overhydrated. This affects the sample concentration (environment error).
  • The patient has higher concentrations of certain compounds. But these don’t indicate any problems (irrelevant biological variation).

Potential errors with transport and storage

You can store and transport blood samples without special equipment. But storage time and temperature can still affect the samples.

  • You need to transport some samples 100 km. But your vehicle isn’t climate-controlled (environment error).
  • You collect some samples in Brazil and others in Finland. But the different temperature, humidity, and other conditions affect the samples (environment error).
  • Some samples are only one week old; others are three years old. This can change the concentrations of less stable metabolites (environment error).

Potential errors from sample preparation

In the lab, you punch the samples out of the paper and extract them with solvent.

  • You add too much solvent. This reduces the concentration of the compounds you’re measuring (human error).
  • You use some samples prepared in a different lab, where protocols differ (human error).
  • The sample size you punched from the paper is larger in one lab than another (equipment error).

Liquid Chromatography – Mass Spectrometry 

You batch multiple liquid samples into “plates” and analyze them with LC-MS. This multi-step process involves chromatographic separation and mass spectrometric analysis. 

First, you separate compounds from the blood serum by passing them through a column packed with a stationary material. Each compound interacts with the stationary material differently, creating small differences in the time each compound takes to pass through. As the liquid sample exits the column, it’s transferred to the ionization source of the MS instrument for mass analysis.

The liquid chromatography gives a retention time measurement. And the mass spectrometer gives a mass/charge ratio (m/z) for each compound. Under certain conditions, you can use the intensity of the MS signal to determine the abundance of the compound.
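
To make this concrete, here’s a minimal sketch of what one row of the resulting data might look like, in Python. The field names are illustrative, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """One detected compound in an LC-MS run (illustrative field names)."""
    rt: float         # retention time from the LC column, in seconds
    mz: float         # mass/charge ratio (m/z) from the mass spectrometer
    intensity: float  # MS signal intensity; a proxy for abundance
                      # only under certain conditions

# Example: a compound eluting at 312.4 s with m/z 180.0634
feature = Feature(rt=312.4, mz=180.0634, intensity=2.3e6)
```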

Potential errors from LC-MS analysis

  • Chromatographic columns degrade over time. So the retention times of metabolites can shift (equipment error).
  • The ionization source and instrument get dirty. This causes an increase in background noise compared to your analytes of interest (equipment error).

Potential errors from data analysis

You process the raw data: You pick, deconvolute, integrate, and align peaks in the mass spectra. Then you create a spreadsheet of peaks and abundances to focus on. 
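
To make the alignment step concrete, here’s a minimal sketch that matches features across two runs by m/z and retention-time tolerance. Real peak-picking and alignment algorithms are far more sophisticated; the tolerance values here are illustrative assumptions:

```python
import numpy as np

def align_features(run_a, run_b, mz_tol=0.01, rt_tol=10.0):
    """Naively match features between two runs.

    run_a, run_b: arrays of shape (n, 2) with columns (m/z, retention time).
    Returns (i, j) index pairs where run_a[i] and run_b[j] agree within
    the given tolerances. Tolerances are illustrative, not recommendations.
    """
    pairs = []
    for i, (mz, rt) in enumerate(run_a):
        dmz = np.abs(run_b[:, 0] - mz)
        drt = np.abs(run_b[:, 1] - rt)
        candidates = np.where((dmz < mz_tol) & (drt < rt_tol))[0]
        if candidates.size:
            # Keep the closest m/z match among the candidates
            pairs.append((i, candidates[np.argmin(dmz[candidates])]))
    return pairs
```

A naive matcher like this shows why the parameters matter: tolerances that are too wide merge distinct compounds, and tolerances that are too narrow split one compound into several features.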

  • You use the wrong parameters for preprocessing and end up with bad data (human error).
  • The alignment and peak-picking algorithms mischaracterize peaks. This introduces bad measurements (equipment error).

The difference between bias and noise

Some of these errors are systematic errors, also called “bias”. And you can counteract them. For example, if you know the storage method of a batch of samples has reduced a specific compound’s abundance, you might adjust your error margins for that compound to compensate. 

Bias is distinct from random error, or “noise”: it applies to all samples, not to any one in particular. Importantly, systematic error is reproducible and does not average out – but it can be corrected. Random error is truly random and can’t be predicted, which also means it can’t be fully eliminated. Natural instrument variation, like electronic noise, is a common cause of random error.
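
You can see the difference in a toy simulation (all numbers invented): averaging repeated measurements shrinks the random noise, but the systematic offset survives.

```python
import numpy as np

rng = np.random.default_rng(0)

true_abundance = 100.0
bias = 5.0      # systematic error: the same shift on every measurement
noise_sd = 3.0  # random error: different on every measurement

measurements = true_abundance + bias + rng.normal(0.0, noise_sd, size=1000)

# The mean converges to ~105, not 100: noise averages out, bias doesn't.
print(measurements.mean())
```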

Types of equipment-related variance

Instruments commonly introduce bias. For example, LC-MS might over- or under-measure ion abundances. If it does this consistently, you have to identify the bias, then adjust each measurement or change your analysis. 

In a simple case, an instrument might add a consistent offset to each measurement. This is often instrument-specific. You address it by comparison with quality control (QC) samples.

Two graphs showing ground truth on the left and an added bias component on the right.
Simple bias introduces a consistent error to each measurement
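
Here’s a minimal sketch of that QC comparison, assuming repeated injections of a QC sample with a known consensus value (all numbers invented): estimate the constant offset from the QC injections, then subtract it from every study sample.

```python
import numpy as np

qc_expected = 100.0                            # known consensus QC value
qc_measured = np.array([104.8, 105.3, 104.9])  # repeated QC injections
samples = np.array([88.1, 131.0, 97.4])        # study samples, same compound

offset = qc_measured.mean() - qc_expected      # estimated constant bias (~5)
corrected = samples - offset
```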

In a more complex case, the equipment might over- or under-measure by an increasing amount with each measurement. This is known as drift.

Two graphs showing ground truth on the left and with a drift component on the right.
Drift introduces a growing error to each measurement

All LC-MS equipment introduces some level of drift. Sometimes this significantly skews the results: the measured abundance correlates strongly with the position of the samples on the plate. Even without considering more nuanced effects – like sample evaporation – drift can affect comparisons within a batch.

To make things worse, different compounds within the sample may be affected in different ways. 

Identifying and counteracting drift using reference samples

To remove bias, drift, and other systematic errors introduced by equipment, you first have to identify them. Reference samples let you do this. One way to create them is to aliquot and pool all samples into a master mix. Then you measure this reference sample multiple times, at different positions in the sample queue.

Because you’re measuring the same sample, you expect to see the same abundances. A change in abundance related to the position in the queue indicates drift. From these measurements, you can calculate exactly what effect the position has on each compound.
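
Here’s a minimal sketch of that calculation, assuming the drift is linear: fit intensity against queue position for the repeated reference injections, then rescale every sample to the fitted curve’s starting value. In practice drift is often non-linear, so smoother fits (e.g. LOESS) are common; all numbers here are invented.

```python
import numpy as np

# Pooled reference sample injected at several queue positions (one compound)
qc_positions = np.array([1, 15, 30, 45, 60])
qc_intensity = np.array([100.0, 103.1, 105.8, 109.2, 112.1])

# Fit a straight line: intensity = slope * position + intercept
slope, intercept = np.polyfit(qc_positions, qc_intensity, deg=1)

def correct_for_drift(intensity, position):
    """Rescale a measurement to what it would read at position 0."""
    expected = slope * position + intercept
    return intensity * intercept / expected  # multiplicative correction

print(correct_for_drift(150.0, position=60))  # drift-corrected intensity
```

Because different compounds drift differently, you’d repeat this fit for each compound separately.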

Randomization of samples within the queue can be a powerful tool to ensure any drift or bias doesn’t also correlate with changes in phenotype or cohort. If you introduce multiple blank samples, you can also ensure you’re not analyzing false peaks or experiencing sample carryover – these could confound measurements. 
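
A simple way to build such a run order (a sketch; the interleaving interval is an arbitrary choice): shuffle the study samples, then insert a pooled QC and a blank at regular intervals.

```python
import random

samples = [f"sample_{i:03d}" for i in range(48)]  # hypothetical study samples
random.seed(42)
random.shuffle(samples)  # decouple queue position from cohort or phenotype

queue = []
for i, sample in enumerate(samples):
    if i % 8 == 0:  # every 8 samples, re-measure the references
        queue += ["pooled_QC", "blank"]
    queue.append(sample)
queue += ["pooled_QC", "blank"]  # close the batch with references
```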

An extension of this concept is to run every sample multiple times in a randomized order. This is time-consuming, but it lets you compare every sample to itself for drift – not just the reference samples and blanks. A major advantage: if an ion appears in only one MS run, it’s probably noise, so you can remove it from the analysis. This can massively reduce dataset size and build confidence that every detected ion is real.
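
A sketch of that filtering step, assuming the replicate runs have already been aligned into one table (NaN marks “not detected”; the data is invented): keep only ions detected in at least two runs.

```python
import numpy as np

# Aligned intensities: rows = ions/features, columns = replicate runs
intensities = np.array([
    [1.2e5, 1.1e5, 1.3e5],    # seen in every run: probably real
    [np.nan, 8.0e3, np.nan],  # seen in one run only: probably noise
])

detections = np.sum(~np.isnan(intensities), axis=1)
real_features = intensities[detections >= 2]  # drop single-run ions
```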

There are other causes of systematic error in LC-MS which are instrument dependent, such as drifting mass accuracy. If the mass accuracy suffers, any applied integration will give the wrong values. Some instruments have referencing for mass accuracy drift, such as lock-mass correction found on many QToF instruments. But even these methods can fall short and post-processing normalization may still be necessary. 

The importance of normalization 

The best way to prevent error is to minimize it through careful experiment design. But this isn’t always possible – error will always exist in any measurement. 

You can think of the data broadly as having two components. The first is true variance: the “real” biological differences between groups of samples, which are what you’re looking for. The second is apparent variance: the non-biological differences that external factors introduce into the data.

Normalization is the process of correcting bias – and it’s not specific to LC-MS. You use normalization to remove the apparent variance, leaving behind the true biological variance. Once you’ve minimized most of the errors, you can find interesting insights more accurately.

A diagram showing how apparent variance and true variance are combined and then separated again using normalization.
Normalization removes the apparent variance, leaving behind true variance.

The risks of not normalizing data properly

Without a good normalization process, you run two significant risks.

False positives: Finding biological difference where none exists

It might seem you’ve found a significant biological difference between two groups where none exists. For example, suppose the healthy patients gave samples on a warmer day than the diseased patients did. You assume you’ve identified the diseased patients by their biological differences. But really you’ve just identified how warm weather affects sample integrity.

False negatives: Missing biological difference

You might miss a significant biological difference that does exist. For example, the healthy patients showed a higher concentration of a specific molecule. But their samples were also more heavily diluted prior to analysis, so the difference isn’t visible in the final dataset.

Normalizing bias with post-processing

You can use different normalization algorithms to correct for drift; they work in different ways. One popular method divides each peak intensity by the mean of all peak intensities in the same spectrum. You have to do this carefully: If the normalization is too heavy-handed, you might destroy the biological variance you’re looking for. Biological data, such as antimicrobial or cytotoxicity assays, also has its own sources of error and often needs its own normalization. 
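
Here’s a minimal sketch of that method: divide each spectrum (row) by its own mean intensity, so every spectrum ends up on a comparable scale. Note the built-in assumption that most peaks are unaffected by the biology you’re studying; if that doesn’t hold, this is exactly the heavy-handedness mentioned above.

```python
import numpy as np

def mean_normalize(peak_table):
    """Divide each peak intensity by the mean of its own spectrum.

    peak_table: array of shape (n_spectra, n_peaks).
    After normalization, every row has a mean of 1.0.
    """
    row_means = peak_table.mean(axis=1, keepdims=True)
    return peak_table / row_means
```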

[A future article will compare different normalization algorithms in depth. Sign up for our newsletter to get a notification.]

Are you working on LC-MS engineering problems?

Normalization and data processing can be tricky concepts to wrap your head around. If you’re concerned about your data and how to address systematic errors in your analysis, we’d be happy to help.
