Webinar Q&A follow-up: the origin-adjusted approach for biomarker quantification by LC–MS
Thank you to everyone who attended our webinar: The origin-adjusted approach for biomarker quantification by LC–MS. Below are Robert’s responses to the questions posed during the live event. We hope this is a useful resource, and we thank our attendees for their questions and our speaker, Robert MacNeill, for his presentation.
Robert MacNeill
Director of Bioanalysis
Pharmaron (PA, USA)
Robert is a Director of Bioanalysis at Pharmaron in Exton, with a background dominated by quantitative bioanalytical LC–MS, particularly method development with a leaning toward novel techniques and approaches. He has applied these skills in regulated bioanalysis within the contract research domain for decades.
Webinar Q&A follow-up
Is this approach the same as standard addition?
The origin-adjusted approach shares the essence of classical standard addition, but it is not the same because it includes an extra step. That step removes the endogenous level from the calibration by translating the fitted line so that it passes through the origin, enabling sample concentrations to be interpolated from the line with the ease, throughput and reliability of a pharmacokinetic assay.
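To make the underlying arithmetic concrete, here is a minimal sketch in Python of what an origin-adjusted standard addition calculation could look like; the data, the variable names (added_conc, response) and the unweighted linear fit are illustrative assumptions rather than the validated procedure discussed in the webinar.

```python
import numpy as np

# Hypothetical standard addition calibrants prepared in pooled matrix:
# known added concentrations and measured responses (e.g., peak area ratios).
added_conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 50.0, 100.0, 500.0])
response = np.array([0.42, 0.63, 0.84, 1.47, 2.50, 10.8, 21.1, 104.0])

# Unweighted linear fit: response = slope * added_conc + intercept.
slope, intercept = np.polyfit(added_conc, response, 1)

# Classical standard addition reports the endogenous level as the magnitude
# of the x-intercept, i.e., intercept / slope.
endogenous = intercept / slope

# Origin adjustment: translate the line down by the intercept so it passes
# through the origin; unknowns are then interpolated directly against
# response = slope * concentration, giving total (endogenous-inclusive)
# concentrations.
def interpolate(sample_response):
    return sample_response / slope

print(f"Calculated endogenous level: {endogenous:.2f}")
print(f"Interpolated sample concentration: {interpolate(1.05):.2f}")
```

With these made-up numbers the calculated endogenous level comes out at roughly 2, and a sample response of 1.05 interpolates to roughly 5, i.e., the total biomarker concentration in that sample.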
How concerning is the semi-quantitative region below the lowest calibrant signal?
The region in question, where the analyte peak areas fall below those of the lowest calibrant, may appear concerning at first glance, since definition is innately lacking there. However, it is not a concern. The line extrapolates from the lowest calibrant level to the origin in exactly the same proven and reliable manner in which standard addition lines extrapolate to reveal the endogenous level, so the outcome has the same reliability. Additionally, the origin can be seen as an anchor point, its reliability underpinned by the definition of the rest of the calibration, bridging this area of slight uncertainty. Furthermore, provided the biomarker does not typically descend from its characterized window into this region to any marked extent, we do not expect any significant proportion of sample levels to fall there.
Does this affect our choice of internal standards?
It does not. The choice of internal standard is a different kettle of fish, but we can at least say that the choice available is wider than in the popular alternative, the surrogate analyte approach. In the latter, two analyte analogs must be selected and they must not interfere with each other; they are normally a pair of isotopologues. In the origin-adjusted approach, only one analog is required, and it plays the role of the internal standard.
Would the sheer magnitude of the endogenous level have an impact on the reliability of this method?
Only if it limits the construction of an adequate line, with linear response maintained, above the endogenous level in question. Include sufficient dynamic range above this level, with enough replicates if necessary, to give the definition needed for the innate extrapolation to the endogenous level and then, of course, for the translation of the line to the origin.
Do real samples suffer from the additional variability most visible in the lowest calibrants?
Real samples do not suffer this additional variability. Only calibrant or QC samples, which have nominally spiked analyte on top of the endogenous analyte, carry this double dose of variability; it is most visible at the low end of the line but does not compromise the standard addition outcome.
What is the recommended minimum number of multiples above the endogenous level that should be added using standard addition?
That’s an interesting question. The simplistic answer is the more the better, as long as linearity is maintained. I think something along the lines of a typical LC–MS assay dynamic range, 500- to 1000-fold, is adequate, as has been shown experimentally. Where minimizing time and cost is a priority, it might be interesting to investigate truncating the range and assessing the change in the calculated endogenous level, and hence the effect on reliability, as the range is pulled down.
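As a rough illustration of that truncation check, the sketch below (reusing the same kind of hypothetical data and unweighted fit as the earlier example) refits the line after successively dropping the highest calibrants and reports how the calculated endogenous level moves.

```python
import numpy as np

# Hypothetical standard addition data: added concentrations and responses.
added_conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
response = np.array([0.42, 0.63, 0.84, 1.47, 2.49, 10.8, 21.1, 104.0, 207.5])

# Truncate from the top: refit after removing the highest calibrant each time
# and recompute the calculated endogenous level (intercept / slope).
for top in range(len(added_conc), 3, -1):
    slope, intercept = np.polyfit(added_conc[:top], response[:top], 1)
    print(f"range up to {added_conc[top - 1]:>6.0f}: "
          f"calculated endogenous = {intercept / slope:.2f}")
```

A calculated endogenous level that stays stable as the top calibrants are removed would suggest the range can be pulled down without compromising reliability; drift would argue for keeping the wider range.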
Have you used this approach in the lab or to validate any methods yet?
Yes indeed. For examples, see the open-access 2022 publication in ACS Omega.
For standard addition, how many sampled data points and replicates should be established?
I think enough levels, with at least n=2 at each, to describe a typical 500- or 1000-fold calibration line above the endogenous level (as in normal LC–MS quantitative methods) is sufficient. Checking linearity across the whole range is important. For the lowest nominal over-spiked level, n=3 may give more confidence in the definition, but the remainder of the line makes an immense contribution and provides that solid anchor point, which represents the reliability of standard addition.
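Purely as an illustration of that kind of layout (the specific multiples, the rough endogenous estimate and the n=3 at the lowest over-spiked level are assumptions for the example, not a prescription), a 1000-fold set of additions above the endogenous level might be enumerated like this:

```python
# Illustrative calibrant layout: additions above a rough prior estimate of the
# endogenous level, spanning ~1000-fold, with n=2 throughout except n=3 at the
# lowest over-spiked level. An unspiked (zero-addition) matrix level would
# typically also be included.
endogenous_estimate = 2.0                      # preliminary estimate, units of choice
multiples = [1, 2, 5, 10, 50, 100, 500, 1000]  # hypothetical spacing of additions

for i, m in enumerate(multiples):
    addition = m * endogenous_estimate
    n = 3 if i == 0 else 2
    print(f"level {i + 1}: add {addition:>7.1f}, n={n}")
```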