Webinar Q&A transcription and follow-up: ‘High-throughput strategies for ADME bioanalysis’
Thank you to everyone who attended the live webinar ‘High-throughput strategies for ADME bioanalysis’. Below is a transcription of the Q&A session held during the webinar, as well as responses to the questions posed during the live event that we did not have time to answer. We hope this is a useful resource and thank our webinar attendees and our speaker, Wilson Shou (Bristol-Myers Squibb), for their time.
1. What kind of compounds would you use for LC-ESI, and what compounds for LDTD-APCI (Laser Diode Thermal Desorption-Atmospheric Pressure Chemical Ionisation)?
Compounds in general respond differently to electrospray and APCI (and LDTD-APCI is essentially APCI). So from a generic applicability perspective, when you need to run or develop methods for a wide range of compounds (small molecules, that is), I think electrospray is probably more widely applicable. However, APCI does have its advantages: it is more resistant to matrix effects and in some cases offers better linearity than electrospray. So it depends on the particular analyte you are dealing with.
2. What are other non-routine samples?
If you talk about non-routine samples in terms of bioanalysis, the routine matrices are plasma, serum, and urine; but there are tissues, which is obviously a field of study by itself and could probably warrant its own presentation. There are also very low-level or low-volume samples, like CSF, and all sorts of different organs and fluids. Those are outside the scope of the in vitro work that we typically do.
3. With the online SPE-MS/MS, how do you deal with carry-over issues that are much more frequent on such systems?
That’s a very good question. All those online SPE systems, including RapidFire as well as the ADDA system, involve multiple switching valves, and compounds can get stuck in the switching valves, in the injection needle, on the column, and so forth; so there is definitely a greater chance of carry-over problems with an online SPE approach.
To address that, there are a number of methods you can employ. The first is aggressive washing: on a RapidFire system we can wash with both an organic and an aqueous washing solvent, and you can optimize the composition of those washing solvents. Second, on RapidFire, when we have a carry-over issue for a particular analyte we often make a blank injection after a full-length injection. For instance, if the full-length injection is 10 seconds, we would make a blank injection, switching everything, for 2–3 seconds. The overall cycle time is still reasonably short and acceptable at 12–13 seconds, but you get an extra blank injection to wash the valves and the columns. A third approach, which we usually take for in vitro samples and to some extent also for in vivo samples, is to prearrange the samples: you know the time course for PK samples, for instance, and for in vitro samples you know the IC50 curve with its different inhibitor concentrations. So you can rearrange your samples to inject those with the lowest expected concentrations first, then progress to higher concentrations, as in the sketch below. In that case the impact of carry-over between any two given injections is very much minimized.
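As a rough illustration of that third point, a minimal Python sketch of the ordering step might look like the following (the well IDs and expected concentrations are invented for the example; this is not any vendor’s scheduling software):

```python
# Minimal sketch: order injections by expected concentration so that a
# high-concentration sample is never injected immediately before a low one.
# Well IDs and the 'expected_conc_nM' values are illustrative assumptions.

samples = [
    {"well": "A1", "expected_conc_nM": 10000},  # top of the IC50 curve
    {"well": "A2", "expected_conc_nM": 1000},
    {"well": "A3", "expected_conc_nM": 100},
    {"well": "A4", "expected_conc_nM": 10},
    {"well": "A5", "expected_conc_nM": 1},
]

# Inject the lowest expected concentrations first, the highest last,
# so each injection only ever follows an equal or lower one.
injection_order = sorted(samples, key=lambda s: s["expected_conc_nM"])

for i, s in enumerate(injection_order, start=1):
    print(f"Injection {i}: well {s['well']} ({s['expected_conc_nM']} nM expected)")
```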
4. How exactly is the sample transfer done before direct analysis by LDTD?
That in itself could probably be another presentation. It’s a very neat thing that our automation and assay colleagues do for us. They use an instrument that uses acoustic waves to eject droplets from the sample well into a receiving plate sitting on top of it. Each acoustic pulse ejects about 2.5 nL of liquid, so in order to transfer 250 nL we have to do that 100 times, but that is done extremely fast: even a 384-well plate can be done in about 2–3 minutes. A big advantage is that it’s really quick, with a plate ready in a couple of minutes, and it’s contactless, meaning you really don’t need tips, which is a big cost saving.
That being said, sometimes the assay procedure needs to be modified a little, because the acoustic approach does not work very well with high concentrations of organics. So you need to limit the organic composition of your sample, for instance by going with an acidic protein precipitation approach, to ensure the organic concentration in the sample is low enough to be compatible with acoustic transfer. The acoustic equipment we use is the Echo® by Labcyte.
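To make the droplet arithmetic in the answer above concrete, here is a quick back-of-the-envelope calculation; the 2.5 nL droplet volume and the 2–3 minute plate time come from the answer, while the per-ejection time is simply derived from those assumed figures:

```python
# Back-of-the-envelope check of the acoustic-transfer arithmetic.
# Droplet volume and plate time come from the answer; the rest is derived.

droplet_nl = 2.5        # volume ejected per acoustic pulse (from the answer)
transfer_nl = 250.0     # desired transfer volume per well (from the answer)
wells = 384             # full 384-well plate
plate_time_s = 150.0    # assumed ~2.5 min for the whole plate

ejections_per_well = transfer_nl / droplet_nl           # -> 100 pulses per well
total_ejections = ejections_per_well * wells            # -> 38,400 pulses per plate
approx_ms_per_ejection = plate_time_s / total_ejections * 1000

print(f"{ejections_per_well:.0f} ejections per well, "
      f"{total_ejections:.0f} for the plate, "
      f"~{approx_ms_per_ejection:.1f} ms per ejection")
```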
The following questions were submitted during the live event and answered afterwards.

1. What high-throughput strategies would you use for clinical sample assay development and routine analysis, for matrices like serum or urine, considering they all have high matrix effects and are much dirtier than microsomal or cell-based incubations?
Sample preparation would be the key here. If there are lots of samples and the LC-MS/MS analysis time is the rate-limiting step, offline, parallel sample preparation using PPE, SPE, LLE, SLE and SALLE would be useful. If there are not many samples, then an integrated online extraction system from Thermo (Cohesive) or Spark could be applied.
2. What about thermolabile compounds with LDTD?
LDTD thermally desorbs the sample and uses APCI for ionization, so thermally labile compounds would not be compatible unless approaches such as derivatization are applied prior to analysis.
3. When is accurate mass (e.g., Q-TOF or triple-TOF) required for quantitation of peptides which may have undergone some deamidation, etc.? Would QqQ be sufficient? What about the sensitivity loss from QqQ to Q-TOF?
If the deamidated products are well characterized, then QqQ with SRM could be applied. Full scan does offer the advantage of showing the entire picture, with the caveat of lower sensitivity. So maybe a reasonable compromise would be to do method development with the Q-TOF and sample analysis with the QqQ?
4. Can sample reduction strategies be used with HRMS systems?
Certainly. HRMS offers a big advantage in terms of sample pooling, as in theory an unlimited number of analytes can be multiplexed.
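As a rough illustration of the arithmetic behind pooling (all numbers below are invented for the example), combining n analytes per injection cuts the injection count by roughly a factor of n:

```python
# Rough illustration of why full-scan HRMS helps with pooling: if several
# analytes can share one injection, the injection count drops proportionally.
# Both numbers below are assumptions for the example.

total_samples = 384        # e.g. one incubation plate, one analyte per well
analytes_per_pool = 4      # assumed pooling (cassette) factor

injections_unpooled = total_samples
injections_pooled = total_samples // analytes_per_pool

print(f"{injections_unpooled} injections unpooled vs {injections_pooled} pooled "
      f"({analytes_per_pool}x reduction)")
```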
5. Do you use the RapidFire with a Sciex mass spec? If so, how do you upload transition data from Discovery Quant Optimize?
Yes we do. You can create “MS Only” methods in DQ Analyze, to be used by RapidFire. Please consult Sciex application support and I’m sure they will walk you through it.
6. Are there any other pros and cons of QqQ vs. Q-Trap for quantitation, other than the faster scanning of the Q-Trap?
When we use our Q-Trap, it is typically for full scan applications such as SRM MS/MS optimization. When we do sample analysis, the Q-Trap is actually operated under the QqQ mode.
7. Very exciting topic! How do you handle clinical samples with average concentrations down to the 10–100 pM level in serum or urine, with a limited sample volume, using a high-throughput strategy?
We would probably go back to refining sample preparation. In this case, offline sample preparation needs to provide sample concentration as well. Another approach would be to use online sample preparation such as the Thermo/Cohesive Aria TLX system, whereby you can load a relatively large sample volume onto the extraction column, flush away salts and proteins, then back-flush onto the analytical column. This approach is widely used in diagnostic labs for clinical sample analysis of things like steroids.
8. Do you have any strategies for limiting the compounds to be tested for ADME at BMS?
We use a tiered approach. We require compounds to be submitted first to first-tier, very high-throughput assays like Metstab; then, if needed, they can be submitted to higher-tier, lower-throughput assays such as metabolic soft-spot ID for follow-up studies.
9. How can you avoid mis-assignment of a peak to the wrong sample in fast serial analysis?
We use an internal standard (which typically coelutes with the analyte) to “mark” the retention time. During data processing, DQ finds the IS peak first, then performs peak integration for the analyte at that retention time.
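As an illustration of that idea only (this is not DiscoveryQuant’s actual algorithm; the peak lists and retention-time tolerance are invented for the example), a minimal sketch might look like this:

```python
# Minimal sketch of IS-anchored peak assignment: locate the internal-standard
# peak first, then integrate the analyte peak whose apex falls closest to the
# IS retention time. All values below are illustrative assumptions.

IS_PEAKS = [(0.42, 1.2e5), (0.88, 9.5e6)]                       # (RT in min, area) in the IS channel
ANALYTE_PEAKS = [(0.41, 3.0e4), (0.87, 2.2e6), (1.30, 5.0e4)]   # analyte channel
RT_TOLERANCE_MIN = 0.05                                          # assumed coelution window

# 1. The IS is spiked into every sample at a constant level, so in this sketch
#    the largest peak in the IS channel is taken as the IS.
is_rt, _ = max(IS_PEAKS, key=lambda p: p[1])

# 2. Assign the analyte peak whose apex lies within the tolerance of the IS RT.
candidates = [p for p in ANALYTE_PEAKS if abs(p[0] - is_rt) <= RT_TOLERANCE_MIN]
analyte_rt, analyte_area = max(candidates, key=lambda p: p[1]) if candidates else (None, 0.0)

print(f"IS apex at {is_rt:.2f} min; analyte assigned at {analyte_rt} min, area {analyte_area:.3g}")
```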
10. How do you control column performance across the multiple columns on a parallel LC-MS system?
It is not unusual to have column-to-column differences. Using our 2x system, we would inject all replicate 1’s onto column 1 and all replicate 2’s onto column 2. The biological calculation is therefore always performed on results obtained from a single column, eliminating variability between columns.
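A minimal sketch of that replicate-to-column assignment, with illustrative sample names, could look like this:

```python
# Minimal sketch of the replicate-to-column scheme described above (sample names
# are illustrative; this is not the actual instrument software): replicate 1 of
# every sample goes to column 1 and replicate 2 to column 2, so each biological
# calculation uses data from a single column only.

samples = ["cmpd-001", "cmpd-002", "cmpd-003"]
replicates = [1, 2]

injection_list = [
    {"sample": s, "replicate": r, "column": r}   # column number follows replicate number
    for s in samples
    for r in replicates
]

for inj in injection_list:
    print(f"{inj['sample']} rep {inj['replicate']} -> column {inj['column']}")
```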