End-to-end innovation in a regulated environment: applying spectral flow cytometry to clinical sample testing

Written by Ruth Barnard (GSK)

In this Editorial, Ruth Barnard (GSK; London, UK) discusses the use and application of spectral flow cytometry to support clinical trial sample analysis, focusing on harmonization, standardization and automation.

Ruth Barnard is a flow cytometrist with nearly 20 years’ experience in the field and has seen huge advances over her career. Starting out in high-throughput screening, followed by translational biology and biomarker discovery roles, she is currently working in the clinical space leading a team of experienced flow cytometrists and immunologists in a regulated laboratory. Here they design, optimize, validate and execute high-dimensional flow cytometry panels for GSK clinical trials. Ruth believes that a passion and interest in the next big cytometry advance does not need to be compromised by working under GCP regulations, although it certainly presents more than its fair share of extra challenges! Embedding spectral cytometry into a regulated environment has been Ruth’s primary focus over the last 4 years of her career; coupling that exciting, agile technology with the requirements specific to generating clinical trial data has been a challenging but rewarding experience.


Flow cytometry (FC) is nothing new to most immunologists, or indeed any scientist who has been tasked with measuring characteristics of single cells. Born in the 1960s, FC has grown and matured over the last 60 years into a known and loved technique that is equally at home in academic and hospital laboratories as in those of biotech and pharmaceutical companies. With increasingly complex targets being prosecuted by the pharmaceutical industry using increasingly complicated modalities, a platform that can inform the user on multiple characteristics of individual cells from a small heterogeneous sample is extremely powerful. This central property of the technology means that FC easily lends itself to biomarker discovery, enabling investigation of immune signatures of disease and/or response to treatment. In essence it is the perfect tool to go fishing with. Do you need to look at multiple cell types? Do you need to measure multiple activation/exhaustion markers? Do you need to understand proliferation status? Are absolute cell numbers or ratios of one cell type to another required? Is linking of receptor occupancy levels to functional outputs critical for dose prediction? FC can generate all this data and more from a single sample, and multiple markers expressed on multiple parent and child populations can now be measured in one large panel thanks to the relatively recent commercialization of spectral cytometry. Countless articles and book chapters are available to newcomers in the field on how to conduct all of the above types of assay, as well as on the theory of spectral cytometry, so neither will be covered in any depth here. Instead, this Editorial will focus on the use and application of spectral cytometry to support clinical trial sample analysis.

When I began my career, routine FC panels hovered around the six-color mark. Now, in our Good Clinical Practice (GCP) compliant laboratory at GSK (Stevenage, UK), we are in the process of validating a 37-color panel ready to deploy across multiple clinical trials. Others in the field have pushed even further [1], meaning that the four bivariate plots needed to illustrate data at the turn of the century have now become 40, and the reportable output list has expanded from eight to easily 400. This is all very exciting from the biomarker strategist’s point of view (let’s look at everything!), but how do we manage 400 data points from each of 300 subjects across 16 timepoints on a clinical trial? This is a tremendous amount of data to extract, reconcile, digest, interpret and use to make decisions.
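To put a number on that volume, a quick back-of-envelope calculation (a minimal sketch in Python using the illustrative figures above, not numbers from any specific trial) makes the scale concrete:

```python
# Back-of-envelope scale of a single-panel clinical dataset, using the
# illustrative figures quoted above (not from any specific trial).
reportables_per_sample = 400
subjects = 300
timepoints_per_subject = 16

total_results = reportables_per_sample * subjects * timepoints_per_subject
print(f"{total_results:,} individual results")  # 1,920,000 individual results
```

Nearly 2 million individual results from a single panel on a single trial. But more on that specific issue later – how do we even get to that point in a regulated laboratory in the first place?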

Designing and validating a spectral FC panel in the GCP laboratory is 90% preparation. What are the data going to be used for? In other words, the context-of-use needs to be well defined [2]. Is it classed as a primary, secondary or exploratory endpoint? What is the critical biological question to be answered? Do bespoke reagents need to be generated? Is it a single- or multi-center trial? The resultant increase in the complexity of logistics for multi-center trials can really influence the matrix of choice, the reagents chosen, the scope of the method development and the focus of the validation. Does the panel need to be outsourced, either fully or in part? Which vendors have the correct instrumentation and global presence? This all needs to be discussed and (mostly) decided upon before a scientist has even picked up a pipette.

In parallel with these discussions, panel design will be initiated, based on published OMIPs (Optimized Multicolor Immunofluorescence Panels) [3], other literature relevant to the disease or asset, and prior experience and knowledge. Once the first iteration of the panel has been designed, antibody titration can begin, with matrix and collection tube comparisons if needed. The evaluation and choice of quality control (QC) material to accompany each run of the assay is also investigated at this point. Once the panel is optimized, the workflow moves on to the validation stage. But what truly is validation? It is the process of ensuring that the method employed is robust and truly and reliably measures the biomarker(s) required to support drug development. Our validation packages are fit-for-purpose based on the context-of-use and closely follow the CLSI (Clinical and Laboratory Standards Institute; PA, USA) H62 guideline [4] (no regulatory body has yet published guidance on FC validation). In conjunction with this significant laboratory-based effort comes a huge documentation burden: validation plans, validation reports, controlled methods, analytical plans, notes-to-file and deviations all form part of any validation package. It is clear that validation is a time-consuming and expensive process and can often fall on the critical path of a successful clinical trial, especially if testing is required in-stream.
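To give a flavor of what one small piece of a fit-for-purpose validation looks like in practice, consider precision. As a hypothetical illustration only (the replicate values below are invented and are not taken from H62 or from our own validation packages), intra-assay precision for a single reportable can be summarized as a coefficient of variation across replicate stains of the same QC sample:

```python
import statistics

def percent_cv(values):
    """Coefficient of variation (%CV) for replicate measurements."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate results (% of parent gate) for one reportable,
# e.g. three stains of the same QC sample within a single run.
replicates = [12.4, 11.9, 12.7]
print(f"intra-assay precision: {percent_cv(replicates):.1f}%CV")
```

Multiply that kind of experiment across every reportable, donor, run and operator in a validation plan and the scale of the effort becomes clear. So how can we simplify these workflows in the 21st century to make them faster, cheaper and easier?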

Firstly, harmonization. In Cellular Biomarkers in Precision Medicine at GSK, we have designed a suite of six harmonized, high-parameter (26–37 markers) spectral FC panels (the human biological samples used were sourced ethically and their research use was in accord with the terms of the informed consent) that are intended to be an off-the-shelf solution for 80% of the requests coming from clinical teams for FC support. The panels have been designed with ‘drop-in’ slots for asset-specific or disease-specific marker requests, resulting in shorter method development times as the majority of the panel already exists. This approach also decreases data reporting timelines, as each panel shares the same common backbone of markers and has the same gating strategies applied to the top of the gating tree, resulting in multiple common reportable outputs across the panels. We have specific rules on the naming of gates so that no bivariate dot plot replicated across any of our panels can have gates named differently.
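As a sketch of how such a naming rule can be policed programmatically (the panel definitions, gate names and approved list below are all invented for illustration, not our actual vocabulary), each panel’s gating tree can be checked against a controlled list of approved gate names:

```python
# Hypothetical check that every gate name in every panel comes from a
# controlled vocabulary; all names below are invented for illustration.
APPROVED_GATES = {"Time", "Singlets", "Live", "CD3+", "CD4+", "CD8+"}

panels = {
    "Panel_A": ["Time/Singlets/Live/CD3+/CD4+", "Time/Singlets/Live/CD3+/CD8+"],
    "Panel_B": ["Time/Singlets/Live/CD3+/CD8 T cells"],  # non-standard name
}

for panel, gate_paths in panels.items():
    for path in gate_paths:
        for gate in path.split("/"):
            if gate not in APPROVED_GATES:
                print(f"{panel}: gate '{gate}' in '{path}' is not an approved name")
```

All of the above points help us with the QC of our clinical sample testing data, but they also enable us to streamline the reporting part of our workflows, as I will now describe.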

As mentioned earlier, it is all well and good being able to generate so much information from one sample, but what do we do with it all? This brings me to my second discussion point – standardization. Standardizing markers, gating strategies and gate names means we can easily implement the recently released Cell Phenotyping (CP) domain from the Clinical Data Interchange Standards Consortium (CDISC; TX, USA), which has helped us create a reporting environment for FC results that is specific, granular and, hopefully, futureproof with respect to data mining needs. CDISC is a not-for-profit organization of volunteer experts with a simple mission – to develop and advance data standards of the highest quality. These standards aim to enable accessibility, interoperability and reusability of data for more meaningful and efficient research to maximize impact on global health [5]. It should be noted that CDISC standards are required for regulatory submissions to the FDA (US) and the PMDA (Japan). Previously at GSK, FC data were reported using the Laboratory (LB) domain, which is primarily designed for laboratory safety and efficacy data. This was an adequate solution while FC panels remained simple, with minimal numbers of markers and short lists of reportables; however, it is no longer fit-for-purpose for today’s complex assays. The LB domain cannot cope well with a 30-marker panel with 300 reportables: squeezing FC data into the available fields has become time consuming, inefficient and inconsistent, and results in crucial metadata being lost. The granularity required to understand whether like-for-like tests were being compared across panels was difficult to maintain in the LB domain. CP fixes all of this with the advent of new variables, meaning that the full gating path and marker string can be captured, alongside information about the antibody clones used, stimulation status and type of assay employed. Capturing the marker string is critical, as only then can we truly know we are comparing the same population across panels and studies (think of the different ways a regulatory T cell can be phenotyped!).
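To make that point concrete, here is how a single regulatory T cell result might be captured in a long, metadata-rich format. This is a minimal sketch only: the field names are invented stand-ins, not the actual CDISC CP variables, and the clones and values are illustrative:

```python
# Illustrative record for one gated population, capturing the kind of
# metadata the CP domain can hold. Field names are invented stand-ins,
# NOT the actual CDISC CP variables; clones and values are examples only.
record = {
    "panel": "T-cell phenotyping panel",
    "population": "Regulatory T cells",
    "gating_path": "Time/Singlets/Live/CD3+/CD4+/Treg",
    # A second panel defining Tregs as CD4+FOXP3+ would carry a different
    # marker string here, making the phenotyping difference explicit.
    "marker_string": "CD3+CD4+CD25+CD127lo",
    "clones": {"CD3": "UCHT1", "CD4": "SK3", "CD25": "M-A251", "CD127": "A019D5"},
    "stimulation": "unstimulated",
    "result": 5.2,
    "unit": "% of CD4+ T cells",
}
```

We, at GSK, are now creating detailed reporting templates for each FC panel so that they can be reused across studies and provide standardization. Clinical FC is expensive; a return on that investment can only be achieved if the data are fully understandable, allowing them to be used for present and future project decisions. The new variables available in CP allow FC data to be described with the utmost clarity, as no metadata are lost. This results in time savings, greater consistency and a complete description of the data, which is invaluable for any future data mining.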

My third discussion point is automation. We cannot truly deliver on any harmonization or standardization without also having a parallel program of automation evaluation, testing and implementation; to enable the initiatives discussed above to reach their full potential, we need to harness the power of automation. As we support clinical trials from phase I to phase III in our laboratory, we need the capacity to test thousands of clinical samples a year. Thus, our suite of automation solutions covers everything from antibody cocktailing and sample processing to data analysis and data merging (e.g., merging data values with CDISC codes). However, each platform has its own pros and cons, so we are continually evaluating new tools that are faster, more user friendly, waste less reagent or are more compliant. This is not a space where we can afford to be static, so the automation technology we use will likely change multiple times in the near future until the best end-to-end solution is found. Indeed, in parallel with taking a modular approach using smaller off-the-shelf pieces of equipment, we are designing a bespoke system that will keep intervention by scientists in the lab to a minimum from the second a clinical sample arrives on site. Couple this laboratory automation with automated analytical tools such as auto-gating and we significantly increase our testing capacity. Yet working in a regulated environment does bring some restrictions when looking for new lab toys: each computer system we take forward will also need validating to ensure the correct level of compliance surrounding, amongst other things, controlled user access, audit trails and other data integrity requirements. So we do need to bear this in mind when making decisions.
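As one small, concrete example of the data-merging step mentioned above, a minimal pandas sketch (the column names and codes below are invented, not our production mapping) might join instrument-exported results to a controlled code list and flag anything left unmapped:

```python
import pandas as pd

# Invented example: merge instrument-exported results with a controlled
# code list, the kind of step that can be automated when building
# CDISC-ready datasets.
results = pd.DataFrame({
    "sample_id": ["S001", "S002"],
    "population": ["CD4+ T cells", "CD8+ T cells"],
    "value": [43.1, 27.8],
})

codelist = pd.DataFrame({
    "population": ["CD4+ T cells", "CD8+ T cells"],
    "test_code": ["CD4TPCT", "CD8TPCT"],  # hypothetical codes
})

merged = results.merge(codelist, on="population", how="left", validate="m:1")
assert merged["test_code"].notna().all(), "unmapped population found"
print(merged)
```

Combining all these threads creates our vision for a new frontier in FC clinical sample analysis: generating high quality, robust, decision-making data delivered at the right time to drive forward GSK’s portfolio of medicines and ultimately benefit those patients who are waiting for new therapies.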

And here we come to the end of a whistle-stop tour of high-dimensional FC panels in a GCP environment. There are many other smaller pieces to this jigsaw, as it is a complicated field, but it is one we are striving to simplify and improve, and I thank all of my talented and diligent team members and colleagues who are collaborating with me to achieve that same goal.

Disclaimer: the opinions expressed are solely my own and do not express the views or opinions of my employer, Bioanalysis Zone or Taylor & Francis Group.

References

  1. Konecny AJ, Mage PL, Tyznik AJ, Prlic M & Mair F. OMIP-102: 50-colour phenotyping of the human immune system with in-depth assessment of T cells and dendritic cells. bioRxiv. doi: 10.1101/2023.12.14.571745 (2024) (Epub ahead of print).
  2. Hickford ES, Dejager L, Yuill D et al. A biomarker assay validation approach tailored to the context of use and bioanalytical platform. Bioanalysis, 15(13), 757–771 (2023).
  3. Roederer M & Tárnok A. OMIPs – orchestrating multiplexity in polychromatic science. Cytometry A, 77(9), 811–812 (2010).
  4. Clinical and Laboratory Standards Institute (CLSI). Validation of assays performed by flow cytometry (1st Edition). CLSI guideline H62 (2021): https://clsi.org/standards/products/hematology/documents/h62/ [Accessed 10 June 2024].
  5. Clinical Data Interchange Standards Consortium (CDISC). Standards: www.cdisc.org/ [Accessed 10 June 2024].