The NIH has released a series of reproducibility guidelines that scientists must address. These guidelines were introduced because a growing body of evidence shows that many published experiments cannot be reproduced.
Non-reproducible experiments waste time, money, and resources. As Begley and Ioannidis note in their 2015 article:
“The estimates for irreproducibility based on these empirical observations range from 75% to 90%. These estimates fit remarkably well with estimates of 85% for the proportion of biomedical research that is wasted at-large.”
Reproducibility is a mindset, and it involves an overall analysis of the scientific process to identify the areas that can be improved. In a related article, Bruce Booth reviews "Begley's six rules". Two of these rules focus on the controls and reagents used in the experiment.
Of equal importance are the instruments used to make measurements. For example, how often are the pipettes calibrated? Are all lab members adequately trained in technique? This chart from Gilson is a useful one to have…
With the increased focus on reproducibility of scientific data, it is important to look at how data is interpreted. To assist in data interpretation, the scientific method requires that controls are built into the experimental workflow. These controls are essential to minimize the effects of variables in the experiment so that changes caused by the independent variable can be properly elucidated. In fact, one of Begley's six rules, as described by Bruce Booth, asks whether both positive and negative controls were shown.
What types of controls should be considered when designing a flow cytometry experiment?
Controls should be focused on minimizing confounding variability. Sample processing, for example, can be controlled using a reference control. Where to properly set gates can be addressed using the FMO control. Controls for treatment can include unstimulated and stimulated controls. Reagent controls ensure that the reagents are working and are at the correct concentration. Compensation controls are critical; these have been discussed in detail elsewhere. Of course, there are some controls…
To conclude our series on rare event analysis, it is time to discuss the statistics behind rare event analysis. The first two parts of this series covered the hardware aspects of measuring rare events and some specific recommendations for gating and analysis of rare events.
It is necessary to sort through hundreds of thousands or millions of cells to find the few events of interest.
With such low event numbers, we move away from the comfortable domain of the Gaussian distribution and move into the realm of Poisson statistics.
There are three points to consider that build confidence that the events being counted are truly events of interest, and not random events that happen to fall into the gates of interest.
1. How do you know if an event is real?
How do you know that your rare event is real? When subsetting the population, you might have an occurrence rate of 0.1% or lower. This means that for every 100,000 cells, 100 cells or fewer will be in the final gate of interest.
How can you confirm and be comfortable they are real?
In Poisson statistics, the number of positive events…
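The key consequence of Poisson counting statistics is that the standard deviation of a count N is √N, so the relative error (CV) is 1/√N: 100 positive events gives a 10% CV, while 10,000 events brings it down to 1%. This relationship, and how it translates into the total number of cells you need to acquire for a rare population, can be sketched in Python (the function names here are ours, but the 1/√N rule itself is standard Poisson statistics):

```python
import math

def poisson_cv(n_events: int) -> float:
    """CV of a Poisson count: the standard deviation of a count with
    mean N is sqrt(N), so CV = sqrt(N) / N = 1 / sqrt(N)."""
    return 1.0 / math.sqrt(n_events)

def total_cells_needed(target_cv: float, frequency: float) -> int:
    """Total cells to acquire so the rare-event count reaches target_cv.
    Required positive events: N = (1 / target_cv)^2, then scale up by
    the population frequency."""
    required_events = (1.0 / target_cv) ** 2
    return math.ceil(required_events / frequency)

# 100 positive events -> 10% CV; 10,000 positive events -> 1% CV
print(f"CV for 100 events:    {poisson_cv(100):.1%}")
print(f"CV for 10,000 events: {poisson_cv(10_000):.1%}")

# For a 0.1% population, a 5% CV requires (1/0.05)^2 = 400 positive
# events, which means acquiring 400,000 total cells.
print(total_cells_needed(target_cv=0.05, frequency=0.001))
```

This is why collecting "a few events in the gate" is never enough for a rare population: halving the CV requires quadrupling the number of positive events collected.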
Having dealt with the hardware issues related to rare event analysis in the first part of this series, it is time to turn to our second focus: how samples are prepared.
Stem cells, circulating tumor cells, and minimal residual disease in cancer patients were all discovered through the power of rare event flow cytometry. When preparing for rare event analysis, sample preparation and data analysis must be taken into account at the beginning.
How will we stain our cells? How will we analyze our cells? What controls will we use to help us identify our rare events? What statistical methods will we use to analyze our results? Here are six procedural limitations that impact the quality of rare event flow cytometry data, and how to optimize your assay to get the best results possible.
1. Staining your cells.
In every lab, there is the "Notebook", the collection of time-tested protocols handed down from the PI and guarded by the senior technician. In the confines of the "Notebook" is the "Protocol" to stain cells for flow cytometry. What worked 30 years ago is good enough for today, r…
“Not everything that can be counted counts and not everything that counts can be counted.” — William Bruce Cameron (but often misattributed to Albert Einstein)
What does this quote mean in terms of flow cytometry? Flow cytometry can yield multi-parametric data on millions of cells, which makes it an excellent tool for the detection of rare biological events — cells with a frequency of less than 1 in 1,000.
With the development and commercialization of tools such as the Symphony, the ZE5, and others that can measure 20 or more fluorescent parameters at the same time, researchers now have the ability to characterize minuscule population subsets that continue to inspire more and more complex questions.
When planning experiments to detect — and potentially sort — rare events using flow cytometry, we need to optimize our hardware to ensure that optimal signals are being generated and that rare events of interest are not lost in the system noise. This noise is also exacerbated by poor practices when running the flow cytometer.
There are three areas of hardware limitations that we…