To conclude our series on rare event analysis, it is time to discuss the statistics behind it. The first 2 parts of this series covered the hardware aspects of measuring rare events and some specific recommendations for gating and analysis of rare events.

It is necessary to sort through hundreds of thousands or millions of cells to find the few events of interest.

**With such low event numbers, we move away from the comfortable domain of the Gaussian distribution and into the realm of Poisson statistics.**

There are 3 points to consider to build confidence that the events being counted are truly events of interest, and not random events that just happen to fall into the gates of interest.

## 1. How do you know if an event is real?

How do you know that your rare event is real? When subsetting the population, you might have an occurrence rate of 0.1% or lower. This means that for every 100,000 cells, 100 cells or fewer will be in the final gate of interest.

How can you confirm and be comfortable they are real?

In Poisson statistics, the number of positive events is the important factor, not the total number of events.

**In Poisson statistics, the mean and variance of the distribution are equal to the number of positive events. The standard deviation is the square root of the variance.**

So, if you have 2 events in a region the CV of that data is roughly 71%, whereas with 100 events, the CV drops to about 10%.
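The relationship behind those numbers is simple: for a Poisson count of N positive events, the standard deviation is √N, so the CV is √N/N = 1/√N. A quick sketch in Python:

```python
import math

def poisson_cv(positive_events):
    """CV of a Poisson count: SD/mean = sqrt(N)/N = 1/sqrt(N), as a percentage."""
    return 100.0 / math.sqrt(positive_events)

# counting error shrinks only as the square root of the positive events
for n in (2, 10, 100, 1000):
    print(f"{n:>5} positive events -> CV ~ {poisson_cv(n):.1f}%")
```

With 2 events the counting CV is about 71%; it takes 100 events to reach 10%, and 10,000 events to reach 1%.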

But, what does this really mean?

In this paper by Maecker and coworkers, the authors looked at inter-lab CVs in flow cytometry experiments and estimated them to be as high as 40%, some of which could be reduced by centralizing the analysis. More importantly, the inter-lab CVs were highest (57-82%) on samples where the average percentage of positive cells was below 0.1%.

And, with as few as 12 positive events, Poisson counting alone gives a CV of nearly 29% (√12/12 ≈ 0.29).

The frequency precision of our measurement is now dominated by assay errors, not the rare events that are analyzed.

With rare event analysis, you demonstrate significance through assay reproducibility. The number of samples that should be measured can be determined using the power calculation, which is discussed in more detail here.
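As a rough sketch of that power calculation, the standard two-sample approximation, n per group ≈ 2((z_α + z_β)·σ/Δ)², can be computed directly. The numbers below are illustrative assumptions, not values from this post:

```python
import math

def samples_per_group(sd, difference, z_alpha=1.96, z_beta=0.8416):
    """Approximate samples per group for a two-sample comparison.

    Defaults assume a two-sided alpha of 0.05 (z = 1.96) and
    80% power (z = 0.8416); sd and difference share the same units.
    """
    n = 2 * ((z_alpha + z_beta) * sd / difference) ** 2
    return math.ceil(n)

# hypothetical example: detect a 0.05% difference in frequency between
# 2 groups when the replicate-to-replicate SD is 0.04%
print(samples_per_group(sd=0.04, difference=0.05))
```

Dedicated tools (e.g. statsmodels' power module, or G*Power) handle the exact t-distribution version, but this approximation shows how quickly the required sample count grows as assay variation approaches the difference you hope to detect.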

## 2. How many total events do you need?

Statistically, assay variation can be a major source of error for this analysis. That leads to the question, “How many events do you need?”

**The short answer: whenever possible, collect as many events as you can.**

As discussed previously, there may be limitations imposed by the hardware and software which limit how much data can be collected in a single file. This means you may have to collect multiple files from the same tube.

With third-party software, it is possible to do some preliminary gating to reduce file size, then concatenate the multiple files after this preliminary analysis so that the final gating is performed on the complete dataset.

Turning back to how many events is enough, there is more than one n. To show the significance of the data, the analysis must be repeated multiple times (i.e. power the experiment appropriately) and have the correct complement of appropriate negative controls.

The data above shows both of these concepts. On the left is the gating strategy and the control (a normal patient control), while on the right are the results of several analysis runs on 2 patients, showing the differences between the 2 populations.

The statistical analysis between these 2 shows that there is a significant difference, as denoted by the asterisk.

**Returning to the question of how many events is enough, the question to ask is, “What is the CV required for analysis — what spread of the data is acceptable?”**

Is 10,000 events enough?

The chart below shows the coefficient of variation (CV) value for a given frequency of cells. The CV is related to the number of positive events, and is defined as the SD/mean.

In general, a lower CV is better. The CV is another way to express the precision and repeatability of an experiment.

Using this table, if a broad CV is acceptable, then with a cell frequency of 0.1%, 10,000 events is enough. However, a 10% CV requires 100 positive events, so 10,000 total events is only sufficient when the population frequency is 1% or higher.
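The logic behind the table can be reproduced directly: a target CV fixes the number of positive events needed, (100/CV)², and the population frequency then fixes the total events to collect. A short sketch:

```python
import math

def total_events_needed(cv_percent, frequency):
    """Total events to collect for a target Poisson CV.

    Positive events needed = (100/CV)^2; dividing by the population
    frequency (as a fraction, e.g. 0.001 for 0.1%) gives total events.
    """
    positives = (100.0 / cv_percent) ** 2
    return math.ceil(positives / frequency)

# total events required for a 10% CV across a range of frequencies
for freq in (0.01, 0.001, 0.0001, 0.00001):
    print(f"{freq:.3%} frequency -> {total_events_needed(10, freq):,} total events")
```

At a 10% CV, a 1% population needs 10,000 total events, a 0.1% population needs 100,000, and a 0.001% population needs 10 million.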

For very rare populations, a 10% CV can require collection of a million, or even 10 million, total events. Collecting at a rate of 10,000 events per second, it would take 1,000 seconds, or nearly 17 minutes, to collect 10 million events.

The CV relates to the ability to identify a difference between 2 populations, which, in turn, relates to the power of the experiment. Since Poisson statistics give us the standard deviation of the population directly, the calculations are easier. However, the difference between the control and the experimental samples will ultimately drive this.

In this paper by Mario Roederer, the author discusses how many events are needed to know if something is real. One of his key recommendations is to compare the positive sample to a set of controls so that the data can be interpreted correctly.

There’s no arbitrary number of events that is the “right” number.

Even 12-14 positive events may be sufficient, based upon your knowledge of the system and the data generated by your controls.

## 3. How do you sort rare events?

The ability to sort cells for downstream applications is one of the most powerful applications of flow cytometry. Poisson statistics again play a role in determining an appropriate event rate.

If the drop drive frequency is 80 kHz, or 80,000 droplets generated per second, how many events per second should you run? Remember that a cell sorter sorts droplets, not cells, per se; the cells are contained within the droplets.

Depending on the sort envelope, the sort decision can include 1 or 2 droplets. So, what is a reasonable event rate?

When the event rate is equal to the drop drive frequency, Poisson statistics predict that a little under 40% of the drops will have no cells, about 40% will have 1, almost 20% will have 2, and about 8% will have 3 or more cells.

If the event rate is ½ the drop drive frequency, 7.5% of the droplets will have 2 cells. When the event rate is ¼ the drop drive frequency, about 80% of the droplets are empty and about 2% of the drops will have 2 events. Going to ⅙ the drop drive frequency, the improvements are minimal.

**So, if the drop drive frequency is 40 kilohertz or 40,000 droplets per second, the event rate should be no more than 10,000 events per second.**
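These droplet-occupancy figures come straight from the Poisson distribution, with the mean set to the ratio of event rate to drop drive frequency. A short sketch to reproduce them:

```python
import math

def drop_occupancy(event_rate, drop_frequency, max_k=2):
    """P(k cells per droplet) for Poisson arrivals.

    lambda = event_rate / drop_frequency; returns [P(0), ..., P(max_k), P(>max_k)].
    """
    lam = event_rate / drop_frequency
    probs = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(max_k + 1)]
    probs.append(1.0 - sum(probs))  # everything above max_k
    return probs

# occupancy at event rates of 1x, 1/2x, and 1/4x an 80 kHz drop drive
for rate in (80000, 40000, 20000):
    p0, p1, p2, p_more = drop_occupancy(rate, 80000)
    print(f"{rate:>6}/s: empty {p0:.1%}, one {p1:.1%}, two {p2:.1%}, 3+ {p_more:.1%}")
```

Running it shows why the improvements flatten out: dropping from ½ to ¼ of the drop drive frequency cuts the 2-cell droplets from about 7.6% to about 2.4%, and going lower buys very little.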

What does this mean practically?

This chart can help you determine how long a sort will take, based on the drop drive frequency and the frequency of the population, assuming 100,000 cells are needed for a downstream application.
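The arithmetic behind such a chart is straightforward to sketch; the event rate and population frequency below are hypothetical examples, not values from the chart:

```python
def sort_time_minutes(cells_needed, frequency, event_rate):
    """Minutes of sorting to collect `cells_needed` cells of a population
    present at `frequency` (as a fraction), at `event_rate` events/second."""
    total_events = cells_needed / frequency
    return total_events / event_rate / 60.0

# hypothetical example: 100,000 cells of a 0.1% population at 10,000 events/s
print(f"{sort_time_minutes(100_000, 0.001, 10_000):.0f} minutes")
```

Because the rarer the population, the more total events must pass the laser, halving the frequency doubles the sort time at a fixed event rate.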

Sort operators are often asked if there is a way to reduce the time it takes to sort, especially with a rare event population. Since there is a ceiling on event rates, our only option is to enrich the sample to increase the proportion of desired events.

This can be done using a depletion assay with magnetic beads from Miltenyi Biotec, the IMAG system, Dynabeads, and others.

In these systems, cells are tagged with an antibody conjugated to a magnetic bead and then exposed to a magnet. The labeled cells are held by the magnet, while the unlabeled cells stay in suspension or pass through the column for collection and downstream sorting.

Let’s look at when it might be helpful to incorporate this pre-sort enrichment step.

Starting with 100 million cells and a desired population at 0.01%, if you took those 100 million cells and sorted them at 20,000 events per second, it would take about 83 minutes to do the whole sort.

If we take those same 100 million cells and perform a magnetic bead enrichment, which will take about 45 minutes using one of the various magnetic isolation kits, the untouched cells will be about 10 million cells and the population of rare events is enriched to 0.1%.

Sorting those 10 million cells at 20,000 events per second will take only about 8 minutes, compared to 83 minutes without the magnetic bead enrichment. The faster sort means that your cells are going to be healthier because you can get them back into culture, or into whatever buffer they need, more quickly.
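The time savings can be checked in a few lines; the numbers simply restate the arithmetic in the text (100 million cells at 20,000 events per second, reduced to 10 million cells after roughly 45 minutes of magnetic enrichment):

```python
def minutes_to_sort(cells, event_rate):
    """Minutes to run `cells` total events through the sorter."""
    return cells / event_rate / 60.0

rate = 20_000                                   # events per second
direct = minutes_to_sort(100_000_000, rate)     # sort everything: ~83 min

enrich_prep = 45                                # ~45 min of bead enrichment
enriched = minutes_to_sort(10_000_000, rate)    # sort the remainder: ~8 min

print(f"direct sort:   {direct:.0f} min")
print(f"with enrichment: {enrich_prep} min prep + {enriched:.0f} min sort")
```

Even counting the enrichment step itself, the total hands-on-instrument time drops sharply, and the cells spend far less time in the sorter's fluidics.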

After all the tweaking of the hardware and optimizing the data analysis, the statistics must be considered. Poisson statistics dominate rare event analysis. From determining how many cells to collect, to how fast to sort cells, the number of positive events is critical for determining the statistics involved. The charts and data in this blog can help design your next rare event analysis experiment, and help provide the basis for improving reproducibility and consistency of the experiments.

**To learn more about Statistical Challenges Of Rare Event Measurements In Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.**

### Tim Bushnell

My other passions include grilling, wine tasting, and real food. To be honest, my biggest passion is flow cytometry, which is something that Carol and I share. My personal mission is to make flow cytometry education accessible, relevant, and fun. I’ve had a long history in the field starting all the way back in graduate school.
