Why do we use the negative binomial distribution for analysing RNAseq data?
This post is in reference to a workshop held at UTHSC on RNAseq methodologies. One issue that was discussed was why tools such as DESeq, Cuffdiff and edgeR use a negative binomial distribution with generalized linear models to determine significance. Our lab currently uses the following pipeline to analyse our data:
- TopHat2/Bowtie to align the reads to a GENCODE genome build (we are currently testing kallisto as a replacement for TopHat2/Bowtie/HTSeq)
- HTSeq to assign reads to known genomic features
- DESeq2 to perform differential expression analyses
First of all, since reads are count based, they can't be normally distributed (you can't have -3 counts, or 12.2 counts). Two common distributions for count data are the Poisson (which assumes the variance and mean [ie expression, in our case] are equal) and the negative binomial (which does not). The choice matters most when the number of biological replicates is low: with only a handful of observations per gene, it is hard to accurately estimate the variance of count data, and the assumptions behind tests for normally distributed continuous data (ie a t-test) do not hold. A good estimate of each gene's variance is essential to determine whether the changes are due to chance.
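The difference between the two distributions can be sketched with a quick simulation (the mean of 100 counts and the dispersion of 0.1 are arbitrary illustrative values, not taken from real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gene with a true mean expression of 100 counts.
mu = 100.0

# Poisson: the variance is forced to equal the mean.
pois = rng.poisson(mu, size=100_000)

# Negative binomial with dispersion alpha = 0.1, so that
# variance = mu + alpha * mu**2 (the mean-dispersion parameterisation
# used by DESeq2/edgeR).
alpha = 0.1
# NumPy parameterises the NB by (n, p): n = 1/alpha, p = n / (n + mu).
n = 1.0 / alpha
p = n / (n + mu)
nb = rng.negative_binomial(n, p, size=100_000)

print(f"Poisson: mean={pois.mean():.1f}, var={pois.var():.1f}")  # var ~ mean
print(f"NB:      mean={nb.mean():.1f}, var={nb.var():.1f}")      # var ~ mu + alpha*mu^2
```

Both samples have the same mean, but the NB sample's variance is roughly eleven times larger, which is exactly the kind of overdispersion seen between biological replicates.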
Most RNAseq tools therefore estimate the variance globally, pooling information across all genes at a similar read density (ie how a gene 'in general' behaves at that expression level). This gives a more accurate estimate of the variance than looking at each individual gene's (potentially small n) distribution, because reasonable assumptions can be made about how low-expressing and high-expressing genes vary in general.
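A crude sketch of this pooling idea, assuming a single dispersion shared by all genes (real tools like DESeq2 and edgeR fit a smooth mean-dispersion trend and shrink gene-wise estimates towards it, which is more sophisticated than this):

```python
import numpy as np

rng = np.random.default_rng(1)

n_genes, n_reps = 2000, 3      # few replicates, as in a typical experiment
alpha_true = 0.05              # shared dispersion (an assumption of this sketch)

# Simulate true means over a wide expression range and NB counts per gene.
mu = rng.uniform(10, 1000, size=n_genes)
n_nb = 1.0 / alpha_true
counts = rng.negative_binomial(n_nb, n_nb / (n_nb + mu[:, None]),
                               size=(n_genes, n_reps))

gene_mean = counts.mean(axis=1)
gene_var = counts.var(axis=1, ddof=1)   # very noisy with only 3 replicates

# Pool information across genes: fit var = mean + alpha * mean^2 by
# least squares on (var - mean) against mean^2.
alpha_hat = np.sum((gene_var - gene_mean) * gene_mean**2) / np.sum(gene_mean**4)

print(f"true alpha = {alpha_true}, pooled estimate = {alpha_hat:.3f}")
```

Any single gene's variance estimate from three replicates is unreliable, but the pooled fit across 2000 genes recovers the dispersion closely.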
This pooled variance (dispersion) parameter is then used to model the NB distribution for each gene; because the NB allows expression and variance to be unlinked, the error term can be estimated more accurately. As the number of samples increases, the local (gene-specific) variance can be estimated more reliably, but the counts will still follow a skewed, stochastic, non-normal distribution.
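One consequence of unlinking mean and variance: under a Poisson model a moderately large count looks wildly significant, while an NB model with the same mean but a fitted dispersion treats it as plausible biological variability. A sketch with scipy (the mean of 100 and dispersion of 0.1 are assumed values for illustration):

```python
from scipy import stats

mu, alpha = 100.0, 0.1   # fitted mean and dispersion (illustrative values)
n = 1.0 / alpha
p = n / (n + mu)

observed = 200           # a 2-fold change in a single replicate

# Upper-tail probability of seeing >= 200 counts under each model.
p_pois = stats.poisson.sf(observed - 1, mu)
p_nb = stats.nbinom.sf(observed - 1, n, p)

print(f"Poisson P(X >= {observed}) = {p_pois:.2e}")  # vanishingly small
print(f"NB      P(X >= {observed}) = {p_nb:.2e}")    # entirely plausible
```

The Poisson tail probability is many orders of magnitude smaller than the NB one, which is why a Poisson model applied to overdispersed RNAseq counts produces far too many false positives.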