
Using controls to limit false discovery in the era of big data.

Author(s): Parks, Matthew M; Raphael, Benjamin J; Lawrence, Charles E

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr15829
Full metadata record
(DC field: value, with language code where recorded)

dc.contributor.author: Parks, Matthew M
dc.contributor.author: Raphael, Benjamin J
dc.contributor.author: Lawrence, Charles E
dc.date.accessioned: 2021-10-08T19:47:06Z
dc.date.available: 2021-10-08T19:47:06Z
dc.date.issued: 2018-09-14 (en_US)
dc.identifier.citation: Parks, Matthew M; Raphael, Benjamin J; Lawrence, Charles E. (2018). Using controls to limit false discovery in the era of big data. BMC bioinformatics, 19(1), 323 - ?. doi:10.1186/s12859-018-2356-2 (en_US)
dc.identifier.issn: 1471-2105
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr15829
dc.description.abstract (en_US):
BACKGROUND: Procedures for controlling the false discovery rate (FDR) are widely applied as a solution to the multiple comparisons problem of high-dimensional statistics. Current FDR-controlling procedures require accurately calculated p-values and rely on extrapolation into the unknown and unobserved tails of the null distribution. Both of these intermediate steps are challenging and can compromise the reliability of the results.
RESULTS: We present a general method for controlling the FDR that capitalizes on the large amount of control data often found in big data studies to avoid these frequently problematic intermediate steps. The method utilizes control data to empirically construct the distribution of the test statistic under the null hypothesis and directly compares this distribution to the empirical distribution of the test data. By not relying on p-values, our control data-based empirical FDR procedure more closely follows the foundational principles of the scientific method: that inference is drawn by comparing test data to control data. The method is demonstrated through application to a problem in structural genomics.
CONCLUSIONS: The method described here provides a general statistical framework for controlling the FDR that is specifically tailored for the big data setting. By relying on empirically constructed distributions and control data, it forgoes potentially problematic modeling steps and extrapolation into the unknown tails of the null distribution. This procedure is broadly applicable insofar as controlled experiments or internal negative controls are available, as is increasingly common in the big data setting.
dc.format.extent: 323 - ? (en_US)
dc.language: eng (en_US)
dc.language.iso: en_US (en_US)
dc.relation.ispartof: BMC bioinformatics (en_US)
dc.rights: Final published version. This is an open access article. (en_US)
dc.title: Using controls to limit false discovery in the era of big data. (en_US)
dc.type: Journal Article (en_US)
dc.identifier.doi: doi:10.1186/s12859-018-2356-2
dc.identifier.eissn: 1471-2105
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article (en_US)
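
The core idea in the abstract above, constructing the null distribution of the test statistic empirically from control data and comparing it directly to the empirical distribution of the test data, can be illustrated with a short sketch. The code below is a minimal illustration of this general idea under stated assumptions, not the authors' exact algorithm: it assumes larger statistics are more significant, estimates the null tail probability at each candidate threshold from the empirical control distribution, and picks the most permissive threshold whose estimated FDR stays at or below alpha using a Benjamini-Hochberg-style step-up scan. The function name `empirical_fdr_threshold` and all numerical details are illustrative assumptions.

```python
import numpy as np

def empirical_fdr_threshold(test_stats, control_stats, alpha=0.05):
    """Hypothetical sketch of a control-data-based empirical FDR procedure.

    At each candidate threshold t (the observed test statistics, scanned
    from most to least extreme), the FDR is estimated as

        FDR(t) ~= n_test * Pr_control(stat >= t) / #{test stats >= t},

    where Pr_control is the empirical tail probability computed from the
    control data, i.e. the empirical null. Returns the most permissive
    threshold with estimated FDR <= alpha, or None if no threshold passes.
    """
    order = np.sort(np.asarray(test_stats, dtype=float))[::-1]  # descending
    ctrl = np.sort(np.asarray(control_stats, dtype=float))
    n_test, n_ctrl = order.size, ctrl.size

    k_best = 0
    for k, t in enumerate(order, start=1):
        # Fraction of control statistics >= t: the empirical null tail.
        i = np.searchsorted(ctrl, t, side="left")
        null_tail = (n_ctrl - i) / n_ctrl
        # Expected false discoveries over actual discoveries at threshold t.
        est_fdr = (n_test * null_tail) / k
        if est_fdr <= alpha:
            k_best = k  # step-up: remember the largest passing rejection set
    return order[k_best - 1] if k_best > 0 else None

if __name__ == "__main__":
    # Toy usage: abundant control data stands in for the unknown null.
    rng = np.random.default_rng(0)
    control = rng.normal(size=100_000)                      # pure null draws
    test = np.concatenate([rng.normal(size=9_000),          # true nulls
                           rng.normal(3.0, 1.0, size=1_000)])  # true signals
    t_star = empirical_fdr_threshold(test, control, alpha=0.05)
    print("reject test statistics >=", t_star)
```

Because the null tail is read directly off the control sample rather than a fitted parametric model, the sketch never computes a p-value or extrapolates beyond the observed controls, which is the property the abstract emphasizes; the trade-off is that resolution in the extreme tail is limited by the size of the control sample.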

Files in This Item:
File: DiscoveryBigData.pdf (648.1 kB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.