ICS News Archive

March 22, 2016

Utts spearheads efforts to improve statistical accuracy

Professor and Chair of Statistics Jessica Utts is pushing for more accurate use of statistics during her tenure as president of the American Statistical Association (ASA), starting with the organization’s recently released statement against the misuse of p-values, the first of its kind in the association’s 177 years.

The “Statement on Statistical Significance and P-Values” outlines six principles underlying the proper use and interpretation of statistical p-values, one of the most commonly used determinants of statistical significance. These principles are:

  1. P-values can indicate how incompatible the data are with a specified statistical model.
  2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
  3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
  4. Proper inference requires full reporting and transparency.
  5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
  6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
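Principle 5 — that a p-value does not measure the size or importance of an effect — can be illustrated with a small simulation. The sketch below (not part of the ASA statement; the function names and the 0.02 effect size are illustrative choices) uses a simple two-sided one-sample z-test to show that a practically negligible effect yields an ever-smaller p-value as the sample grows:

```python
import math
import random

def z_test_p_value(sample, mu0=0.0):
    """Two-sided one-sample z-test p-value, assuming known unit variance.

    p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)), where
    z = (sample mean - mu0) * sqrt(n).
    """
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
effect = 0.02  # a tiny, arguably unimportant, true mean shift
for n in (100, 10_000, 1_000_000):
    sample = [random.gauss(effect, 1.0) for _ in range(n)]
    print(f"n = {n:>9,}  p = {z_test_p_value(sample):.4f}")
```

The same 0.02 shift is typically "not significant" at n = 100 but becomes highly significant at n = 1,000,000, even though the effect itself never changed — which is why the statement warns against reading a small p-value as evidence of a large or important effect.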
“The ASA releases this guidance on p-values to improve the conduct and interpretation of quantitative science and inform the growing emphasis on reproducibility of science research,” a news release on the statement reads. “The statement also notes that the increased quantification of scientific research and a proliferation of large, complex data sets has expanded the scope for statistics and the importance of appropriately chosen techniques, properly conducted analyses, and correct interpretation.”

The ASA released the statement in light of a growing struggle to reproduce statistical research—an important step in the scientific method. Statisticians and other researchers, the ASA argues, shouldn’t rely solely on p-values to determine the strength of their findings, or else they risk what Nature calls “falsely robust” results. Rather, quality statistical research will report findings based on a variety of criteria.

“Over time it appears the p-value has become a gatekeeper for whether work is publishable, at least in some fields,” Utts says. “This apparent editorial bias leads to the ‘file-drawer effect,’ in which research with statistically significant outcomes are much more likely to get published, while other work that might well be just as important scientifically is never seen in print. It also leads to practices called by such names as ‘p-hacking’ and ‘data dredging’ that emphasize the search for small p-values over other statistical and scientific reasoning.”

According to Utts, while the contents of the ASA statement and the reasoning behind it are not new, this is the first time that the statistics community, as represented by the ASA Board of Directors, has issued a statement to address these issues.