post by: Bethany Huot
I have recently come across several stories about the use of, and requirements for, statistics in biology that caught my attention and prompted a bit of digging. The first was the email from Sheng Yang (see post “Nature and Statistics”) about Nature’s new policy for how data should be presented in future publications. While looking for an electronic link to the checklist, I saw this editorial on Nature’s website:
Announcement: Reducing our irreproducibility (See Nature’s link for full editorial)
“Over the past year, Nature has published a string of articles that highlight failures in the reliability and reproducibility of published research (collected and freely available at go.nature.com/huhbyr). The problems arise in laboratories, but journals such as this one compound them when they fail to exert sufficient scrutiny over the results that they publish, and when they do not publish enough information for other researchers to assess results properly.”
There seems to be a move in this direction, as Scientific American reports:
Major Scientific Journal Joins Push to Screen Statistics in Papers It Publishes (Scientific American’s link)
July 6, 2014 | By Richard Van Noorden and Nature magazine
Nature is also providing resources to aid in improving the use of statistics in biology:
“Since September 2013 Nature Methods has been publishing a monthly column on statistics called “Points of Significance.” This column is intended to provide researchers in biology with a basic introduction to core statistical concepts and methods, including experimental design.”
Is this problem new, and how extensive is it?
Here is a piece on the subject from the July/Aug 2010 issue of Discover Magazine:
“Start Under the Streetlight, then Push into the Shadows” (article link)
The last piece I came across was yesterday’s email regarding the upcoming seminar by Dr. John Ioannidis (see post “Upcoming Seminars of Interest”), whose 2005 essay, “Why Most Published Research Findings Are False,” currently has over 2,700 citations. In this essay, Dr. Ioannidis states:
“Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”
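To put the quoted claim in context: the essay’s core argument reduces to a single formula for the positive predictive value (PPV) of a claimed finding, PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that a tested relationship is true, α is the Type I error rate, and β is the Type II error rate (so 1 − β is power). Below is a minimal sketch of that calculation in Python; the scenario labels and parameter values are my own illustrative assumptions, not the essay’s:

```python
# Minimal sketch of the PPV formula from Ioannidis (2005):
#   PPV = (1 - beta) * R / (R - beta * R + alpha)
#   R     -- pre-study odds that a tested relationship is true
#   alpha -- Type I error rate (false-positive rate), commonly 0.05
#   beta  -- Type II error rate, so (1 - beta) is the study's power

def ppv(R, power, alpha=0.05):
    """Post-study probability that a claimed positive finding is true."""
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Scenario values below are illustrative assumptions, not the essay's tables.
scenarios = [
    ("well-powered confirmatory study", 1.00, 0.80),
    ("typical underpowered study",      0.25, 0.20),
    ("exploratory long-shot screen",    0.01, 0.20),
]
for label, R, power in scenarios:
    print(f"{label}: PPV = {ppv(R, power):.2f}")
```

Running this gives PPVs of roughly 0.94, 0.50, and 0.04: once pre-study odds and power are both low, a claimed finding really is more likely to be false than true, which is exactly the sense of the quote above.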
Given our unprecedented opportunity this week of having 6+ PIs and multiple research groups all coming together at our first Pub Club Open House, I thought it would be a timely occasion for us to discuss this topic: how it applies to us, and what we can do to start pushing into the shadows to find those BIG ideas.