
Science Alert: Statistics in Biology


Post by: Bethany Huot

I have recently come across several stories on the use of, and requirements for, statistics in biology that caught my attention and prompted a bit of digging. The first was the email from Sheng Yang (see post “Nature and Statistics”) on Nature's new policy for how data should be presented in future publications. While looking for an electronic link to the checklist, I saw this editorial on Nature's website:

Announcement: Reducing our irreproducibility (see Nature's link for the full editorial)

“Over the past year, Nature has published a string of articles that highlight failures in the reliability and reproducibility of published research (collected and freely available at go.nature.com/huhbyr). The problems arise in laboratories, but journals such as this one compound them when they fail to exert sufficient scrutiny over the results that they publish, and when they do not publish enough information for other researchers to assess results properly.”

There seems to be a move in this direction, as Scientific American reports:

Major Scientific Journal Joins Push to Screen Statistics in Papers It Publishes (Scientific American’s link)

Science's new policy follows efforts by other journals to bolster standards of data analysis

July 6, 2014 |By Richard Van Noorden and Nature magazine

Nature is also providing resources to aid in improving the use of statistics in biology:

“Since September 2013 Nature Methods has been publishing a monthly column on statistics called “Points of Significance.” This column is intended to provide researchers in biology with a basic introduction to core statistical concepts and methods, including experimental design.”

Is this problem new, and how extensive is it?

Here is an excerpt from the July/Aug 2010 Discover Magazine on the subject:

Why Scientific Studies Are So Often Wrong: The Streetlight Effect (article link)

Researchers tend to look for answers where the looking is good, rather than where the answers are likely to be hiding.

By David H. Freedman | Friday, December 10, 2010

The fundamental error here is summed up in an old joke scientists love to tell. Late at night, a police officer finds a drunk man crawling around on his hands and knees under a streetlight. The drunk man tells the officer he’s looking for his wallet. When the officer asks if he’s sure this is where he dropped the wallet, the man replies that he thinks he more likely dropped it across the street. Then why are you looking over here? the befuddled officer asks. Because the light’s better here, explains the drunk man.

That fellow is in good company. Many, and possibly most, scientists spend their careers looking for answers where the light is better rather than where the truth is more likely to lie. They don’t always have much choice. It is often extremely difficult or even impossible to cleanly measure what is really important, so scientists instead cleanly measure what they can, hoping it turns out to be relevant. After all, we expect scientists to quantify their observations precisely. As Lord Kelvin put it more than a century ago, “When you can measure what you are speaking about, and express it in numbers, you know something about it.”

[Image: streetlight]

Of course, as researchers we understand the challenge of not always being able to measure what we want and having to make do with what we can. Another site builds on the Discover article's constructive criticism with this advice:

“Start Under the Streetlight, then Push into the Shadows” (article link)

The last piece I came across was yesterday's email regarding the upcoming seminar by Dr. John Ioannidis (see post “Upcoming Seminars of Interest”), whose 2005 essay “Why Most Published Research Findings Are False” currently has over 2,700 citations. In this essay, Dr. Ioannidis states:

“Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”
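
The essay backs this claim up quantitatively. Ioannidis frames it in terms of the positive predictive value (PPV): the probability that a claimed (statistically significant) finding is actually true, given the pre-study odds R that a tested relationship is real, the study's power (1 − β), the significance level α, and a bias term u. His formula is PPV = ([1 − β]R + uβR) / (R + α − βR + u − uα + uβR). Here is a quick Python sketch of that formula; the illustrative values of R, power, and bias are my own assumptions, not the essay's.

    # Positive predictive value (PPV) from Ioannidis (2005): the chance
    # that a "significant" finding reflects a true relationship.
    def ppv(R, power=0.8, alpha=0.05, bias=0.0):
        """R     : pre-study odds that a tested relationship is true
        power : 1 - beta, the chance of detecting a true effect
        alpha : significance threshold (false-positive rate)
        bias  : fraction of non-findings reported as findings (u)"""
        true_pos = (power + bias * (1 - power)) * R  # R(1 - beta + u*beta)
        false_pos = alpha + bias * (1 - alpha)       # alpha + u(1 - alpha)
        return true_pos / (true_pos + false_pos)

    # Illustrative (assumed) scenarios:
    print(ppv(R=1.0))                       # well-motivated hypothesis: ~0.94
    print(ppv(R=0.1))                       # exploratory, good power:   ~0.62
    print(ppv(R=0.1, power=0.2))            # exploratory, low power:    ~0.29
    print(ppv(R=0.1, power=0.2, bias=0.3))  # plus modest bias:          ~0.12

Once the pre-study odds are low and power is modest, a “significant” result is more likely to be false than true even before bias enters, which is exactly the pattern the quote describes.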

Given our unprecedented opportunity this week of having 6+ PIs and multiple research groups all coming together at our first Pub Club Open House, I thought it would be a timely occasion for us to discuss this topic, how it applies to us, and what we can do to start pushing into the darkness to find those BIG ideas.

Tags: reproducible research, scientific method, statistics, the COM, The Void, training, Writing
April 22, 2015 | TheCOM
