comment by mk

Speaking in a scientific sense, I would say that is an underestimation.

EDIT: Sorry, I was interrupted mid comment.

IMO somewhere around 50% of submitted papers contain sloppy science, if not some degree of deception. That is, the authors are selective about what data they report. For example, if you run an experiment 4 times, and three of those times it works in a similar way, how do you treat that fourth data set? Can you write it off to some sort of experimental mistake or oversight? Did you mix up the concentrations, or were the cells somehow different to begin with? When the difference between getting funded and getting fired is on the line, you drop that data set, and hope the other three are meaningful. There is no time or reward for figuring it out. Actually, you are punished for doing so. If you include that data set, idiot reviewers will say that your data is not 'significant' and not let you publish your results.
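To make that concrete with made-up numbers (a toy one-sample t-test on per-run fold changes; nothing here is from a real experiment):

    # Toy sketch: four replicate runs, summarized as fold-change vs. control.
    # Run 4 is the one that "didn't work". All numbers are invented.
    from scipy import stats

    fold_changes = [2.1, 1.8, 2.3, 0.6]   # run 4 is the discordant one

    # Test against "no effect" (a fold change of 1.0); two-sided by default.
    all_four = stats.ttest_1samp(fold_changes, 1.0)
    best_three = stats.ttest_1samp(fold_changes[:3], 1.0)

    print(all_four.pvalue)    # roughly 0.16 -- "not significant", unpublishable
    print(best_three.pvalue)  # roughly 0.02 -- significant, off to the journal

Same experiment, one judgment call about an 'outlier', opposite conclusion.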

In addition to issues like these, studies are often underpowered, poorly analyzed, and poorly designed to begin with. Often they aren't even asking the right question. I said to my boss the other day: If you knew a store was getting hit by shoplifters a lot, what would you say is the mechanism? We ask stupid questions like this in science all the time. Someone cuts off the hands of all the people entering the store, and sees that there is far less shoplifting. Aha! It was their hands, those are shoplifting appendages!

That said, in time, progress is made, and some sort of scientific truth prevails. The system is not optimized for its elucidation, however.





caeli  ·  3423 days ago

    For example, if you run an experiment 4 times, and three of those times it works in a similar way, how do you treat that fourth data set? Can you write it off to some sort of experimental mistake or oversight? Did you mix up the concentrations, or were the cells somehow different to begin with? When the difference between getting funded and getting fired is on the line, you drop that data set, and hope the other three are meaningful. There is no time or reward for figuring it out. Actually, you are punished for doing so. If you include that data set, idiot reviewers will say that your data is not 'significant' and not let you publish your results.

Yup. Null results are almost never publishable, which is ridiculous. Every lab I've been in has dozens of old experiments that were never published because they "didn't work", which makes you wonder about any paper you read. If there are plenty of unpublished versions of a study that found nothing, then the one published version of the same experiment that did find something probably found it by chance. But we have no way of knowing whether an identical experiment has been done before, by how many people, or how many times, unless it's published. And since you can't publish null results, the cycle continues.
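You get a feel for how bad that is with a quick simulation (a sketch, assuming a true effect of zero and the usual p < 0.05 cutoff):

    # Sketch: 100 labs all run the same experiment, and the true effect is zero.
    # Only the "significant" runs get written up; the rest go in the file drawer.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    published = 0
    for lab in range(100):
        treated = rng.normal(0.0, 1.0, size=20)   # no real difference
        control = rng.normal(0.0, 1.0, size=20)   # between these two groups
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            published += 1    # a chance "finding" that makes it into print

    print(published)   # around 5 of the 100 labs

Those few that clear the cutoff are pure noise, but they're the only versions of the experiment anyone can cite.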

TaylorS1986  ·  3422 days ago

I wanted to go into psych research, but stuff like this really turned me off of it, and I decided to go into clinical psych instead. "Publish or Die" is terrible for science.