(Tried to come up with a more descriptive title than the original)
> Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored – that they are unfairly being held to higher standards than everyone else.

I'm not sure how accurate this is. This article focuses pretty much only on statistics, but has very little to say about research design, which is just as important as statistics. If you have a terribly designed experiment, you can't make valid inferences from your results even if your other methods are airtight. This paper in Frontiers has a good discussion of some of the methods problems in a parapsychology paper (the discussion of expectation bias on pages 3-4 is particularly relevant here).
I think stuff like this should give some pause to those who consciously or unconsciously believe in "Scientism", the notion that the only real truths and facts are empirical scientific ones. We don't really see the world as it is, only as it is presented to the conscious mind. We can't really know if the order we think we see in the universe is really "out there", as most people usually assume, or, as Kant believed, is simply the result of the way our minds/brains order our sensory perceptions. We BELIEVE that certain statistical methods are a valid way to separate "fact" from subjective anecdote and bias, but is that really true?
Well, biased or not, I take science over magical beings that are their own son any day.
That is a very edgy and simplistic dismissal of almost 2000 years of theology and philosophy, there.
Kant was wrong, however. He inherited the medieval sceptical assumption that reality is 'behind' what we see. Reality is nothing more than the ground of the logical structure of experience - a structure that cannot be reduced to experience or equated with it. The phenomenalist assumption rests on a genetic fallacy that confuses the conditions for knowing something (epistemic possibility) with the conditions for something being the case (truth). Science is one kind of activity which clarifies and exposes the logical structure of the world; there is no deep metaphysical problem here. Scientism is better opposed on realist grounds rather than phenomenalist ones: idealism is a very poor way of being anti-reductionist. Not stupid 'behind the veil' realism, which has mostly been an idealist strawman, but a simple ordinary sort of realism which says 'if it has properties, it exists' - taking this to be a definition of 'exists' rather than an argument.
Haha. He doesn't believe in superstition from people thousands of years ago. So close-minded.
If your point is that science is not as reliable as some people give it credit for, I am totally with you. With that said, it sounds like you are suggesting that there is a way of knowing things that is more reliable than, or at least as reliable as, science. Is that what you are saying? If so, then what and how?
An issue with scientistic types is that they think that scientific truths are the ONLY truths, and that other kinds of truths that do not overlap with scientific truths, like subjective personal truths, literary/mythological truths, and spiritual truths, either do not exist or are not really truths. With the latter two, there is reluctance in our modern secular society to see myth and spiritual experiences as "truths" because of the modern cultural struggle against religious fundamentalists who wrongly treat them as scientific truths.
The terms you are using sound like they could be interpreted many different ways. What would you say is the difference between a scientific truth and any of the other truths you are describing, and how can the other ones be known/detected?
EDIT2: This comment has started a bit of an unrelated debate, so I would like to quickly clarify that my original complaint in this comment was more of a semantic issue that I had with the author repeatedly saying "science" when he meant psychology/medicine specifically. This is something that I've seen a lot from people in some fields more than others, and it is a pet peeve of mine.

Original comment: It's important to note that this is an indicator for social sciences like psychology, but not really for harder sciences like Physics/Astronomy/Chemistry. It always annoys me that people just say "science" when talking about soft sciences, as if the shortcomings there apply to the hard sciences as well. This shows that when studying people, it's very easy to do it wrong and to get bad results and false positives. It does not say anything about harder sciences. This doesn't mean that things like climate change could just be a placebo effect. Human biases don't change thermometers, but they might change the more subjective criteria of a softer science field.

EDIT: The important thing to remember here is that these fields operate differently. In the softer sciences (especially psychology) the only evidence comes from "doing things to people and watching what happens". The problem is that people are very complicated, and it's easy to fuck it up and accidentally introduce a bias. With harder sciences, we know more about the system, can build a more solid mathematical/theoretical foundation that can predict things, and can approach situations from a larger variety of observational vantage points to get a fuller picture.
I understand the idea, but the problem is that chemists are themselves soft. People who study the "hard" sciences are just as subject to bias and error as sociologists. From Cargo Cult Science:

> We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.

Someone plotted it. One might argue that human behavior is so much more complicated than anything that can happen in a Petri dish that more errors in social sciences are inevitable, but (1) I am not convinced that this is true and (2) it assumes that experimenters do not account for complexity when drawing conclusions in their work. In practice, we always begin with a personal judgement about the reliability of the evidence we observe, so we never escape the ouroboros.
You trying to draw a line between them shows you don't understand science as well as you think you do. The same shortcomings do apply to hard sciences as well. There is a reason why double blind studies have so much significance attached to them. There is a reason why people look to see if studies have been independently verified and why attempts are made to verify them. Biases of the people performing the studies can affect the results no matter what is being studied. (Of course, the evidence in favour of Anthropogenic Global Warming is overwhelming to the point of being irrefutable.)
I'm not saying that the line is anywhere near distinct, but the extremes of the hard/soft science spectrum are intrinsically and fundamentally different. While physics has a good mathematical and theoretical backing, and the experiments and the theory build off of each other and check each other's power, you don't have so much variety in sources of information in softer sciences. You mention double blind studies, but again, those only exist in softer sciences because of the systematics and complexity of dealing with human/biological subjects. The article mentions meta studies as the top of the evidential pyramid, but they don't exist in harder sciences because they don't make any sense in that context.

Of course all experiments are subject to systematics. I would never claim anything different. However, the point that I raised in my edit (posted before your reply) still applies. The very grounds on which science is done are fundamentally different in different fields because of the type of research possible and the types of data available. For example, in Astronomy (my field), you can't really set up an experiment where you create a nebula and watch it form into a star system. You also can't watch one system go through its entire life because of the timescales involved. Experiments don't really work on this scale. Instead, you have to rely on observational and theoretical techniques to study how things work. Further, in Astronomy (and chemistry and atmospheric science and others) you have physics to fall back on to predict how things will happen. (EDIT: To clarify, this was my point with the climate change example in the first post. The human researcher's beliefs won't change what the thermometer will read, and they won't change how the winds will blow. They might change how another human will react. An important note is that they might also affect which data gets recorded; for example, if the researcher takes Christmas off every year they might miss something important in the data at that time, or if the researcher only takes data once every few days they might miss some of the smaller scale/period signals, etc.)

The physics and the math don't change because of the placebo effect. Unfortunately, people do. In harder fields, you study these more objective things; in softer fields, you study softer and more subjective things that can be more easily influenced. Again, I'm not saying that there is no room for misinterpretation or for systematics (although in harder sciences, the systematics are more physical and quantifiable in nature), just that many of the specific grievances in the article are less applicable to a harder science. That said, there are still human/researcher biases to take into account with the harder sciences; they're just very different.
Meta analyses absolutely are done in hard science fields. Use of statistics and mathematical backing is very much done in social science fields. Biology makes use of blinded studies. Physics has used some blinded studies as well. The amount of heterogeneity between hard sciences and soft sciences is virtually identical.
Interesting read in the citation. Yes, I concede that many physics papers will compile the results of previous groups to get more realistic numbers as a combination of their work and experiments. Although it isn't called a meta analysis in hard science fields, it is basically the same thing. Forgive me for not thinking about that very thoroughly; it's 6 AM for me and I've been awake for a while :)

From my experience with the soft sciences (which is admittedly not much beyond an undergraduate level, unlike hard sciences), their mathematical backing is less rigorous than you would find in, say, physics. You won't really find any mathematical proofs in a biology paper. All fields APPLY math to model and statistics to approximate errors and variations, but only the harder sciences USE mathematical logic and proof structure to predict a relationship between unstudied things from the mathematical/geometrical underpinnings of the universe. The softer fields use the math to explain, but only the harder fields can use it to predict and to build off of the previous knowledge base. If you have any counterexamples to the things in my last paragraph I would love to see them, for I know of none.
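For readers following this exchange: the "combination of previous results" both commenters are describing, whether it's called a world average in physics or a fixed-effect meta-analysis in medicine, usually boils down to an inverse-variance weighted mean. Below is a minimal sketch of that recipe under my own assumptions; the `combine` helper and the numbers are invented for illustration, not taken from any of the studies discussed here.

```python
import math

def combine(estimates, std_errors):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]  # more precise results get more weight
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, estimates)) / total
    return mean, math.sqrt(1.0 / total)

# Three hypothetical measurements of the same quantity:
mean, se = combine([1.62, 1.59, 1.65], [0.03, 0.05, 0.02])
print(f"combined estimate: {mean:.3f} +/- {se:.3f}")
```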
My original complaint was more of a semantic issue that I had with him repeatedly saying "science" when he meant psychology/medicine specifically. This is something that I've seen a lot from people in some fields more than others and it is a pet peeve of mine.
That was a great read. > I think it’s a combination of subconscious emotional cues, subconscious statistical trickery, perfectly conscious fraud which for all we know happens much more often than detected, and things we haven’t discovered yet which are at least as weird as subconscious emotional cues. So it's a bit of everything, then.
> 5. Stricter p-value criteria. It is far too easy to massage p-values to get less than 0.05. Also, make meta-analyses look for “p-hacking” by examining the distribution of p-values in the included studies.

> 10. Stricter effect size criteria. It’s easy to get small effect sizes in anything.

I agree with the rest, but commandment 10 does not seem right. You can make the p-value criteria in (5) as strict as you want, but if you find even the tiniest of effects that does not go away, that is important. In fact, I would say "It’s easy to get small effect sizes in anything" is a misunderstanding (unless the author is claiming the effect is due to chance, in which case the p-value criterion should be the stricter one, not the effect size criterion).
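As a toy illustration of the "examine the distribution of p-values" idea in point 5 (this is my own simplified sketch, not the author's method and not the full published p-curve procedure): with real effects, significant p-values pile up near zero, so a surplus of values just under 0.05 is a warning sign. The bin edges, the `phacking_check` name, and the example p-values are all invented.

```python
from scipy.stats import binomtest

def phacking_check(p_values):
    """Crude check: do significant p-values bunch up just under 0.05
    (0.04-0.05) rather than further below it (0.03-0.04)?"""
    just_below = sum(1 for p in p_values if 0.04 < p <= 0.05)
    further_below = sum(1 for p in p_values if 0.03 < p <= 0.04)
    n = just_below + further_below
    if n == 0:
        return None  # nothing to compare
    # With genuine effects the p-curve is right-skewed, so at most about half
    # of these values should sit in the 0.04-0.05 bin; an excess is suspicious.
    return binomtest(just_below, n, p=0.5, alternative="greater").pvalue

# Example: reported p-values clustered just under 0.05 raise a flag.
reported = [0.049, 0.047, 0.044, 0.048, 0.041, 0.046, 0.032, 0.012]
print(phacking_check(reported))
```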
I found this article to summarize how I view the problem: http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble . It's not just parapsychology, it's the way we do science today.

To summarize: a study with a negative result is more likely than not to go unpublished, and this has created a meta-statistical crisis. At a 95% confidence level, roughly 1 in 20 tests of a nonexistent effect will still come out positive by chance, so repeating studies is important. Now cut back grant funding across the country, creating a horde of desperate professors cranking through studies to find something to hang their hat on for the next grant; they do 20 studies, find 1 with a positive result--and they publish. You end up with a bunch of non-reproducible studies: http://www.jove.com/blog/2012/05/03/studies-show-only-10-of-published-science-articles-are-reproducible-what-is-happening

Parapsychology is only the tip of a far bigger crisis of how science is currently conducted, from funding to publication. What can you do? Don't buy into pop-news sensationalist claims from a single study. Start believing in it when multiple studies report the same result, and remain skeptical.
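A quick back-of-the-envelope simulation of that arithmetic (mine, not from either linked article): when the studied effect doesn't exist and the threshold is p < 0.05, roughly 1 study in 20 comes out "significant" anyway, and if journals only see those, the published record is all false positives. The study counts and sample sizes below are arbitrary.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies, n_per_group = 2000, 30
published = 0
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)  # no real effect:
    treated = rng.normal(0.0, 1.0, n_per_group)  # both groups are identical
    _, p = ttest_ind(treated, control)
    if p < 0.05:
        published += 1  # only the "significant" result gets written up

print(f"{published} of {n_studies} null studies came out significant "
      f"({published / n_studies:.1%}) -- and if only those are published, "
      f"the literature is 100% false positives.")
```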
Kahneman's letter called for better reproducibility of priming effects (observed when you ask one person if Mount Everest is taller or shorter than 50,000 feet, then ask them how tall Mount Everest is, then ask another person if Mount Everest is taller or shorter than 5,000 feet, then ask them how tall Mount Everest is, and the second person tends to give lower estimates). The request was enthusiastically answered by the Many Labs Replication Project, an international group of labs which revisited 13 classic experimental results. Ten effects were reproduced, with some of the anchoring effects (like the one about Mount Everest) demonstrated with stronger results than in the earlier studies.

John Ioannidis, mentioned in the Economist article, has done heroic work exposing bad science in medicine. We recently had a discussion about a simple and effective technique that may be ignored because of the way medical research works.

(P.S. Your link to the Economist article is broken because of the period at the end.)
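For anyone curious how an anchoring result like the Mount Everest example actually gets demonstrated, here is a toy sketch of the usual analysis: compare the height estimates from the group that got the high anchor against the group that got the low one. All the numbers below are invented for illustration; the Many Labs paper has the real data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Simulated estimates of Everest's height (feet) after each anchor question;
# the means, spread, and sample sizes are made up, not Many Labs results.
high_anchor = rng.normal(35000, 8000, 50)  # after "taller or shorter than 50,000 ft?"
low_anchor = rng.normal(18000, 8000, 50)   # after "taller or shorter than 5,000 ft?"

stat, p = ttest_ind(high_anchor, low_anchor)
print(f"mean gap: {high_anchor.mean() - low_anchor.mean():,.0f} ft, p = {p:.2g}")
```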
I've seen the bullshit parapsychologists put forward as "evidence" before, and there is a really good reason it is held to more scrutiny than regular scientific studies are. I have yet to see a single study that is repeatable, has appropriate controls, and shows actual evidence for metaphysical phenomena or any other form of parapsychology.
I think it's interesting that we'd rather invalidate a huge portion of studies which we so far considered sufficient than accept that there may be more to learn about the relationship between consciousness and "physical reality". Simply excluding phenomena from the realm of possibility doesn't seem very sciency to me. Not saying psi is real or anything, but the whole attitude seems strange.
I think you've misunderstood what's gone on here. The studies are considered "sufficient" only so far as they've passed the initial peer-review process; that is, they were looked over for typos, basic statistical mistakes, and consistency in their own conclusions. That isn't an indication that their conclusions are correct at all. What science does next is replication: compare the conclusions to our larger framework of knowledge, then see which findings stick and which need more work or need to be scrapped. The rejection of these psi results isn't a divergence from standard practice, it's just what happens with all scientific research - it's just that the psi results don't stand up to scrutiny.
Peer review is much more than that. You're probably right in this case, though, since parapsychology papers tend to get published in crappy journals with low standards.