Why the Minority isn’t Always Correct

    In almost any discussion about science where there is a mainstream viewpoint and a contrarian viewpoint, someone on the contrarian side will invariably bring up Galileo. They usually state that the majority of people thought one way, Galileo thought a different way, he was prosecuted for his thoughts, and then it turned out he was correct.

    Yes, that is all 100% correct. However, there’s one thing that is rarely mentioned. Galileo had evidence on his side and the majority did not.

    A recent paper shed some interesting light on the contrarian research into global warming.

    I’m going to try a new way of reviewing peer-reviewed research, and your opinions on it will be appreciated. Here, we will examine the abstract in detail, using support from the paper and other sources. I do this because the abstract is frequently freely available and is the part that most non-scientists read anyway. In most cases, though, the methods and data that inform the abstract are more important, and they are rarely mentioned in the abstract itself.

    The title of this paper is “Learning from mistakes in climate research”[1].

    Among papers stating a position on anthropogenic global warming (AGW), 97% endorse AGW. What is happening with the 2% of papers that reject AGW? We examine a selection of papers rejecting AGW. An analytical tool has been developed to replicate and test the results and methods used in these studies; our replication reveals a number of methodological flaws, and a pattern of common mistakes emerges that is not visible when looking at single isolated cases. Thus, real-life scientific disputes in some cases can be resolved, and we can learn from mistakes. A common denominator seems to be missing contextual information or ignoring information that does not fit the conclusions, be it other relevant work or related geophysical data. In many cases, shortcomings are due to insufficient model evaluation, leading to results that are not universally valid but rather are an artifact of a particular experimental setup. Other typical weaknesses include false dichotomies, inappropriate statistical methods, or basing conclusions on misconceived or incomplete physics. We also argue that science is never settled and that both mainstream and contrarian papers must be subject to sustained scrutiny. The merit of replication is highlighted and we discuss how the quality of the scientific literature may benefit from replication.

    So let’s see what we have in store for us.

    Among papers stating a position on anthropogenic global warming (AGW), 97% endorse AGW. What is happening with the 2% of papers that reject AGW?

    This is a well-known issue. The climate debate rages on in the media and in politics, but in science, it’s pretty much a settled issue. It is also known that many of the authors of contrarian papers are employed by the fossil fuel industry… so there’s that to consider.

    The 97% figure has been confirmed by three studies: Oreskes 2004, Anderegg 2010, and Cook 2013. Another project (not peer-reviewed) found even less support for the rejection of human-caused global warming. Several attempts have been made to discredit these studies, but those attempts do not actually discuss the survey of climate scientists or the papers themselves. They (as we see in a lot of evolution discussions) are more concerned with the definitions than the data. Keep in mind we’re talking about over 14,000 abstracts and 1,200 surveys from the authors of those papers.

    We examine a selection of papers rejecting AGW. An analytical tool has been developed to replicate and test the results and methods used in these studies; our replication reveals a number of methodological flaws, and a pattern of common mistakes emerges that is not visible when looking at single isolated cases.

    And here’s the key bit right here. By examining the contrarian articles in detail, common issues were found. Think back to the review I did of the Seralini study on rats fed GMO corn. That paper was flawed in multiple ways and was eventually retracted. Likewise, the papers of the global warming deniers seem to be flawed in multiple ways.

    The authors here examined the contrarian papers and grouped them into categories based on how the papers’ conclusions were drawn. The authors then looked for patterns and comparisons between the conclusions and any problems in the papers themselves.

    Let me be clear: when examining a scientific paper, one needs to be an expert or something very close to it. As I mentioned in the Seralini review, there are a lot of mistakes that are very, very subtle, and unless you are looking for them and aware of how all of the pieces of a scientific study fit together, they are easy to miss.

    For example, in the Seralini paper, he started with rats that are specifically bred to get tumors after two years. If you didn’t know that about the rat type, you’d miss that mistake.

    Thus, real-life scientific disputes in some cases can be resolved, and we can learn from mistakes.

    I love this line. This is one of the greatest things I’ve ever seen in a peer-reviewed paper.

    However, the authors here make a mistake. They assume that people want to be correct: that they want to write papers that use correct science and give valid conclusions based on real evidence. As we saw above, that is not the case. Some people have made careers out of peddling lies to anyone who will listen… especially listeners with deep pockets. I cite here Dr. Soon (one of the fossil-fuel-funded authors alluded to above), the fellows and authors at the Discovery Institute, Dr. Andrew Wakefield, and many others who put personal profit above truth.

    Still, this is the ideal, and if we can teach regular people about it, these charlatans will have less influence over them.

    A common denominator seems to be missing contextual information or ignoring information that does not fit the conclusions, be it other relevant work or related geophysical data. In many cases, shortcomings are due to insufficient model evaluation, leading to results that are not universally valid but rather are an artifact of a particular experimental setup. Other typical weaknesses include false dichotomies, inappropriate statistical methods, or basing conclusions on misconceived or incomplete physics.

    I want to list the problems that the authors discovered in these papers.

    • improper hypothesis testing
    • incorrect statistics
    • a neglect of contextual information, such as relevant literature or other evidence at variance with their conclusions.
    • Several papers also ignored relevant physical interdependencies and consistencies.
    • There was also a typical pattern of insufficient model evaluation, where papers failed to compare models against independent values not used for model development (out-of-sample tests).
    • Insufficient model evaluation is related to over-fitting, where a model involves enough tunable parameters to provide a good fit regardless of the model skill. Another term for over-fitting is “curve fitting,” and several such cases involved wavelets, multiple regression, or long-term persistence null models for trend testing (see the over-fitting sketch below this list).
    • More stringent evaluation would suggest that the results yielded by several papers on the list would fail in a more general context. Such evaluation should also include tests for self-consistency or applying the methods to synthetic data for which we already know the answer.
    • False dichotomy was also a common theme, for example, when it is claimed that the sun is the cause of global warming, leaving no room for GHGs even though in reality the two forcings may coexist.
    • In some cases, preprocessing of the data emphasized certain features, leading to logical fallacies.
    • Other issues involved ignoring tests with negative outcomes (cherry picking) or assuming untested presumed dependencies; in these cases, proper evaluation may reduce the risk of such shortcomings.
    • Misrepresentation of statistics leads to incorrect conclusions, and contamination by external factors caused the data to represent aspects other than those under investigation.
    • The failure to account for the actual degrees of freedom also resulted in incorrect estimation of the confidence interval.
    • One common feature of contrarian papers was speculation about cycles, and the papers reviewed here reported a wide range of periodicities.
    • Spectral methods tend to find cycles, whether they are real or not, and it is no surprise that a number of periodicities appear when carrying out such analyses (see the spurious-cycle sketch below this list).
    • Several papers presented implausible or incomplete physics, and some studies claimed celestial influences but suffered from a lack of clear physical reasoning: in particular, papers claiming to find a climate dependence on the solar cycle length (SCL). Conclusions with a weak physical basis must still be regarded as speculative.
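
    To make the over-fitting and out-of-sample points concrete, here is a minimal sketch of my own (it is not the authors’ analytical tool, and every number in it is invented). It fits a synthetic series with a simple trend model and with a many-parameter polynomial, then checks both against data that were held back from the fit.

```python
# Over-fitting sketch (illustrative only; synthetic data, nothing from the paper).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a weak linear trend plus noise.
t = np.linspace(0.0, 1.0, 60)
y = 1.5 * t + rng.normal(scale=0.3, size=t.size)

# Hold back the last third of the record as the out-of-sample test.
t_fit, y_fit = t[:40], y[:40]
t_new, y_new = t[40:], y[40:]

for degree in (1, 9):  # a simple model vs. one with many tunable parameters
    coeffs = np.polyfit(t_fit, y_fit, degree)
    rmse_in = np.sqrt(np.mean((np.polyval(coeffs, t_fit) - y_fit) ** 2))
    rmse_out = np.sqrt(np.mean((np.polyval(coeffs, t_new) - y_new) ** 2))
    print(f"degree {degree}: in-sample RMSE {rmse_in:.2f}, "
          f"out-of-sample RMSE {rmse_out:.2f}")

# Typical outcome: the 9-parameter curve hugs the training data more closely,
# yet does far worse on the held-back points. The good fit is an artifact of
# the setup, not evidence of model skill.
```

    The flexible curve wins in-sample almost by definition; it is the out-of-sample comparison that exposes it, which is exactly the evaluation step found to be missing.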
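
    And for the point about spectral methods and cycles, here is a second sketch of mine (again, not from the paper): generate red noise with no periodic component at all, compute a periodogram, and see which “cycle” appears to dominate.

```python
# Spurious-cycle sketch: a periodogram of pure red noise (no real cycles in it).
import numpy as np

rng = np.random.default_rng(1)
n = 200  # say, 200 "years" of synthetic annual data

for trial in range(5):
    # AR(1) "red noise" with no periodic component whatsoever.
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = 0.7 * x[i - 1] + rng.normal()

    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0)  # cycles per "year"
    k = np.argmax(power[1:]) + 1       # skip the zero-frequency bin
    print(f"trial {trial}: strongest apparent cycle has a period of "
          f"about {1.0 / freqs[k]:.0f} years")

# Every trial reports some dominant periodicity, and the period changes from
# run to run: the "cycles" are features of the noise, not of any real forcing.
```

    This is why a reported periodicity means very little until it has been tested against a noise-only null model.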

    That’s quite a list of problems, though it is spread across many papers. Let me let the authors speak a little further on this.

    Perhaps the most common problem with the cases examined here was missing contextual information (the prosecutor’s fallacy (Wheelan 2013)), and there are several plausible explanations for why relevant information may be neglected. The most obvious explanation is that the authors were unaware of such facts. It takes experts to make proper assessments, as it requires scientific skills, an appreciation of both context and theory, and hands-on experience with computer coding and data analysis.

    I mentioned this almost continuously in my review of the anti-evolution book Darwin’s Doubt. Being ignorant of, or purposely rejecting, papers that discredit an author’s claim is not good science.

    When I was doing this kind of work, I was told to report those papers and then be prepared to explain why they don’t apply or why the original author is wrong… in detail. Ignoring them just makes it appear one has something to hide.

    The authors also report on another problem, one that I see in multiple fields.

    There were also a group of papers (Gerlich and Tscheuschner 2009; Lu 2013; Scafetta 2013) that were published in journals whose target topics were remote from climate research. Editors for these journals may not know of suitable reviewers and may assign reviewers who are not peers within the same scientific field and who do not have the background knowledge needed to carry out a proper review.

    In my own work, I reported on the engineer who promoted an ID paper examining fossils in an architecture and design journal. Of course, the journal also posted a disclaimer about the article.

    Continuing with the final bit of the abstract.

    We also argue that science is never settled and that both mainstream and contrarian papers must be subject to sustained scrutiny. The merit of replication is highlighted and we discuss how the quality of the scientific literature may benefit from replication.

    This. A thousand times this. Science is not settled. However, as with evolution and GMOs and vaccines and gravity and relativity, it would take a major upheaval to show that all the previous work is wrong: evidence that a new idea is correct and that it explains everything seen up to that point at least as accurately as the original idea.

    Think of it like this: Newton’s ideas about motion were shown to be wrong by Einstein’s relativity. However, in the vast majority of situations, Newton’s equations are still useful. This is because the difference between Newton’s equations and Einstein’s equations is, in most situations, so small as to be almost unmeasurable. That’s the kind of revolution we’re talking about here.
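
    To put a rough number on “almost unmeasurable,” here is a back-of-the-envelope sketch (my own arithmetic, not from the paper under review). The relativistic correction to Newtonian motion is governed by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2), and at everyday speeds gamma is spectacularly close to 1.

```python
# Back-of-the-envelope: how far the Lorentz factor strays from 1 at low speeds.
import math

C = 299_792_458.0  # speed of light in m/s

def gamma_minus_one(v: float) -> float:
    """Exact gamma - 1, rearranged to avoid subtracting nearly equal numbers."""
    beta2 = (v / C) ** 2
    root = math.sqrt(1.0 - beta2)
    return beta2 / (root * (1.0 + root))

for label, v in [("car on a highway (~30 m/s)", 30.0),
                 ("jet airliner (~250 m/s)", 250.0),
                 ("satellite in low orbit (~7,800 m/s)", 7_800.0)]:
    print(f"{label}: gamma - 1 is about {gamma_minus_one(v):.1e}")

# Roughly 5e-15 for the car and 3e-10 for the satellite: corrections far too
# small to notice in everyday life, which is why Newton's equations remain
# useful even though Einstein's are the more accurate description.
```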

    What’s really interesting is that these kinds of revolutions have occurred in evolution as well; the introduction of Evolutionary Developmental Biology, for example.

    There’s another issue that I want to bring up, one which applies to these contrarian papers (in both climate change and evolution discussions): many authors are so busy trying to discredit the current mainstream position that they don’t actually have a position of their own. We see this a lot from ID proponents. They will absolutely refuse to engage with another ID proponent who says something that they disagree with. Likewise with climate change.

    For example, here’s one documented by the authors of Skeptical Science.

    These kinds of flip-flops are common on Anthony Watts’ blog, which had a very schizophrenic six month period:

    And that’s when he’s not arguing that the surface temperature record is so contaminated that we don’t even know if the planet is warming.  Or that this supposedly unreliable data shows cooling.

    That’s just one example of the climate change denial process. I could list dozens more flip-flops just like it. One cannot argue that increasing carbon dioxide is good for the plant life of the planet and, at the same time, argue that humans are too insignificant to cause global change. Yet I’ve seen that very combination of arguments multiple times.

    In conclusion, we see that the minority opinion against current mainstream scientific ideas is often heavily flawed.

    While that does not mean that the mainstream scientific opinion is correct, it also does not mean that any contrarian viewpoints are correct.

    What does make the mainstream science viewpoint correct is evidence, lots and lots of evidence. Thousands of papers with different variables, different models, and different methods all supporting the same idea. Climate change is happening and caused by humans. Evolution happens and common descent is correct. Vaccines are safe. GMOs are safe.

    _________________________________

    [1] Apologies for my lack of normal citation; my PC has been in the ICU at the Microsoft Store since the Windows 10 install process destroyed it. Of course, a moron at that store further complicated the issue by, in two clicks, destroying my entire data drive, which was originally fine.

     

    References

    Anderegg WRL, Prall JW, Harold J, Schneider SH (2010) Expert credibility in climate change. PNAS 107:12107–12110

    Cook J et al (2013) Quantifying the consensus on anthropogenic global warming in the scientific literature. Environ Res Lett 8:024024

    Oreskes N (2004) The scientific consensus on climate change. Science 306:1686



    Article by: Smilodon's Retreat