You can read Part 1 of this series here and Part 2 here.
Why agnosticism is irrational
You need to understand two things to see why agnosticism is fundamentally irrational. The first is cognitive scale blindness: our human inability to parse phenomena or objects whose values vary too greatly. The second is closely related: what do we mean by true? The former informs how to think about the plausibility value of god; the latter, which word really corresponds to that value.
Cognitive Scale Blindness
In the previous posts I have argued that agnosticism is a kind of special pleading, and in one comment I referred to “cognitive scale blindness”. I postulate that cognitive scale blindness is why rather smart people like Richard Dawkins get it wrong: it’s a sort of cognitive illusion. What is scale blindness? Let’s try a thought experiment.
Imagine a ping pong ball. You can think of it resting in your palm. How many could you hold? You might guess eight or twelve without elaborate stacking, and you'd be close to correct. How many would fit in a gumball machine? Here you might guess 40, 80, or even 100, depending on how large a machine you've envisioned. You'd probably be a bit off, but not by much. Reasoning about these objects and answering my silly questions is hardly challenging.
Next, I ask how many ping pong balls would fit in an elevator. Now your guess is harder, and likely far less accurate. How about the number of such balls needed to displace the volume of the Golden Gate Bridge? How about North America? Planet Earth? The Sun? The Solar System? The Milky Way? The local galactic cluster? What if I asked you to compare these various quantities, sans mathematical calculation? Could you in any sense tender an intelligible answer, or would you just be guessing?
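To make the span of the thought experiment concrete, here is a rough back-of-the-envelope sketch in Python. Every volume below is my own assumption for illustration (a regulation ball is 40 mm across; the gumball machine, elevator, and packing fraction are plausible guesses, not measurements):

```python
import math

# Rough, assumed figures for illustration only.
BALL_VOLUME = (4 / 3) * math.pi * 0.020**3   # 40 mm ball, in m^3 (~3.35e-5)
PACKING = 0.64                               # random close-packing fraction

volumes_m3 = {
    "gumball machine": 0.005,     # assumed ~5 litre globe
    "elevator car":    7.5,       # assumed ~2 m x 1.5 m x 2.5 m
    "Earth":           1.08e21,   # approximate volume of the planet
}

for name, volume in volumes_m3.items():
    count = volume * PACKING / BALL_VOLUME
    print(f"{name}: ~{count:.1e} balls")
```

Under these assumptions the gumball machine holds on the order of a hundred balls, the elevator roughly a hundred thousand, and the Earth around 10^25: a jump of more than twenty orders of magnitude that no unaided intuition tracks.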
Our brains evolved to deal with the scales and quantities relevant to our adaptive problems and the ecological stage we've lived on. We have no intuitive grasp of super-massive scales. Our language reflects this truth as well. What words would you use to compare the size of the ping pong ball and the gumball machine? "It's bigger." OK, then how do you describe (relative to the same ball) the size of a bridge? "Much bigger," maybe? You can add modifiers, but they lose their meaning quickly as you try to cover the differences between a ping pong ball and a planet, a star, a solar system, and so on. We simply don't have words that can uniquely describe these relationships.
We've invented logarithmic scales, which compress massive state spaces into small numbers that we can understand. Familiar examples include the Richter magnitude scale and the decibel scale. We need them, too. One earthquake might release energy equivalent to 480 grams of dynamite; another might have the force of 240 trillion grams. These are hard to compare intuitively, which is why on the Richter scale they register roughly 1.0 and 8.8 respectively. People can readily grasp 6.6 versus 7.5, but have a much harder time chewing on millions versus billions or trillions of tonnes of TNT.
Sizes of large objects are often described in terms of familiar objects people have experienced, partly to avoid large numbers that mean nothing to people: Jupiter's Great Red Spot is the size of two to three Earths; the average aircraft carrier is as long as four football fields. Would it be more meaningful to tell you the Great Red Spot is 40,000 kilometers wide, or that an average carrier is 1,092 feet long? Probably not.
I've been discussing physical phenomena, such as objects and energy, whose range of values is so vast we have no intuitive ability to handle it. We should predict that people will have trouble with any state space populated by salient items that vary from each other by enormous amounts. There are at least two such non-physical state spaces: one is the number line itself; another is the perceived plausibility of constructs.
The Plausibility Scale
The things humans are capable of imagining are probably innumerable, and they can vary in character nearly or literally infinitely. This has no doubt been important to our evolution: you can't invent a new tool without being able to imagine something you've never seen in the world, something which might be possible or might not be. Since there are an infinite number of wrong ideas, and you don't have an eternity to waste, our brains should come equipped with rough but functional plausibility gauges that assign some likelihood to ideas. This faculty can be imperfect, as long as it tends to prevent us from wasting our time on beliefs or pursuits that can be ruled out with available evidence, inference, or prior knowledge.
So it is that we feel some notions are wrong and we similarly feel that some notions are true. What does true mean, exactly? We know it isn't absolute truth, even at a cognitive level, because the plausibility feeling is just a filter that could only work probabilistically at best. Moreover, people change their minds even about things they used to feel certain of. We know from the previous section that we're built to reason about a relatively simple and local scale of state spaces. We also know that since notions vary infinitely, there is an infinite expanse of potential plausibility values that those notions might take.
We can then expect our minds to carve the plausibility state space into a handful of groups, the same way we think of objects' sizes: minuscule, tiny, small, slight, large, massive, colossal. We should expect this scale only really to describe notions at plausibility levels close to those that matter to humans, that is, levels which correspond to probabilities relevant to our choices. It would make no real difference to you whether the weatherman said today's chance of rain is 1/1,000 or 1/10,000,000: you're not bothering with the umbrella. This table illustrates how our intuitions might carve a state space of infinitely varying values into useful bits that correspond to our implicit understanding of veracity (I'm calling it the Sagan Plausibility Scale).
As the index increases, the plausibility drops. Note that the scale expands downward infinitely without ever reaching an absolute end, just as physical sizes spiral up beyond human concern or native understanding. The scale is also logarithmic in nature, even though plausibility isn't mathematically determinate, of course. Somewhere between 2 and 4 lies the bullshit line of demarcation (LOD). When an idea has evidence or backing in reason, it gets bumped above the LOD; this is the case even if it fails to reach 1.0. Ideas that require many dubious contingencies, seem to self-contradict, contradict existing knowledge, or face other forms of counter-evidence get pushed down below the LOD.
Our concepts about veracity, and the words we use to represent them, can now be precisely defined. Words like true, right, correct, valid, truth, factual, and reality all describe concepts above the LOD, with stronger terms for concepts further above the line. Concepts near the line get terms (and associated mental confidence levels) such as possible, perhaps, maybe, might, and could. Finally, concepts below the LOD are nonsense, false, untrue, invalid, wrong, incorrect, bullshit, and so on; again, concepts further from the LOD get the stronger terms. NOTE: I am not assigning these values; I am observing the correspondence between perceived plausibility and the concepts and words people actually reflect in their thinking. This perfectly matches the poverty of words for physical size: one idea can be "nonsense" for sitting at 6.0 on the scale, while a much more implausible notion, say one occupying 30, has no language of its own. It's still just "nonsense," the same way that California is "large" and Jupiter is "large."
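The word-bucketing just described can be sketched as a toy model. Everything here is an assumption made for illustration: I read the index as a rough negative order of magnitude of probability, and I place the LOD at 3 because the post locates it "somewhere between 2 and 4":

```python
def plausibility_word(index: float, lod: float = 3.0) -> str:
    """Toy bucketing of a plausibility index into the everyday
    vocabulary described above. Higher index = less plausible;
    the index is read loosely as probability ~ 10**(-index)."""
    if index < lod - 1:
        return "true"        # comfortably above the line of demarcation
    if index <= lod + 1:
        return "maybe"       # hovering near the LOD
    return "nonsense"        # below the LOD; extra distance adds no new word

print(plausibility_word(0.2))   # "true"
print(plausibility_word(3.0))   # "maybe"
print(plausibility_word(6.0))   # "nonsense"
print(plausibility_word(30))    # "nonsense" -- no stronger word exists
```

The last two calls are the whole point: 6.0 and 30 differ by a vast amount of implausibility, yet our vocabulary hands back the same word for both.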
I've labelled the God hypothesis 6.7 on the scale. This is arbitrary; perhaps it should be 6.4 or 39.4. None of this makes any difference, because as long as it sits below the LOD, the word and concept we use to describe it is the same: nonsense. Agnosticism is therefore irrational because the God hypothesis corresponds to some value well below the LOD; the specific value is irrelevant. You may object that the God hypothesis belongs in the 2-4 range. In the next part I will present evidence that it does not. For now, let us turn back to Dawkins' own scale.
Dawk’s Eye View
In his polemic The God Delusion, Richard Dawkins admits to his agnosticism "in practice." He provides the following scale to describe beliefs about God.
1. Strong theist. 100 per cent probability of God. In the words of C. G. Jung, ‘I do not believe, I know.’
2. Very high probability but short of 100 per cent. De facto theist. ‘I cannot know for certain, but I strongly believe in God and live my life on the assumption that he is there.’
3. Higher than 50 per cent but not very high. Technically agnostic but leaning towards theism. ‘I am very uncertain, but I am inclined to believe in God.’
4. Exactly 50 per cent. Completely impartial agnostic. ‘God’s existence and non-existence are exactly equiprobable.’
5. Lower than 50 per cent but not very low. Technically agnostic but leaning towards atheism. ‘I don’t know whether God exists but I’m inclined to be sceptical.’
6. Very low probability, but short of zero. De facto atheist. ‘I cannot know for certain but I think God is very improbable, and I live my life on the assumption that he is not there.’
7. Strong atheist. ‘I know there is no God, with the same conviction as Jung “knows” there is one.’
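The scale above frames belief as a position on a linear probability axis. A quick sketch shows what such an axis hides at the low end; the probabilities attached to each position here are hypothetical numbers I've chosen purely for illustration:

```python
import math

# Hypothetical probabilities for a "de facto atheist" and for two
# people who would both call themselves a 7 on a linear scale.
probabilities = {
    "position 6 (de facto atheist)": 1e-2,
    "position 7, reading A":         1e-9,
    "position 7, reading B":         1e-30,
}

for label, p in probabilities.items():
    # Linear (percentage) view vs. logarithmic view of the same number.
    print(f"{label}: {p:.0%} on a linear axis, log10 = {math.log10(p):.0f}")
```

On the linear percentage axis both readings of position 7 round to 0%, as if they were identical; in log space they sit 21 orders of magnitude apart. That discarded information is exactly what a logarithmic plausibility scale preserves.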
By now, I hope the psychological and philosophical errors that Dawkins (and almost everyone else) tends to make are obvious. First, his scale is linear: 4 is just as far from 1 as it is from 7. As I have shown, a percentage is not an appropriate way to describe a measure of plausibility, just as we don't use a percentage scale for decibels or earthquakes; that would do the description an injustice and ignore key information. Second, it reads as if there were a difference between 6 and 7. There is literally no difference: 6 is described as low "but short of zero," but as we've seen, all notions are short of zero. Positions 1 and 7 fall short of absolute certainty too, which is why people quite commonly convert to and de-convert from religion. Third, Dawkins fails to justify the variance between how he (or people generally) describe beliefs about scientific ideas and beliefs about God. About evolution Dawkins says,
You cannot be both sane and well educated and disbelieve in evolution. The evidence is so strong that any sane person has got to believe in evolution.
and
It is absolutely safe to say that, if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid, or insane.
I agree. He's right. Of course, Richard Dawkins knows that since a trickster deity can't be ruled out, there is a nonzero chance evolution is wrong. Even so, he feels confident enough to declare evolution to be, simply put, true. Absolute certainty is neither relevant nor necessary for him to hold that conviction of truth. If that can be true of evolutionary theory, then it can be true of atheism, too. You protest, though: evolution is a science with solid evidence, and atheism has none! You're wrong. In the next installment I will speak to that point. But I will let Richard himself tell you just how I will do it, by way of his account of how biologists did the same for evolution: