As people who follow me on Twitter will know, I’ve recently been getting almost apoplectic about the way humanity — myself very much included — keeps making stupid, counterproductive, self-harming decisions, on every level from the tiniest choice to the most important. People are, for the most part, *really really really stupid* and oblivious to reality — and *EVEN WHEN YOU TAKE THAT INTO ACCOUNT* will still do really stupid stuff that’s easy for anyone (other than themselves) to see.
(And again, just to be clear, I am condemning myself as much as anyone else here. The day when I start to think of myself as actually being able to think straight is the day you’ll know there’s no hope left for me at all.)
Of course, some of this is because of broken incentive systems, lack of information, cognitive biases, and so on — but there’s something more to it than that. The way people *just can’t think properly* has been bugging me for some time. But now, I think I’ve at least found out *why* we’re all so stupid.
I was googling for an old Grant Morrison interview about Hypertime last night, clicked the wrong link, and found myself down a link-following rabbit hole which led me to this paper. And that explained *a lot*.
Traditionally, cognitive scientists have argued that we’ve evolved to perceive the world around us more or less correctly. The argument goes that you have a model in your head of how the world works, you take in new information from outside and update that model, and that process iterates until the model more or less matches the real world. (You can model this as updating using Bayes’ rule.) The argument — which I always accepted — has been that this could have evolved quite simply. If such a thing as a tiger which eats people exists in the real world, and you have a model in your brain which says “there’s a tiger over there. Tigers eat people. Run away”, you’re more likely not to be eaten by a tiger, and so you’re more likely to survive and reproduce. So brains should evolve to create models of the world that contain tigers where there are real tigers, and no tigers where there are none.
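To make that traditional picture concrete, here’s a minimal sketch of that kind of Bayesian updating. Every probability in it is invented purely for illustration:

```python
# A minimal sketch of the Bayesian-updating picture of perception.
# Every probability here is invented purely for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    p_evidence = (prior * p_evidence_if_true
                  + (1 - prior) * p_evidence_if_false)
    return prior * p_evidence_if_true / p_evidence

# Prior belief that there's a tiger nearby: low.
p_tiger = 0.01

# You glimpse something orange and stripey. Tigers almost always look
# like that; non-tigers rarely do.
p_tiger = bayes_update(p_tiger, 0.9, 0.05)
print(f"After one glimpse: {p_tiger:.2f}")   # roughly 0.15

# A second, closer look pushes the belief much higher.
p_tiger = bayes_update(p_tiger, 0.9, 0.05)
print(f"After two glimpses: {p_tiger:.2f}")  # roughly 0.77
```

Each new observation drags the model closer to whatever the evidence actually supports. That’s the “perception tracks reality” story that the paper I found sets out to undermine.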
In the paper I linked, though — *Natural selection and veridical perceptions*, by Justin T. Mark, Brian B. Marion, and Donald D. Hoffman — the authors make a different argument. They claim that what we see, what we perceive, is a model that may not actually represent anything that really exists.
They create a simple game-theoretical model of an environment and put two simulated agents in it. One agent, “truth”, takes in all the information it can find before making a decision based on a complex, accurate model of the world. The other, “simple”, takes in a tiny bit of information and makes its decision based on a simpler, less accurate model of the world.
If you put the two of those in the same environment, there are many cases where the simple agent outcompetes the truth agent into extinction. Not all cases, but at least as many as the other way round (it depends on how scarce resources are, how complex the environment is, and so on).
But then they add a third agent, “interface”. This agent, unlike the other two, *doesn’t have a model that represents what’s actually in the world at all*. It has a world model that lumps together wildly different things, grouping them not by any category they actually belong to, but by how it needs to respond to them.
To take our tiger analogy further: the “truth” agent doesn’t make a decision until it has worked out that something is a tiger. The “simple” agent sees something cat-shaped, but doesn’t know whether it’s a tiger far away or a cat close by before it makes its decision. The “interface” agent, though, sees a tiger and a bomb as the same thing. It creates a category of “threatening-type thing”, sticks the tiger and the bomb both in there, and treats them the same way: it literally can’t tell the difference, and so it might decide to try to defuse the tiger or shoot the bomb with a sedative.
And in many of their tests — not all, but many of them — the interface model dominates both “truth” and “simple”, to the point that “truth” and “simple” become extinct and only “interface” survives.
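To get a feel for how that result can happen, here’s a toy simulation in the same spirit. This is my own drastic simplification, not the authors’ actual model: the payoff curve, the information costs, and the perception rules are all invented. But it shows the mechanism: when fitness is a non-monotonic function of what’s really out there, cheap perception tuned to payoff can beat expensive perception tuned to truth.

```python
import math
import random

# Toy sketch, loosely in the spirit of the paper (not the authors'
# actual model; every number here is invented). Territories hold a
# resource quantity, and the payoff of a quantity is non-monotonic:
# around 50 units is ideal, too little or too much is bad.

def payoff(quantity):
    """Gaussian-shaped utility, peaking at 50 units."""
    return math.exp(-((quantity - 50) ** 2) / (2 * 15 ** 2))

TRUTH_COST = 0.20   # invented cost of perceiving everything exactly
SIMPLE_COST = 0.10  # invented cost of perceiving quantities coarsely

def forage(agent, territories):
    """One round: the agent picks a territory and collects its payoff."""
    if agent == "truth":
        # Sees exact quantities and picks the genuinely best territory,
        # but pays heavily for all that information.
        return payoff(max(territories, key=payoff)) - TRUTH_COST
    if agent == "simple":
        # Truthful but low-resolution: sees quantities rounded to the
        # nearest 25 units, and pays a smaller information cost.
        pick = max(territories, key=lambda t: payoff(round(t / 25) * 25))
        return payoff(pick) - SIMPLE_COST
    # "interface": sees no quantities at all, only a crude category of
    # "looks good" versus "looks bad", tuned to payoff rather than truth.
    looks_good = [t for t in territories if payoff(t) > 0.5]
    return payoff(random.choice(looks_good or territories))

def mean_fitness(agent, trials=100_000):
    total = 0.0
    for _ in range(trials):
        territories = [random.uniform(0, 100) for _ in range(3)]
        total += forage(agent, territories)
    return total / trials

for agent in ("truth", "simple", "interface"):
    print(f"{agent:10s} {mean_fitness(agent):.3f}")
# With these invented parameters, "interface" comes out on top,
# "simple" second, and "truth" last.
```

In the actual paper the strategies compete in an evolutionary game and the authors track which ones go extinct, but the point survives my simplification: seeing the truth costs more than it earns, and the agent that perceives only payoff-relevant categories wins.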
Put simply, there is no consistent evolutionary pressure that produces actual intelligence. Intelligent reasoning and perception can be favoured sometimes, but more often evolution will produce a particular kind of stupidity — the kind that ignores details, lumps completely unrelated things together and literally can’t tell the difference, and makes snap decisions rather than waiting for the facts to come in. The kind that creates a model of how the world works that has only a limited connection to any actually existing reality, and then never bothers to update that model based on new information.
We probably haven’t evolved to see the world as it really is. Rather, we’ve evolved to have prejudices based on no facts. We’ve evolved to keep doing the same thing over and over even though it always goes wrong. We’ve evolved, in fact, to have the kind of brain that says “well, I’m entitled to my opinion” whenever it’s confronted with its mistakes, and to carry on believing exactly the same thing it did before.
That’s not the only thing our brains have evolved to do — in game theory, mixed-strategy players usually beat single-strategy ones, and so some of us can, sometimes, a little bit, occasionally learn from our mistakes and understand things that are actually happening in the world. Some of us can occasionally say “hang on, that makes no sense”, and even actually change our behaviour.
But we’ve not evolved to do that well, or to do it often. So when you think of the old SubGenius slogan “see how stupid the average person is? Well, by definition, half of them are more stupid than that!”, don’t think of it as condemning people.
The amazing thing isn’t how stupid people are. The amazing thing, given the evolutionary pressures, is that anyone, ever, does anything clever at all.
This blog post was brought to you by the generosity of my backers on Patreon. Why not join them?