As people who follow me on Twitter will know, I’ve recently been getting almost apoplectic about the way humanity — myself very much included — keeps making stupid, counterproductive, self-harming decisions, on every level from the tiniest choice to the most important. People are, for the most part, *really really really stupid* and oblivious to reality — and *EVEN WHEN YOU TAKE THAT INTO ACCOUNT* will still do really stupid stuff that’s easy for anyone (other than themselves) to see.
(And again, just to be clear, I am condemning myself as much as anyone else here. The day when I start to think of myself as actually being able to think straight is the day you’ll know there’s no hope left for me at all.)
Of course, some of this is because of broken incentive systems, lack of information, cognitive biases, and so on — but there’s something more to it than that. The way people *just can’t think properly* has been bugging me for some time. But now, I think I’ve at least found out *why* we’re all so stupid.
I was googling for an old Grant Morrison interview about Hypertime last night, clicked the wrong link, and found myself down a link-following rabbit hole which led me to this paper. And that explained *a lot*.
Traditionally, cognitive scientists have argued that we’ve evolved to perceive the world around us more or less correctly. The argument goes that you have a model in your head of how the world works, you take in new information from outside and update that model, and that process iterates until the model more or less matches the real world. (You can model this as updating using Bayes’ rule.) The argument — which I always accepted — has been that this could have evolved quite simply. If such a thing as a tiger which eats people exists in the real world, and you have a model in your brain which says “there’s a tiger over there. Tigers eat people. Run away”, you’re more likely not to be eaten by a tiger, and so you’re more likely to survive and reproduce. So brains should evolve to create models of the world that contain tigers where there are real tigers, and which contain no tigers where there are none.
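(To make that Bayes’ rule aside concrete, here’s a toy worked example. The numbers below, a low prior belief in a tiger and a rustle in the grass as evidence, are made up purely for illustration; they’re not from the paper or anywhere else.)

```python
# Toy Bayesian update, with made-up numbers: how much should a rustle in the
# grass shift your belief that there's a tiger over there?
prior = 0.01                   # P(tiger) before you hear anything
p_rustle_if_tiger = 0.9        # P(rustle | tiger)
p_rustle_if_no_tiger = 0.1     # P(rustle | no tiger)

# Bayes' rule: P(tiger | rustle) = P(rustle | tiger) * P(tiger) / P(rustle)
p_rustle = p_rustle_if_tiger * prior + p_rustle_if_no_tiger * (1 - prior)
posterior = p_rustle_if_tiger * prior / p_rustle

print(round(posterior, 3))  # ~0.083: the model shifts towards "tiger", but one noisy observation doesn't settle it
```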
In the paper I linked, though — Natural selection and veridical perceptions by Justin T. Mark, Brian B. Marion, and Donald D. Hoffman — the authors make a different argument. They claim that what we see, what we perceive, is a model that may not actually represent anything that really exists.
They create a simple game-theoretical model of an environment, and put two simulated agents in it. One agent, “truth”, takes in all the information it can find before making a decision based on a complex, accurate model of the world. Another, “simple”, takes in a tiny bit of information, and makes its decision based on a simpler, less accurate model of the world.
If you put the two of those in the same environment, there are many cases where the simple model will outcompete the truth model and drive it to extinction. Not all cases, but as many as the other way round (it depends on how scarce resources are, how complex the environment is, and so on).
But then they add a third agent, “interface”. This agent, unlike the other two, *doesn’t have a model that represents what’s actually in the world at all*. It has a world model that lumps wildly different things together, not according to any categories they actually belong to in the world, but according to what they mean for the agent itself.
To take our tiger analogy further, the “truth” model doesn’t make a decision until it sees that something’s a tiger. The “simple” model sees something cat-shaped, but doesn’t know if it’s a tiger that’s far away or a cat that’s close by before it makes its decision. This third agent sees a tiger and a bomb as the same thing and can’t distinguish between them. It creates a category of “threatening-type thing”, and sticks the tiger and the bomb both in there, and treats them the same way — it literally can’t tell the difference, and so it might decide to try to defuse the tiger or shoot the bomb with a sedative.
And in many of their tests — not all, but many of them — the interface model dominates both “truth” and “simple”, to the point that “truth” and “simple” become extinct and only “interface” survives.
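I can’t reproduce their actual simulations here, and what follows isn’t an attempt to. It’s just a minimal sketch of the general idea, with a fitness curve, perception costs and choice rules I’ve invented purely for illustration: if seeing the truth costs more than seeing a crude “good for me / bad for me” category, and if fitness isn’t a simple “more is better” function of the true state of the world, then the interface-style strategy can end up taking over the whole population.

```python
# Toy sketch only -- NOT the model from Mark, Marion & Hoffman's paper.
# All numbers (fitness curve, perception costs, choice rules) are invented.
import math
import random

def fitness(resource):
    # Fitness peaks at an intermediate resource level, so "more" isn't "better".
    return math.exp(-((resource - 50.0) ** 2) / (2 * 15.0 ** 2))

# Perceiving more of the truth costs more.
COSTS = {"truth": 0.20, "simple": 0.05, "interface": 0.05}

def choose(strategy, a, b):
    """Pick one of two territories with true resource levels a and b."""
    if strategy == "truth":
        # Sees the true quantities, and ranks the world by them.
        return a if a > b else b
    if strategy == "simple":
        # Sees only a coarse version of the truth: above or below the midpoint.
        return a if (a >= 50) >= (b >= 50) else b
    # "interface": sees nothing about the quantities at all, only a payoff category.
    return a if (fitness(a) > 0.5) >= (fitness(b) > 0.5) else b

def expected_payoff(strategy, trials=20000):
    total = 0.0
    for _ in range(trials):
        a, b = random.uniform(0, 100), random.uniform(0, 100)
        total += fitness(choose(strategy, a, b)) - COSTS[strategy]
    return total / trials

payoffs = {s: expected_payoff(s) for s in COSTS}

# Crude replicator dynamics: each strategy's share of the population grows in
# proportion to its payoff relative to the population average.
shares = {s: 1 / 3 for s in COSTS}
for generation in range(200):
    mean = sum(shares[s] * payoffs[s] for s in shares)
    shares = {s: shares[s] * payoffs[s] / mean for s in shares}

print(payoffs)
print(shares)  # with these made-up numbers, "interface" ends up with nearly the whole population
```

(Change the costs or the shape of the fitness curve and “truth” can win instead; the outcome really does depend on the parameters, which fits with the paper getting a mix of results rather than interface winning every time.)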
Put simply, there is no consistent evolutionary pressure that produces actual intelligence. Intelligent reasoning and perception can be favoured sometimes, but more often evolution will produce a particular kind of stupidity — the kind that ignores details, lumps completely unrelated things together and literally can’t tell the difference, and makes snap decisions without waiting for the facts to actually be in. The kind that creates a model of how the world works that has only a limited connection to any actually existing reality, and then never bothers to update that model based on new information.
We probably haven’t evolved to see the world how it really is. Rather we’ve evolved to have prejudices based on no facts. We’ve evolved to keep doing the same thing over and over even though it always goes wrong. We’ve evolved, in fact, to have the kind of brain that says “well, I’m entitled to my opinion” whenever it’s confronted with its mistakes, and to carry on believing exactly the same thing it did before.
That’s not the only thing our brains have evolved to do — in game theory, mixed-strategy players tend to beat single-strategy ones — and so some of us can, sometimes, a little bit, occasionally learn from our mistakes and understand things that are actually happening in the world. Some of us can occasionally say “hang on, that makes no sense”, and even actually change our behaviour.
But we’ve not evolved to do that well, or to do it often. So when you think of the old SubGenius slogan “see how stupid the average person is? Well, by definition, half of them are more stupid than that!”, don’t think of it as condemning people.
The amazing thing isn’t how stupid people are. The amazing thing, given the evolutionary pressures, is that anyone, ever, does anything clever at all.
This blog post was brought to you by the generosity of my backers on Patreon. Why not join them?
So the social theorists who include intelligence as part of ‘survival of the fittest’ are wrong. If I understood your article correctly (which was very interesting, by the way), then intelligence does not particularly enhance evolutionary reproduction. That seems like a bombshell statement; I have difficulty wrapping my mind around it.
Hi, I recommend you read just about anything on NLP (regardless of whether you think it has any merit), The Heretics (Storr), and Predictably Irrational…
Whilst I hesitate to approve of anything Scott Adams (of Dilbert) argues – partly because I honestly can’t tell if he’s attempting some extreme form of performance art in the extreme wingnuttery he promotes – one of his arguments, that the core problem with the world is that it was designed by geniuses* but has to be lived in by the rest of us, seems similar. So yeah, the whole thing is a ludicrously improbable chain of accidents that still hasn’t ended, and how any of it happens at all is extraordinary.
*and he then adds a note to point out that those geniuses are still only geniuses in the one specific field, and are as bad as the rest of us when it comes to everything else.
I am reminded of the Arthur C. Clarke quote “It has yet to be proven that intelligence has any survival value”.
I’m not as cynical as that, though. As both a computer programmer and an evolutionary biologist, my immediate reaction is to be sceptical about the article you linked. The results of evolutionary simulations like this are very dependent on parameters, and it’s always possible to get whatever outcome you want by tweaking those parameters.
Apart from anything else, if the conclusion of Mark et al. (2010) is correct, it’s hard to see how anything like our present level of intelligence could have evolved.
What’s your favorite Grant Morrison interview, Andrew? I’m not only a big, big fan of Morrison’s work but I’ve also spent countless hours reading conversations with him. For example, just last week I found this relatively recent one:
http://boingboing.net/2017/04/17/__trashed-10.html
It’s a nice interview although it’s mostly about Grant’s interests in the esoteric (magic).
I can relate to him seeking new challenges with the editorship at Heavy Metal and his TV work because the latter’s potential rewards are so much greater than in comics, but I very much miss my fix of monthly Morrison!
Do you have any book-on-comics plans for the mid- to far future, Andrew? I bought and enjoyed all your books in that field.
My favourite interview with him is probably the one a couple of the Mindless Ones did, where he talked about how Mark Millar destroyed his faith in humanity ;)
I’ve been trying to get a book on Cerebus done for a couple of years now, but it’s not really working at the moment, though I’ll keep trying. But to an extent, I don’t plan my comics books the way I do some others — they mostly rely on me getting into a particular mental state (probably, though I’ve not been diagnosed, actual mania) and spewing out tens of thousands of words in almost no time at all. I sit down, look up, and realise I’ve accidentally a book again. This seems to happen about once every two years, so you can probably expect one this year as the last one was in May 2015, but it’s not something I can plan for.
Oh, yes the Mindless Ones interview is fun:
http://mindlessones.com/2011/06/30/grant-morrison-supergods-interview-transcript/
I actually enjoyed some of Mark Millar’s semi-recent projects, especially MPH. I wonder if it’s coincidence/a convenient plot device that at least two of them feature seemingly trusty companions who turn out to be enemies?
There was just one thing I didn’t like about your book on “Multiversity”: I would have liked it to be longer :-)
I’m already looking forward to your posts on Jack Kirby and — although I’m probably more interested in his fascinating biography than in his art — Elvis.
Thanks — it probably would have been longer, but I genuinely did write it in two days in the middle of an election campaign ;)
(This is what I mean about the books just sort of coming out)