Sometimes serendipity happens. I was trying to think of a way to link together a couple of sections of the Hyperpost book, when I found this old post from Scott Aaronson’s blog Shtetl-Optimized.
In it, Aaronson talks about how he’d noticed that there was a lot of overlap between Libertarians and proponents of the Many-Worlds Hypothesis in quantum physics, and had tried to figure out why:
Some connections are obvious: libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments)…
My own hypothesis has to do with bullet-dodgers versus bullet-swallowers. A bullet-dodger is a person who says things like:
“Sure, obviously if you pursued that particular line of reasoning to an extreme, then you’d get such-and-such an absurd-seeming conclusion. But that very fact suggests that other forces might come into play that we don’t understand yet or haven’t accounted for. So let’s just make a mental note of it and move on.”
Faced with exactly the same situation, a bullet-swallower will exclaim:
“The entire world should follow the line of reasoning to precisely this extreme, and this is the conclusion, and if a ‘consensus of educated opinion’ finds it disagreeable or absurd, then so much the worse for educated opinion! Those who accept this are intellectual heroes; those who don’t are cowards.”
I think he’s on to something, but I think there’s a second aspect, which is what happens when those ideas actually hit reality.
Because there is one big difference between Libertarianism and the Many-Worlds Hypothesis – one has immediate real-world consequences, and the other doesn’t. And that means that it is no longer a purely intellectual exercise.
Leaving aside whether the claims for Libertarianism (of the Ayn Rand type, which is what Aaronson is referring to) stack up logically, assume for a moment that you believe them to be correct: should you *act* as if they are? To take Aaronson’s example, should we privatise the fire service?
If you’re a libertarian, you believe the answer should be yes – that privatising the fire service would result in fewer fires, and in those fires being fought more cheaply. But what if you’re wrong? If you’re wrong, then the result would be people – potentially a lot of people – losing their homes.
So there’s a second level of calculation to be done here – how sure are you of your own reasoning ability and of the information (your priors, in Bayesian terms) you use to come to your conclusions? *When you factor in the probability of your being wrong*, does the expected benefit if you’re right outweigh the expected loss if you’re wrong?
Now, on this blog I often fall into the ‘bullet-biter’ side of things *when talking about ideas with no real-world immediate consequences*, because it’s both intellectually right and more interesting. Take the Many-Worlds Hypothesis. I consider this the most likely of the various explanations of quantum theory I’ve read, and would put my confidence in that judgement at about 80% – I’m a bullet-biter there, and proud of it.
And I’m a bullet-biter when it comes to certain forms of alternative medicine. I’m convinced from the experimental evidence, for example, that taking certain vitamin supplements in large doses will massively decrease the risk of cancer, and have stated that on this blog too. And again, I’d put my confidence in that at about 80% (I rarely put my confidence in *anything* much above that).
Now, the downside of taking vitamins is a cost of maybe a pound a day and – if you believe the very worst reports, which as far as I can see have no evidentiary basis, but if we’re assuming I’m wrong then we’re assuming I’m wrong – a very small increased risk of kidney stones. The benefit, if I’m right, is not getting cancer. An 80% chance of ‘not getting cancer’ outweighs a 20% chance of a 1% increase in kidney stones, so it’s worth a pound a day to me to put my money where my mouth is and actually take the vitamins.
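That expected-value calculation can be made explicit. Here’s a minimal sketch of the decision rule – act on a belief only if the expected gain from being right outweighs the expected harm of being wrong plus the cost of acting. The function name and all the numbers are illustrative stand-ins on an arbitrary utility scale, not real medical figures:

```python
# Illustrative "should I act on this belief?" check.
# All utilities are hypothetical numbers on an arbitrary scale.

def act_is_worthwhile(p_right, benefit_if_right, harm_if_wrong, cost_of_acting):
    """True if expected gain from acting exceeds expected loss plus the cost."""
    expected_gain = p_right * benefit_if_right
    expected_loss = (1 - p_right) * harm_if_wrong
    return expected_gain > expected_loss + cost_of_acting

# Avoiding cancer is worth vastly more than a small kidney-stone
# risk or a pound a day, so even 80% confidence justifies acting.
print(act_is_worthwhile(
    p_right=0.8,            # stated ~80% confidence
    benefit_if_right=1000,  # 'not getting cancer' (arbitrary units)
    harm_if_wrong=10,       # small increased kidney-stone risk
    cost_of_acting=1,       # roughly a pound a day, same units
))  # → True
```

The bomb-tester case comes out the other way: however confident you are, when the harm-if-wrong term is effectively infinite (dying, or other people losing their homes), the inequality fails for any sane inputs.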
On the other hand, one can come up with a real-world test for the Many-Worlds Hypothesis. If it’s true then, were I to stand at ground zero of a nuclear weapons test, I should expect to live through it. There would be a googolplex or so universes where I’d die instantly, but I would not experience those, because I’d die too quickly. On the other hand, there’d be a one-in-a-googolplex chance of me surviving, which according to Many-Worlds means there’s a universe where I *would* survive. That would be the only one I’d experience, so from my own point of view I’d survive.
But even though I am persuaded by the Many-Worlds hypothesis, I’m not going to try that one out.
However, there are people out there who *would* do it, who would say “No, I’ll be fine! Drop the bomb!” – let’s call them bomb-testers.
And I think while being a bullet-biter can be a good thing, being a bomb-tester never is.
A bullet-biter might say “I’m convinced the Singularity is coming, but I’ll give some money to Greenpeace just in case” while the bomb-tester would say “I’m convinced the Singularity is coming, so I’m not going to support environmental protection measures, because we’ll be gods in twenty years anyway”.
A bullet-biter might say “I’m convinced the Bible is literally true, but I’m not going to hurt anyone who thinks differently”. A bomb-tester would say “I’m convinced the Bible is literally true, so I’ll persecute homosexuals”.
I think a lot of people – particularly in the ‘skeptic’ community – think of themselves as being bullet-biters when they’re actually bomb-testers. They’ve reached a logical conclusion, and are going to act on that and damn the consequences. This is why some people say Richard Dawkins and fundamentalist Christians are the same kind of person – not because their beliefs are equally unjustifiable, but because they are both certain enough of their own rightness that they’ll act on it even when the downside of that action looks to the rest of us far worse than whatever upside they believe in.
Which is not to say that “acting on one’s beliefs” is a bad thing. One reason I have more respect for Eliezer Yudkowsky (of Less Wrong) than for other Singularitarians is that he’s willing to act on his beliefs (even though I don’t find his arguments convincing, and think he has more than a little of a Messianic streak at times). But his actions *take into account the possibility he’s wrong* – he’s acting in a way to minimise expected harm. If he’s right and he doesn’t act, the world will end. If he’s wrong and he does act, then he wastes his time and looks a fool. Were I to find his general arguments convincing, I’d be doing the same.
If you find yourself defending an intellectual position that others don’t hold, then you’re quite possibly an ‘intellectual hero’. But if you find yourself acting on that position without considering what might happen if you’re wrong, then you’ll end up a real-world villain…