Bullet-Biters And Bomb-Testers

Sometimes serendipity happens. I was trying to think of a way to link together a couple of sections of the Hyperpost book when I found this old post from Scott Aaronson’s blog Shtetl-Optimized.

In it, Aaronson talks about how he’d noticed a lot of overlap between libertarians and proponents of the Many-Worlds Interpretation (MWI) in quantum physics, and had tried to figure out why:

Some connections are obvious: libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments)…

My own hypothesis has to do with bullet-dodgers versus bullet-swallowers. A bullet-dodger is a person who says things like:

“Sure, obviously if you pursued that particular line of reasoning to an extreme, then you’d get such-and-such an absurd-seeming conclusion. But that very fact suggests that other forces might come into play that we don’t understand yet or haven’t accounted for. So let’s just make a mental note of it and move on.”

Faced with exactly the same situation, a bullet-swallower will exclaim:

“The entire world should follow the line of reasoning to precisely this extreme, and this is the conclusion, and if a ‘consensus of educated opinion’ finds it disagreeable or absurd, then so much the worse for educated opinion! Those who accept this are intellectual heroes; those who don’t are cowards.”

I think he’s on to something, but there’s a second aspect: what happens when those ideas actually hit reality.

Because libertarianism and the Many-Worlds Hypothesis have one big difference between them – one has immediate real-world consequences, and the other doesn’t. And that means believing in the first is no longer a purely intellectual exercise.

Leaving aside whether the claims for libertarianism (of the Ayn Rand type, which is what Aaronson is referring to) stack up logically, and assuming for a moment that you believe them to be correct, should you *act* as if they’re correct? To take Aaronson’s example, should we privatise the fire service?

If you’re a libertarian, you believe the answer should be yes – that privatising the fire service would result in fewer fires, fought more cheaply. But what if you’re wrong? Then the result would be people – potentially a lot of people – losing their homes.

So there’s a second level of calculation to be done here – how sure are you of your own reasoning ability and of the information (your priors, in Bayesian terms) you use to come to your conclusions? *When you factor in the probability of being wrong*, does the expected benefit if you’re right outweigh the expected loss if you’re wrong?
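To put that in rough expected-value terms (a back-of-the-envelope sketch, with p standing for your confidence that you’re right, B for the benefit if you are, and C for the harm if you aren’t): acting is only worth it when p × B > (1 − p) × C.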

Now, on this blog I often fall on the ‘bullet-biter’ side of things *when talking about ideas with no immediate real-world consequences*, because it’s both intellectually right and more interesting. Take the Many-Worlds Hypothesis. I consider it the most likely of the various explanations of quantum theory I’ve read, and would put my confidence in that judgement at about 80% – I’m a bullet-biter there, and proud of it.

And I’m a bullet-biter when it comes to certain forms of alternative medicine. For example, I’m convinced from the experimental evidence that taking certain vitamin supplements in large doses will massively decrease the risk of cancer, and I’ve said so on this blog too. And again, I’d put my confidence in that at about 80% (I rarely put my confidence in *anything* much above that).

Now, the downside of taking vitamins is that there’s a cost of maybe a pound a day and – if you believe the very worst possible reports, which as far as I can see have no evidentiary basis, but if we’re assuming I’m wrong we’re assuming I’m wrong – a very small increased risk of kidney stones. The benefit, if I’m right, is not getting cancer. An 80% chance of ‘not getting cancer’ outweighs a 20% chance of a 1% increase in kidney-stone risk, so it’s worth the pound a day to me to put my money where my mouth is and actually take the vitamins.
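Here’s that trade-off as a back-of-the-envelope sketch in Python. The 80% confidence, the roughly 1% kidney-stone figure, and the pound-a-day cost come from the paragraph above; the two utility numbers are made-up placeholders for illustration, not medical data.

```python
# Rough expected-value sketch of the vitamin decision above.
# The confidence, risk, and cost figures are from the post; the two
# utility figures are made-up placeholders, NOT medical data.

p_right = 0.8                 # confidence the supplements work
benefit_if_right = 100_000.0  # placeholder utility of avoiding cancer
harm_if_wrong = 1_000.0       # placeholder disutility of kidney stones
p_harm_if_wrong = 0.01        # ~1% increased kidney-stone risk, if wrong
cost_per_year = 365.0         # roughly a pound a day, over a year

expected_gain = p_right * benefit_if_right
expected_loss = (1 - p_right) * p_harm_if_wrong * harm_if_wrong + cost_per_year

if expected_gain > expected_loss:
    print("On these numbers, taking the vitamins is worth it")
else:
    print("On these numbers, it isn't")
```

Change the placeholder utilities and the conclusion can flip – which is the point: the sum is only as good as your estimate of how wrong you might be.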

On the other hand, one can come up with a real-world test for the Many-Worlds Hypothesis. If it’s true, then were I to stand at ground zero of a nuclear weapons test, I should expect to live through it. There would be a googolplex or so universes in which I’d die instantly, but I would not experience those, because I’d die too quickly. But there’d also be a one-in-a-googolplex chance of my surviving, which according to Many-Worlds means there’s a universe in which I *would* survive. That would be the only one I’d experience, so from my own point of view I’d survive.

But even though I am persuaded by the Many-Worlds Hypothesis, I’m not going to try that one out.

However, there are people out there who *would* do it, who would say “No, I’ll be fine! Drop the bomb!” – let’s call them bomb-testers.

And I think that while being a bullet-biter can be a good thing, being a bomb-tester never is.

A bullet-biter might say “I’m convinced the Singularity is coming, but I’ll give some money to Greenpeace just in case” while the bomb-tester would say “I’m convinced the Singularity is coming, so I’m not going to support environmental protection measures, because we’ll be gods in twenty years anyway”.
A bullet-biter might say “I’m convinced the Bible is literally true, but I’m not going to hurt anyone who thinks differently”. A bomb-tester would say “I’m convinced the Bible is literally true, so I’ll persecute homosexuals”.

I think a lot of people – particularly in the ‘skeptic’ community – think of themselves as being bullet-biters when they’re actually bomb-testers. They’ve reached a logical conclusion, and they’re going to act on it and damn the consequences. This is why some people say Richard Dawkins and fundamentalist Christians are the same kind of person – not because their beliefs are equally unjustifiable, but because they are both certain enough of their own rightness that they’ll act on it even when the downside of that action looks to the rest of us far worse than whatever upside they believe in.

Which is not to say that “acting on one’s beliefs” is a bad thing. One reason I have more respect for Eliezer Yudkowsky (of Less Wrong) than for other Singularitarians is that he’s willing to act on his beliefs (even though I don’t find his arguments convincing, and think he has more than a little of a Messianic streak at times). But his actions *take into account the possibility that he’s wrong* – he’s acting in a way that minimises expected harm. If he’s right and he doesn’t act, the world will end. If he’s wrong and he does act, then he wastes his time and looks a fool. Were I to find his general arguments convincing, I’d be doing the same.

If you find yourself defending an intellectual position that others don’t hold, then you’re quite possibly an ‘intellectual hero’. But if you find yourself acting on that position without considering what might happen if you’re wrong, then you’ll end up a real-world villain…


4 Responses to Bullet-Biters And Bomb-Testers

  1. Prankster says:

    Well done, you certainly put your finger on what’s been bothering me about a lot of the extremist opinions you hear nowadays, particularly the climate-change deniers. (Of course, a lot of those people would argue that they’re bullet-biters, but simply by encouraging people to do nothing they become bomb-testers…)

    Come to think of it, isn’t Pascal’s Wager a form of this argument as well? Except that the question of whether it’s got “real-world consequences” depends, again, on which side of the argument you fall on.

  2. pillock says:

    Bertrand Russell, too…

    It’s a good match between libertarianism and MWI, really…because ultimately you can’t really test either of them, you can only commit to belief. Libertarianism may be as falsifiable as MWI ultimately is not, but try telling that to a libertarian! They’ll still cling to their Answer even if all the evidence goes against them…and we know they will because, well…they do, don’t they? So the only question there is how much one is required to care about the bodycount. A nice parallel with the dude who walks toward certain death, secure in the knowledge that there will always be another “him” out there for whom it was not certain! And so again it’s just how far one is to take the caring about bodycounts. And of course in both examples the secret is that to commit to this action means one has already given up caring, no matter what temporary lip-service gets paid.

    The smaller matters intrigue me more, though; at a certain point with things like the vitamins (by which I also mean medical stuff in general), you do have to commit at some point along the line…because no one expects medical science to sit still, and no one expects new discoveries to cease, and so if you know you’ve got a provisional belief going what can you do about it except to decide whether to bite its bullet or not? I mean health is a pretty immediate concern, and you can’t wait on Pascal’s Demon to reveal total knowledge to you…

    Interesting!

  3. TAD says:

    I have problems with the Many-Worlds hypothesis, mostly because my gut says it can’t be true. It sounds too sci-fi to me, and frankly, unbelievable.

    I think Hawking’s new book (which came out in September) talks about how it’s most likely that different sets of laws apply to the universe, depending on what part of the universe you’re studying. He says that there most likely isn’t a unifying theory of physics. Quantum physics applies to the subatomic world. Classical physics applies to the everyday world that we live in. And perhaps there’s a third set of laws to explain relationships inside a black hole, for example. In short, the laws of the universe depend on your perspective… on where you happen to be at the time.

    • Andrew Hickey says:

      See, that sounds *more* ‘sci-fi’ to me, because that’s essentially saying that there’s no such thing as reality at all. It’s also logically inconsistent – because you’d need a set of meta-rules stating which set of rules applies where, and that set of meta-rules is itself then a unified theory.
