Geeks Dig Metaphors: Paradigm A Dozen

All work and no play makes Jack a dull boy, all work and no play makes Jack a dull boy, all work…

This series of posts has become rather longer than the very short thing I was originally going to write, but we’re heading into the home stretch now. (Parts one, two and three for latecomers.)

This post is the part that inspired the overall title for this mini-series, and is probably going to be the least convincing. But I find it the most convincing.

You see, in large part I agree with the Singulatarians, and that’s precisely why I disagree with them.

Let me explain.

Belief in the Singularity is part of what we might call a ‘paradigm’ or ‘meme-plex’ (depending on precisely what species of wanker we are), or a world-view. It’s one that, in its broadest outlines, I share, and it is that the universe can be regarded as pure information.

People arrive at this position – a sort of scientific neo-Platonism – from a variety of scientific sources, but you can get to it from proper computer science (see Scott Aaronson’s wonderful series of lectures on Quantum Computing Since Democritus), information theory, cybernetics, quantum theory via either the Copenhagen or Many-Worlds interpretations, Bayes’ theorem, Solomonoff induction or probably a dozen other ways. Almost all these fields, incidentally, come originally from work by John von Neumann…

In brief, this world-view could be summarised as:

  • Most of modern science is more-or-less correct. In particular, relativity, evolution and quantum physics are largely correct
  • It makes no sense to talk about things that are outside of the physical world, such as souls or gods, unless those things can be proved to exist by some effect they have on the physical world
  • Any physical system can be modelled by a Turing machine, given enough time and memory
  • Any two things which are isomorphic are the same (the identity of indiscernibles)
  • The scientific method – form a hypothesis, make a prediction from that hypothesis, test the prediction, revise the hypothesis in light of the results – is the only way of obtaining accurate information about the universe
  • The mind is a purely physical process
  • If you want a book explaining this viewpoint in great detail, I recommend David Deutsch’s The Fabric Of Reality (which I reviewed here)

    Now, most of this is stuff which is fairly sensible, and with which I (and I suspect most people) could agree. And it leads to the belief that both the universe and the human mind can be thought of in some sense as computer programs, or as mathematical formalisms.

    (Those of you who know a little of the history of philosophy will now get why I referred to the attitude of Singulatarians as Panglossian in the last post – Doctor Pangloss in Candide being of course a satire of Leibniz, whose ideas are very much a 17th century precursor to this worldview).

    At one extreme, this belief that the universe can be modelled as a computer program simply leads to things like Steve Yegge’s argument that we should treat questions like ‘what’s outside the universe?’ the same way we should treat an undef in programming. At the other, it leads to the ideas of mathematical physicist Max Tegmark, who argues that all mathematical formal systems have an objective reality in exactly the same way our universe does.

    This worldview shapes the Singulatarians’ thinking in a variety of ways, from their view of the end result of the Singularity, to their thoughts on how it should be created (a lot of the discussions around the Singularity Institute involve people trying to come up with a rigorous decision theory, based on Bayesian probabilities, that would work in a quantum multiverse, because they believe this to be necessary for the creation of an artificial intelligence that won’t harm humanity).

    But while this worldview is probably the closest we’ve got to a ‘correct understanding of the universe’ so far, it is only a model. And I think going from that model to statements that the mind ‘is’ a computer program, or that the universe ‘is’, is a step too far – confusing the map with the territory. Our models – our worldviews – are metaphors. They’re ways of understanding the universe. They’re not the actual universe itself, any more than Burns’ love really was a red red rose.

    Every other model we’ve had of the universe so far – the Aristotelean worldview, the clockwork universe of Newton and so on – has proved incorrect. Those models all worked for a restricted domain – those cases that could be understood and measured at the time, and that people had bothered to check. But it was the edge cases – those areas in which those worldviews were stretched to their limits – that caused those models to fall down.

    And every time, while the predictions made for things that were already known stayed the same (Aristotle, Newton and Einstein all predict that things will fall to the ground), the underlying view of the universe changed immeasurably, along with the predictions for the unknown.

    Our knowledge of science is immeasurably better now than, say, a hundred years ago, but it’s not yet complete. It may never be, but no matter what, things like a quantum theory of gravity, if we ever find one, *will* bring with them new ways of looking at the world, and I have no doubt that saying the universe is a computer program, or that the human mind is one, will look as ridiculous as saying that things move towards their natural place based on how much earth, air, fire or water they contain.

    The Singularity is, pretty much by definition, the place where our current thinking breaks down, even if you accept all the arguments for it. Now, either we’ve managed to get everything exactly right for the first time in history, and what’s more that getting everything exactly right will lead to immortality just before Ray Kurzweil would otherwise die, followed by the creation of heaven on Earth, or there’s a mistake in our current scientific thinking.

    I’d like to believe the former, but I’m not putting money on it…

    Geeks Dig Metaphors: The Politics Of The Singularity

    (Following on from the introduction and the technical problems)

    Now, the Singulatarian worldview can be summed up, roughly, as “Real Soon Now, we’re going to enter a Golden Age which will last forever. This Golden Age will probably be brought about by companies like Google, (with the help of geeks like me, and other people who can see how right I am!), so long as government doesn’t interfere with them, and is what the whole of humanity has been leading up to!”

    Now, that’s a dangerous message in itself – you’ve got a mythical Golden Age in the future to look forward to, support for unrestrained corporatism (so long as the corporations are working towards this Golden Age, or can appear as if they are) and a group of people (geeks) singled out as being better and more important than everyone else. Add in a scapegoat group to blame if everything goes wrong (I suggest Microsoft, if anyone’s wanting advice) and you’ve got the recipe for fascism right there.

    Now, ever since John W Campbell there’s been a strong admixture of racism and boil-in-the-bag Nietzscheanism (Fans Are Slans!) in ‘geek culture’, along with a big chunk of groupthink and support for the big company over the individual (see most recently all the people having conniptions at the idea that there were people who weren’t going to go and see Scott Pilgrim on its opening weekend – these multi-billion dollar film corporations need your support, people, or they might stop making middle-brow high-concept comic adaptations! – as well as the frankly disgusting attitudes taken by comic fans every time a creator actually tries to assert any of their rights. ) That kind of thing is why I resist being referred to as a geek.

    But what’s more worrying is the Manifest Destiny aspect of this. Singulatarians (for the most part) believe this *has* to happen. Ray Kurzweil draws his straight lines, and they keep going on forever, so the Singularity *must* happen. Tipler is even firmer on this point – he argues that the Omega Point is a boundary condition for the wave function of the multiverse (this means it must happen by logical necessity, and if it didn’t the universe would cease ever to have existed). The Singularity is inevitable.

    Now, this kind of thinking is very popular with extremists of both left and right – come the Glorious Revolution, all will be right/the Invisible Hand of the market will fix everything. The attraction in both cases is that it allows the privileged not to feel bad about their privilege. If the Worldwide Dictatorship of the Proletariat *HAS* to happen, then there’s no point trying to make poor people’s lives any better now – in fact it might be a bad thing, because it’ll discourage the proletariat from realising their oppression and rising up. Best just buy a new TV rather than help the poor. And if you’re on the right, it’s even easier – you’ve got your money because that’s the most efficient possible allocation of those resources. Helping poor people would actually be *inefficient* and in the long run would hurt them! Best just buy a new TV…

    This is the natural political result of *any* kind of predestination, and explains why, for example, it was so easy for Christopher Hitchens to switch from being a Trotskyist SWP member to being an adviser to the Bush White House (in fact a huge number of neocons had previously been on the hard left).

    It also explains why the Singularity is so beloved of tech billionaires – they’ve become billionaires as a necessary step to the Golden Age, and there’s no need for them to give their money to the poor or anything like that, because the Singularity will raise *everyone* to their level! In fact by keeping their money, and investing in tech companies, they’re helping the poor far more than redistribution could! Of course, it helps that people like Kurzweil think the current set-up is just great – Kurzweil actually says, in his book, that he believes it will soon be possible for us to create machines that will literally make *any physical object you want* – program it to make a steak, or a perfect atom-level copy of the Mona Lisa, and it will. He thinks that it will be important to protect the intellectual property rights of those who write these programs!!!

    But it’s also a very, very dangerous attitude.

    Because in so far as Kurzweil’s lines going off to infinity, measuring information processing over time, have any value at all, they’re also graphs of energy use (there is essentially a linear relationship between the two). And energy use is a problem.

    There are a whole host of environmental and economic disasters that look set to hit over the next century or so – overpopulation leading to massive food shortages, global warming, peak oil, antibiotic-resistant bacteria – and it is entirely possible that human civilisation as we know it will end in the next century. Even if you believe that any one of these is low-probability or soluble, the combination adds up to a pretty high risk.

    But if you *know* – because you can draw a straight line – that all the world’s problems will be solved Real Soon Now – then you don’t need to do anything about these problems yourself, because it’ll all be fine.

    Not only that, but you’re not going to support any efforts by anyone else to mitigate these risks, because it’s a waste of resources. You won’t vote for politicians who want to fix these problems, because you don’t believe that the problems are real.

    (I am going to exempt Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence here. He sees the creation of a singularity of his favoured type as a way to avoid existential risk for humanity, and has decided to try to do this himself because he sees it as a moral duty to do something about it. He’s got an ego the size of the universe, some rather messianic beliefs about himself, and he hasn’t backed up his talk with any actual measurable action, but compared to the rest of these people he’s a model of sanity and clear-headedness, which is why I occasionally link his group blog Less Wrong here).

    The Singularity may well happen at some point – the Singulatarians may be right and I may be wrong. But even if it doesn’t, they’re right when they say that life in a hundred years will be unimaginably different from how it is today. The question is whether it will be unimaginably better or unimaginably worse. And that is going to be decided by the actions of every person alive today, and the decisions they make. If we manage to find solutions to our problems, we may well end up with something like the Singularity, eventually, but *we need to work toward the solutions first*.

    And for a bunch of rich, technically skilled people with access to the media, politicians and business leaders, to abdicate their responsibility to make those decisions and find those solutions, in favour of the worst kind of Panglossianism, is not only morally dubious but *dangerous* – in a very real sense they’re betting the earth that they’re right, and it’s not theirs to bet.

    Geeks Dig Metaphors: The Technical Problems With The Singularity

    Back to introduction

    I have come to the conclusion that anyone who talks about how easy it’s going to be to simulate a human brain in a computer either understands computers but doesn’t understand biology, or doesn’t understand computers but understands biology. I’m currently studying for a Master’s in Bioinformatics, so I have an equal lack of understanding of both subjects.

    The argument seems to be “the genome is like a computer program – it contains all the information needed to build a person. The genome’s only a few gigabytes long, so the Kolmogorov complexity of the ‘create a brain’ program must be less than that. We have computers that can run programs that long, so it’s only a matter of time before we can run a ‘create a brain’ program on our computers”.

    Now, firstly, I simply don’t believe that one can reduce the problem in this way. Intuitively, it doesn’t make much sense. I have a little over 20GB of Beach Boys MP3s/FLACs on my hard drive. They couldn’t be compressed much more than that without loss of information. The human brain is supposed to be the most complex known object in the universe. I simply don’t believe that the most complex known object in the universe has a lower Kolmogorov complexity than the surf-pop harmony stylings of the Wilson brothers. I mean, I’ve not even counted my Jan and Dean MP3s in there!

    But let’s ignore the intuitive problems, and also ignore various practical issues like epigenetic inheritance, and assume for the moment that the human genetic code is a computer program, and this 3GB (or “3/4 of the size of the directory which just contains my very favourite Beach Boys live bootlegs” in human terms) program will, if run on the correct hardware, produce a human body, including the brain. Here is where we hit the problem with the concept of Kolmogorov complexity, so freely bandied around by a lot of these people.

    Basically, Kolmogorov complexity is a measure of the length of the shortest computer program that can produce a given output. For example, say we want to run a program that outputs “Hello World!” and a line break. In Perl (the language with which I’m most familiar) this would be:

    print "Hello World!\n";

    That’s 23 bytes long. This means that we know the Kolmogorov complexity of a Hello, World program must be 23 bytes or less. It might be possible to do it in fewer bytes in some other programming language, but we know that any program longer than that isn’t the shortest possible program that does the job.
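    To labour the “or less” point: any working program only gives you an upper bound. Here, purely as an illustrative sketch (the variable name is just something I’ve picked for the example), is a much more long-winded Perl program that produces exactly the same output – its length is also an upper bound on the Kolmogorov complexity of that output, just a far worse one than the one-liner above.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A deliberately roundabout way of printing the same thing. This program
    # also outputs "Hello World!" followed by a line break, so its length is
    # an upper bound on the Kolmogorov complexity of that output too - just a
    # much looser bound than the 23-byte one-liner above.
    my @words = ('Hello', 'World!');
    print join(' ', @words), "\n";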

    Now, the reason Kolmogorov complexity is a useful measure is that it doesn’t vary *much* between languages and platforms. Say you have a program written in Perl, but for some reason you want to run it in Java. ‘All’ you need to do is wrap it in another program, which converts Perl to Java, so if your ‘convert Perl to Java’ program is, say, 1.2 megabytes (that’s the size of the /usr/bin/perl program on my GNU/Linux system, which converts Perl to machine code, so that’s a reasonable size), the length of the shortest Java program to do that thing must be at most the length of the Perl program plus 1.2 megabytes.

    As program size gets bigger, that ‘plus 1.2 megabytes’ gets swamped by the size of the program, so Kolmogorov complexity is a very good measure of the complexity of *getting a particular computer to perform a task*.
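    (In symbols, this is just the standard ‘invariance theorem’ of Kolmogorov complexity theory: if K_Perl(x) is the length of the shortest Perl program that outputs x, and the Perl-to-Java translator is c bytes long, then K_Java(x) ≤ K_Perl(x) + c. The constant c – 1.2 megabytes in my example – doesn’t depend on x, which is why it stops mattering as programs get big.)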

    But the problem is that it doesn’t take into account the complexity of *the hardware performing the task*, so it’s not very good when moving between vastly different types of hardware.

    Take a jukebox as an example. If you want it to play, say, Good Vibrations by the Beach Boys, you look for the code for that (say, A11) and punch that into the jukebox, which executes that ‘program’. Now, that proves that the Kolmogorov complexity of the ‘play Good Vibrations’ program is at most a couple of bytes.

    But if I want my computer to play Good Vibrations, the simplest program that will do it is ‘playsound /media/disk/Beach\ Boys/Smiley\ Smile/Good\ Vibrations.mp3’ – that’s thirty-five times the length of the jukebox ‘program’. But that’s not all – you have to count the size of the ‘playsound’ program (15 kilobytes) and the MP3 file (3.8 megabytes). Moving our ‘program’ from the jukebox to my computer has made it well over a million times as long, because we’ve had to take information that was previously in hardware (the physical Beach Boys CD within the jukebox and the ability of the jukebox to play music) and convert it into software (the MP3 file and the playsound program).
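    (There’s a standard way of writing that down, for anyone who wants the hand-waving formalised: the jukebox example is the difference between K(x) – the complexity of producing Good Vibrations from nothing – and K(x | jukebox), the complexity of producing it given a machine that already has the CD and the playback mechanism inside it. The conditional version can be almost arbitrarily smaller than the unconditional one. If the 3GB genome figure measures anything, it measures something like K(brain | egg cell, womb, mother), not K(brain).)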

    Now, I never normally talk about my day job here, because I don’t want to give anyone an excuse to confuse my views with those of my employers, but it’s almost impossible not to, here. The lab in which I work produces a piece of software which allows you to run programs compiled for one kind of computer on another kind. The product I work on allows you to take programs which are compiled for computers with x86 processors (such as, almost certainly, the one you’re using to read this) that run GNU/Linux, and run them on machines which have POWER chips, which also run GNU/Linux.

    Now, this program took many person-decades of work by some very, very bright people, and a huge amount of money, to develop. It’s a very complex, sophisticated piece of software. Every time even something relatively small is changed, it has to go through a huge battery of tests because something put in to make, say, Java run faster might make, for example, the Apache web server break. (This is lucky for me as it means I have a job). Even given this, it’s still not perfect – programs run slower on it than they would on an x86 box (sometimes not very much slower, but usually at least a bit slower), and there are some programs that can’t work properly with it (not many, but some). It’s astonishingly good at what it does, but what it does is, by necessity, limited. (To the extent that, for example, the programs still have to be for the same GNU/Linux distro the POWER machine is running – you can’t use it to run, for example, a Red Hat 5 program on a Red Hat 4 box).

    Now, this hugely complex, sophisticated, expensive-to-create program converts from one chip type to another. But both chips are Von Neumann architectures. Both use the same peripheral devices, and the same interfaces to those devices. Both are designed by human beings. And the people writing that program have access to information about the design of both types of chip, and can test their program by running the same program on an x86 box and on a POWER box with their program and seeing what the result is.

    Now, when it comes to the ‘program’ that is the genetic code, none of that’s true. In this case, the hardware+operating system is the cell in which the genetic code is embedded, plus the womb in which that cell gets embedded, the umbilical cord that brings it nutrients, the systems that keep the mother’s body temperature regulated, the hormone levels that change at different times… basically, instead of having two chips, both of which you can examine, and leaving everything else the same and trying to get your three gig program to run (which I know from experience is in itself a massive problem), you have to simulate an entire human being (or near as dammit) in software in order to run the genetic code program – which we’re running, remember, *in order to simulate part of a human being!*

    And you have to do that with no access to source code, with no way of testing like-for-like (unless there are women who are lining up to be impregnated with randomly-altered genetic material to see what happens), and with the knowledge that the thing you’re creating isn’t just a computer program, but at least potentially a sentient being, so a coding error isn’t going to just cause someone to lose a day’s work because of a crash, but it’ll give this sentient being some kind of hideous genetic illness.

    And what are you left with, at the end of that effort? A baby. One that can’t interact with the physical world, and that is many, many times slower than a real baby. And that will undoubtedly have bugs in (any computer program longer than about thirty lines has bugs in). And that requires huge amounts of energy to run.

    I can think of more fun, more reliable ways of making a baby, should I happen to want one.

    But the point is, that even that would be a phenomenal, incredible achievement. It would dwarf the Manhattan Project and the moon landings and the Human Genome Project. It would require billions in funding, thousands of people working on it, many decades of work, and several huge conceptual breakthroughs in both computer science and biology.

    Which is not to say that it’s impossible. I’ve never seen a good argument against it being theoretically possible to create artificial general intelligence, and I’ve not seen any convincing ones against the possibility of uploading and emulating a particular person’s brain state. And assuming that technology continues to improve and civilisation doesn’t collapse, it may well happen one day. But people like Kurzweil arguing that the relatively small size of the genome makes it a trivial problem, one that will be solved in the next couple of decades, are like those people who drew graphs in 1950 showing that if top speeds achieved by humanity carried on increasing we’d be at the speed of light in the 1990s. The first example of that I saw was in Heinlein’s essay Pandora’s Box. The retro-futurology page has an examination of some of the other predictions Heinlein made in that essay. Suffice it to say, he didn’t do well. And Heinlein was far more intelligent and knowledgeable than Kurzweil.

    And of course, hidden in that paragraph above is a huge assumption – “assuming that technology continues to improve and civilisation doesn’t collapse”. It’s that, in part, that I want to talk about in the next part of this, coming up in a few hours.

    Geeks Dig Metaphors: Introduction

    A metaphor is a glorious thing,
    A diamond ring,
    The first day of summer
    A metaphor is a breath of fresh air,
    A turn-on,
    An aphrodisiac

    Chicks dig, dig, d-i-g, dig, dig metaphors,

    “Metaphors are scraps of resemblance that tie us down.” – Ted Nelson

    Let’s talk about the Singularity.

    The Singularity is the closest thing to a religion for a large number of people who self-identify as ‘geeks’ at the moment. It’s also an idea which has been widely publicised in the last couple of weeks with the Singularity Summit having achieved some mainstream media coverage. The Singularity also *matters*, because it’s something that is believed in, to a greater or lesser extent, by a large number of tech billionaires – people who have actual influence and power.

    The Singularity is a term taken from a science fiction story, and has a variety of meanings – almost one for every ‘Singulatarian’, but roughly speaking they all believe that “The Unity of Nanoscience, Biotechnology, Information Technology and Cognitive Science” (to borrow from the title of a book I’m currently reading) will lead to humanity becoming gods.

    From Saturday Morning Breakfast Cereal by Zach Weiner

    That’s not an exaggeration. The basic belief of Singulatarians is that at some point in the future, we will have enough knowledge of biology, computing and nanotech that we can make ourselves immortal, upload our minds to tiny nanotech computers which will then take over the entire universe, turning it into computronium. We would then become immortal omniscient gods in a virtual reality indistinguishable from the real thing, except that it would be geared entirely to our personal happiness. There is another theory which states that this has already happened.

    The timescale of this is, of course, disputed. At one end is Ray Kurzweil, stupid person and author of the moronic The Singularity Is Near, who argues due to his ability to draw a straight line that by the year 2010 “computers will disappear as distinct physical objects, with displays built in our eyeglasses and electronics woven into our clothing” (three months to go. I hope I can afford new glasses and clothes before Christmas) and that we will all become immortal gods within his lifetime. Kurzweil is regarded as one of the greatest futurists of our time, and billionaire technologists fawn on his every word.

    At the other extreme is eccentric right-wing astrophysicist Frank Tipler, who also thinks this will happen, but believes it will take about fifteen billion years for us to become immortal, omniscient and omnipotent. Also, while Kurzweil bases his argument on drawing lines (and not even very accurate lines – I’m not usually a particular fan of P.Z. Myers, but I was very pleased to see that blog post from last year, because it saved me having to write the exact same thing), Tipler’s argument (in his fascinating The Physics Of Immortality (apparently out of print) and his frankly barking Physics of Christianity, but first put forward in The Anthropic Cosmological Principle , which is also the most sensible explanation) is based on known physics and makes testable predictions. However, unlike Kurzweil, Tipler also identifies the end result of his speculations with the Christian God, so he’s regarded as a crank, rather than an eminent futurist. (He *is* a crank, as anyone who reads his recent work, like this paper arguing that Obama is evil because of the existence of the luminiferous aether and the arguments put forward against abortion by the film Starship Troopers, will recognise, but he’s less so than Kurzweil). Tipler also argues that his Singularity will be an actual physical singularity – the universe will be condensed to a single point he calls the Omega Point.

    Now, these ideas are actually inordinately appealing, to me at least. I like the idea of becoming an immortal, omniscient superintelligent superbeing living in a universe which exists solely for my pleasure. That sounds like a good thing to me. And I’m probably far more likely to want the Singularity to be true than the average person, because I’m not especially attached to my body. I’m balding, asthmatic, dyspraxic, overweight, have migraines and mild psoriasis, and have various sensory issues, the combination of which means that most of the time the closest I get to physical pleasure is ‘in less discomfort’. Upload my mind into a computer capable of processing text and music, and you’ve kept 95% of my joys in life while losing 100% of the unpleasantness. That seems like a good deal to me – especially if that computer can then simulate my mind in any body I want.

    Unfortunately, I see three problems with the Singulatarian view – one technical, one political, and one philosophical. All of these are surmountable, but there seems to be little evidence that any of them *will* be surmounted – or that any Singulatarians even consider these to be real problems.

    This post is getting much, much longer than I thought it would, so I’m breaking it into several sections. I’m still writing the other sections as I post this, but they’ll be posted as I get them done. I won’t be replying to comments til they’re all complete.

    Linkblogging For 26/08/10

    This month’s Superman/Batman has a good bit in it. I know, I’m as surprised as you. But here it is – The Joker and Lex Luthor as Calvin and Hobbes

    I don’t know if I’ve linked to him before, but Ben Gunn has spent thirty years in prison for a murder he committed when he was a teenager. He renounced all violence decades ago, is now the only blogging prisoner in the UK, and is studying for a PhD in criminology. He needs money to pay his fees this year. As someone who finds it hard enough to pay for my own Master’s degree while working, I have a huge amount of sympathy for someone trying to study with no source of income at all, and would urge anyone who can afford it to help him.

    John Harris is one of the first sensible Labour supporters I’ve seen talking about the Lib Dems post-coalition.

    James Ward has tried to change his name to James Ward

    And hating cliches and special pleading is clearly the last acceptable prejudice

    Doctor Who From The Beginning: 5 – The Keys Of Marinus

    The Keys Of Marinus
    Writer: Terry Nation
    Director: John Gorrie
    DVD availability: single-disc DVD Buy from Amazon

    This one’s taken a *long* time to get around to writing about, hasn’t it? Partly that’s because my DVD player broke, but also it’s because of the nature of this story. While it might have worked well as a serial, particularly for the eight-year-old demographic at which it’s clearly aimed, watching it all in one go (which is how I’ve been doing these) is like trying to eat ten dry cream crackers. When you start you think “this will be easy!”, but after a few minutes you’re thinking “Why on earth did I think this was even possible? I want to die!”

    The Keys Of Marinus is the first bad Doctor Who story. Why on earth this was saved when Marco Polo wasn’t is something we shall never know. The performances – at least those of the main cast – are as good as ever and the production design is *astonishing*, but there’s a gigantic hole in the middle of the script.

    All the previous stories have been *about* something, or several somethings – An Unearthly Child was about future shock and the generation gap, The Daleks was about fascism, Edge Of Destruction was meant to show us more about the characters and bring their relationship forward, and Marco Polo was, by children’s TV standards, a pretty decent stab at showing how life was lived in another continent and another century.

    But The Keys Of Marinus isn’t about anything at all except filling up six twenty-five minute TV slots between the Telegoons and Juke Box Jury. It’s the first example of someone trying to write ‘a Doctor Who story’ – it’s the first of many, many attempts to recapture the Daleks’ success, but even though it’s written by Terry Nation it comprehensively misses everything that made them a success. While the Daleks were hideous inhuman tank-robot-monster-aliens with zap guns, Yartek, leader of the alien Voord, is a bloke in a wetsuit.

    It also has a rather… confused… morality. An ancient scientist kidnaps the TARDIS crew and holds them to ransom until they retrieve the pieces of his giant mind-control machine that, until someone found a way round it, was controlling the minds of every person on the planet. He’s the goodie.

    But that doesn’t matter, because this isn’t really a story at all. It’s a ‘quest’, where Our Heroes have to find all the pieces of the MacGuffin, splitting up into groups and adventuring in several different ‘exciting’ locations, against various ‘scary’ foes of the psychic-brain-in-jar variety, before finally getting together to defeat the big baddie. In other words it’s the sort of plotting that had previously been a staple of Republic serials and bad superhero team-up comics, but is these days better known as video-game plotting.

    In other words, this is the first example of Doctor Who being written by a lazy hack. And just how much of a lazy hack Nation is being here can be seen by the place names – Marinus (a watery planet) and Morphoton (where you’re dreaming while you’re there) must have taken whole *seconds* to think up. We also have those Terry Nation staples “plants… that are more like animals!” (unless you’ve watched a lot of Nation’s work you’ve no idea how tedious ‘scary’ jungles can get), casual sexism (both Barbara and Susan want nothing more in the universe than a nice dress, while the Doctor wants a well-equipped lab), and a heroic character with a name like Terry Nation who saves the day. (In this case the character is called Tarran, but in future we have magical substances called Tarranium, heroes called Tarrant and, most blatantly of all, a sexy super-spy called Sara Kingdom).

    Terry Nation could, when he wanted, be a very good writer. Unfortunately, he only wanted to three times in his many years on Doctor Who – the first two Dalek stories and Genesis Of The Daleks (and it’s very debatable how much of that story he actually wrote). The rest of the time he was the absolute definition of a hack.

    While reviewing the previous stories, I’ve found myself straining against my self-imposed 1000-word limit – they all have many points of interest, good lines, well-composed shots or *something*. In this case you have a bunch of good actors and an excellent set designer doing their best with terrible material, and there are only so many synonyms for ‘not very good’ you can come up with.

    Doctor Who has always been a children’s programme, but the stories before this one all pretty consistently refused to use that as an excuse to be bad TV. But this is just unbelievably lazy writing, and the story is only redeemed at all because the cast haven’t yet realised that it’s OK to give a sub-par performance when handed a sub-par script. But even there, one of the chief joys of early Who is watching William Hartnell’s extraordinary performance, and he’s off on holiday for two weeks here, when the team split up.

    This wouldn’t seem so bad in another context – to be honest, the script isn’t *MUCH* worse than the standard of, say, series three, much of the Troughton era, mid-period Pertwee or whatever. Whenever the programme makers get lazy and think ‘will this do?’, The Keys Of Marinus seems to be the standard to which they sink. But here it’s placed during an otherwise impeccable run of classic stories. This is the only unarguably *bad* story in the show’s first year (one can argue about The Sensorites, and we will in a couple of weeks, no doubt), and while it’s easy to see what they were trying to do, it’s amazing that this story didn’t kill off the ‘rubber-suited monster’ genre of Doctor Who for good.

    Luckily, next up came one of the best stories the show’s ever produced…

    The Alternative Vote system

    In a little under nine months, the British people will be voting on changing our voting system from First Past The Post (FPTP) to the Alternative Vote system (AV).

    The Liberal Democrats want to go much further and have the Single Transferable Vote system (STV), while the Tories hate the idea and want to keep things as they are. Labour put AV in their manifesto, but are fighting it now because that’s what Labour do.

    So it’s a compromise.

    But it’s a compromise that solves one of the two main problems with our voting system, and will make it easier to solve the other one, so I would urge you to vote yes in the referendum.

    There are two main areas where our voting system is unfair. The first is proportionality, which AV does little to address. In the last election the Tories got an MP for every 35,000 people who voted for them, Labour got one for every 33,000, the Lib Dems one for every 120,000 and the Greens one for every million or so. That’s not fair, and should be changed, but unfortunately while the Lib Dems and Greens want to change that, Labour and the Tories don’t. I wonder why?

    But there is another aspect which is equally unfair, and that is preferentiality. First Past The Post, our current system, is a winner-takes-all system. But it may well be the case that the majority don’t support the winner. Imagine a case where you have three parties – the Evil Bastard Party, the Quite Nice Party and the Very Nice Party. In a constituency, 34% of people vote for the Evil Bastard candidate, 33% for the Quite Nice candidate and 33% for the Very Nice candidate. The Evil Bastard candidate then wins – even though the Quite Nice supporters would rather have the Very Nice candidate than the Evil Bastard, while the Very Nice people would rather the Quite Nice candidate. The vast majority of people are then unhappy with ‘their’ MP, who represents ‘them’.

    This kind of thing does happen – my friend Dave often uses the example of Hazel Blears, the Labour MP for Salford and Eccles, who is horribly unpopular. It’s probably fair to say that in that constituency most Lib Dems would have preferred the Tory candidate to her, and most Tories would have preferred the Lib Dem (and the Socialist Worker candidate stood as TUSC/Hazel Must Go!). So that constituency has an MP who 60% of the voters wanted out.

    The Alternative Vote fixes that.

    How It Works
    Everyone is given a ballot on which all the candidates who are standing are listed. The voter then ranks them in order. If one candidate gets more than 50% of the first-preference votes, that candidate is the winner. Otherwise, the lowest-scoring candidate is knocked out, and each of their votes is transferred to that voter’s next remaining preference. This carries on until one candidate has more than 50% of the votes. This means that whoever wins, more than half the voters think they’re not the worst alternative.

    An example – imagine we have four parties (Red, Blue, Yellow and Green) and nine voters who vote as follows:

    Voter 1 Yellow Green Blue Red
    Voter 2 Yellow Blue Red Green
    Voter 3 Red Blue Yellow Green
    Voter 4 Red Blue Green Yellow
    Voter 5 Yellow Green Blue Red
    Voter 6 Red Green Blue Yellow
    Voter 7 Blue Green Yellow Red
    Voter 8 Blue Red Green Yellow
    Voter 9 Green Red Yellow Blue

    Round 1 – We have 3 Yellow, 3 Red, 2 Blue and 1 Green first preferences. Green is eliminated as it has the fewest first preference votes, and the votes redistributed:

    Voter 1 Yellow Blue Red
    Voter 2 Yellow Blue Red
    Voter 3 Red Blue Yellow
    Voter 4 Red Blue Yellow
    Voter 5 Yellow Blue Red
    Voter 6 Red Blue Yellow
    Voter 7 Blue Yellow Red
    Voter 8 Blue Red Yellow
    Voter 9 Red Yellow Blue

    Round 2 – We have 4 Red, 3 Yellow and 2 Blue, so Blue is eliminated:

    Voter 1 Yellow Red
    Voter 2 Yellow Red
    Voter 3 Red Yellow
    Voter 4 Red Yellow
    Voter 5 Yellow Red
    Voter 6 Red Yellow
    Voter 7 Yellow Red
    Voter 8 Red Yellow
    Voter 9 Red Yellow

    We now have 5 Red votes, which is more than 50%, so Red wins
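
    For anyone who prefers code to worked examples, here’s a rough Perl sketch of that counting procedure. The ballot data is just the nine voters above, and the whole thing is an illustration of the counting rule rather than anything an actual returning officer would run:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # The nine ballots from the example above, each listed from first
    # preference to last.
    my @ballots = (
        [qw(Yellow Green Blue Red)],
        [qw(Yellow Blue Red Green)],
        [qw(Red Blue Yellow Green)],
        [qw(Red Blue Green Yellow)],
        [qw(Yellow Green Blue Red)],
        [qw(Red Green Blue Yellow)],
        [qw(Blue Green Yellow Red)],
        [qw(Blue Red Green Yellow)],
        [qw(Green Red Yellow Blue)],
    );

    my %eliminated;
    while (1) {
        # Each ballot counts for its highest-ranked candidate still in the race.
        my %tally;
        for my $ballot (@ballots) {
            my ($choice) = grep { !$eliminated{$_} } @$ballot;
            $tally{$choice}++ if defined $choice;
        }
        my @order = sort { $tally{$b} <=> $tally{$a} } keys %tally;
        my ($leader, $loser) = ($order[0], $order[-1]);
        if ($tally{$leader} > @ballots / 2) {
            print "$leader wins with $tally{$leader} votes out of ", scalar @ballots, "\n";
            last;
        }
        # No majority yet, so knock out the lowest-placed candidate and go round again.
        print "Eliminating $loser ($tally{$loser} votes this round)\n";
        $eliminated{$loser} = 1;
    }

    Run on those ballots it eliminates Green, then Blue, and then declares Red the winner with 5 of the 9 votes, exactly as in the rounds above. (Real AV counts also need a tie-breaking rule for when two candidates are level at the bottom, which this sketch doesn’t bother with.)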

    Advantages Of The System
    The principal advantage of this system is that there is no longer any such thing as a ‘wasted vote’, and that it allows people to vote *honestly*. The VAST majority of people in this country, in my experience, don’t vote so much out of support for one party as to keep the other lot out. This is one reason, for example, why so many people are screaming ‘betrayal!’ at the formation of the coalition. Many people supported Labour, but because Labour couldn’t win in their seat, they voted Lib Dem to ‘keep the Tories out’, rather than because they actually supported us.

    But of course this works every way – there are Tories who vote Lib Dem to keep Labour out and Lib Dems who vote Labour or Tory to keep the other party out. (There don’t seem to be many Labour or Tory supporters who vote for the other big party to keep the Lib Dems out, at the moment, but there probably will be in any future FPTP elections). And there are many, many supporters of smaller parties who know their party hasn’t a chance, so vote for the least-worst option.

    But if we have AV, at the next election you don’t have to hold your nose. You put the party you actually support in first place, and then if the Legalise Cannabis And Criminalise Sodomy Party (or whatever tiny fringe party most closely matches your views) doesn’t come first, you haven’t ‘wasted’ your vote, and haven’t failed to keep the party you hate most out.

    It also has a number of other advantages:

    It would bring increased representation for smaller parties, but would still keep out rabid extremists. My guess is that it would lead to more Lib Dem MPs, a couple more Greens, and possibly one or two from some of the hard left parties (especially if the various fringe parties co-ordinate their efforts as in the last election, where RESPECT and the Greens worked together). On the other hand, no party that was *hated* by the majority could get any seats, so AV would actually make it *less* likely that the Bastard Nazi Party would get in. (Other far-right extremists, like Racist UKIP, might get a seat or two, but that’s a small price to pay for greater democracy).

    It would also mean that a lot of the negative campaigning – ranging from “X Can’t Win Here!” (because now they can) through to personal abuse against candidates – would have to stop. Currently if you’re a Labour politician in, say, a Labour/Conservative marginal, it doesn’t matter if you alienate every Lib Dem supporter by saying “people who vote Lib Dem worship Satan and think Jo Brand is the funniest one on QI” because you’re trying to persuade people not to vote Lib Dem. Under AV, you want them to give you a high second preference, so you’d be more likely to say “I have the greatest respect for my Lib Dem opponent, and urge my voters to give her their second preference” in the hopes that she’d say the same about you.

    It would get rid of many safe seats – at the moment, Hazel Blears and her ilk are immune, because unless everyone who doesn’t want her as MP rallies round a single candidate, she gets in by default. Now, so long as she’s the least popular option, out she goes.

    And for those who are unhappy with the coalition, it helps send parties a message. If, at the last election, the majority of Lib Dem supporters had put Labour second, and the majority of Labour supporters had put the Lib Dems second, then both parties would have a very strong incentive to work together, knowing that would be what their supporters wanted. On the other hand if the majority of Lib Dem voters had put the Tories second, then it would mean that the Lib Dems would have a clear answer to the cries of ‘betrayal!’

    The only disadvantage I can see – and it’s quite a big one – is that AV is not proportional. But then neither is our current system – and a preferential non-proportional system is better than a proportional non-preferential system like the horrible D’Hondt system we use in the European elections. The system the Lib Dems want – and that I think is the best myself – is called STV (or the British Proportional System), and is both proportional *and* preferential. But the interesting thing is that STV and AV are essentially the same system, except you merge several constituencies together and then have the top few candidates become MPs, rather than just the top one. That means that if AV goes through, it would be pretty trivial to change to STV in the future if enough people want that (and since AV would probably lead to increased representation for parties which want a proportional system, that change might happen in say ten or fifteen years).

    So while AV isn’t my favourite system, it *is* my *second-favourite* system, and I’d rather have my second favourite than my most-hated. If you would too, vote “Yes” in May 2011.