Sci-Ence! Justice Leak!

Geeks Dig Metaphors: Paradigm A Dozen

Posted in religion, science by Andrew Hickey on August 30, 2010

All work and no play makes Jack a dull boy, all work and no play makes Jack a dull boy, all work…

This series of posts has become rather longer than the very short thing I was originally going to write, but we’re heading into the home stretch now. (Parts one, two and three for latecomers.)

This post is the part that inspired the overall title for this mini-series, and is probably going to be the least convincing. But I find it the most convincing.

You see, in large part I agree with the Singulatarians, and that’s precisely why I disagree with them.

Let me explain.

Belief in the Singularity is part of what we might call a ‘paradigm’ or ‘meme-plex’ (depending on precisely what species of wanker we are), or a world-view. It’s one that, in its broadest outlines, I share: the view that the universe can be regarded as pure information.

People arrive at this position – a sort of scientific neo-Platonism – from a variety of scientific sources, but you can get to it from proper computer science (see Scott Aaronson’s wonderful series of lectures on Quantum Computing Since Democritus), information theory, cybernetics, quantum theory via either the Copenhagen or Many-Worlds interpretations, Bayes’ theorem, Solomonoff induction or probably a dozen other ways. Almost all these fields, incidentally, come originally from work by John von Neumann…

In brief, this world-view could be summarised as:

  • Most of modern science is more-or-less correct. In particular, relativity, evolution and quantum physics are largely correct
  • It makes no sense to talk about things that are outside of the physical world, such as souls or gods, unless those things can be proved to exist by some effect they have on the physical world
  • Any physical system can be modelled by a Turing machine, given enough time and memory
  • Any two things which are isomorphic are the same (the identity of indiscernibles)
  • The scientific method – form a hypothesis, make a prediction from that hypothesis, test the prediction, revise the hypothesis in light of the results – is the only way of obtaining accurate information about the universe
  • The mind is a purely physical process
  • If you want a book explaining this viewpoint in great detail, I recommend David Deutsch’s The Fabric Of Reality (which I reviewed here)

    Now, most of this is stuff which is fairly sensible, and with which I (and I suspect most people) could agree. And it leads to the belief that both the universe and the human mind can be thought of in some sense as computer programs, or as mathematical formalisms.

    (Those of you who know a little of the history of philosophy will now get why I referred to the attitude of Singulatarians as Panglossian in the last post – Doctor Pangloss in Candide being of course a satire of Leibniz, whose ideas are very much a 17th century precursor to this worldview).

    At one extreme, this belief that the universe can be modelled as a computer program simply leads to things like Steve Yegge’s argument that we should treat questions like ‘what’s outside the universe?’ the same way we should treat an undef in programming. At the other, it leads to the ideas of mathematical physicist Max Tegmark, who argues that all mathematical formal systems have an objective reality in exactly the same way our universe does.
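
For the non-programmers, here is a minimal Perl sketch of Yegge’s analogy (the one-element ‘universe’ array is entirely hypothetical, of course):

#!/usr/bin/perl
# Yegge's point in miniature: asking what's past the end of the array
# doesn't get you a thing, or an error - it gets you undef.
use strict;
use warnings;

my @universe = ('everything that exists');   # a hypothetical one-element universe
my $outside  = $universe[1];                 # indexing off the end of the array

print defined($outside)
    ? "something is out there\n"
    : "undef - the question has no referent\n";

Run it and you get the undef branch: the question parses perfectly well, it just has nothing to point at.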

    This worldview does impact on the Singulatarians, in a variety of ways, from shaping their view of the end result of the Singularity, to their thoughts on how it should be created (a lot of the discussions around the Singularity Institute involve people trying to come up with a rigorous decision theory, based on Bayesian probabilities, that would work in a quantum multiverse, because they believe this to be necessary for the creation of an artificial intelligence that won’t harm humanity).

    But while this worldview is probably the closest we’ve got to a ‘correct understanding of the universe’ so far, it is only a model. And I think going from that model to statements that the mind ‘is’ a computer program, or that the universe ‘is’, is a step too far – confusing the map with the territory. Our models – our worldviews – are metaphors. They’re ways of understanding the universe. They’re not the actual universe itself, any more than Burns’ love really was a red red rose.

    Every other model we’ve had of the universe so far – the Aristotelean worldview, the clockwork universe of Newton and so on – has proved incorrect. Those models all worked for a restricted domain – those cases that could be understood and measured at the time, and that people had bothered to check. But it was the edge cases – those areas in which those worldviews were stretched to their limits – that caused those models to fall down.

    And every time, while the predictions made for things that were already known stayed the same (Aristotle, Newton and Einstein all predict that things will fall to the ground), the underlying view of the universe changed immeasurably, along with the predictions for the unknown.

    Our knowledge of science is immeasurably better now than, say, a hundred years ago, but it’s not yet complete. It may never be, but no matter what, things like a quantum theory of gravity, if we ever find one, *will* bring with them new ways of looking at the world, and I have no doubt that saying the universe is a computer program, or that the human mind is one, will look as ridiculous as saying that things move towards their natural place based on how much earth, air, fire or water they contain.

The Singularity is, pretty much by definition, the place where our current thinking breaks down, even if you accept all the arguments for it. Now, either we’ve managed to get everything exactly right for the first time in history – and, what’s more, getting everything exactly right will lead to immortality just before Ray Kurzweil would otherwise die, followed by the creation of heaven on Earth – or there’s a mistake in our current scientific thinking.

    I’d like to believe the former, but I’m not putting money on it…

    Geeks Dig Metaphors: The Politics Of The Singularity

    Posted in Uncategorized by Andrew Hickey on August 30, 2010

(Following on from the introduction and the technical problems)

    Now, the Singulatarian worldview can be summed up, roughly, as “Real Soon Now, we’re going to enter a Golden Age which will last forever. This Golden Age will probably be brought about by companies like Google, (with the help of geeks like me, and other people who can see how right I am!), so long as government doesn’t interfere with them, and is what the whole of humanity has been leading up to!”

Now, that’s a dangerous message in itself – you’ve got a mythical Golden Age in the future to look forward to, support for unrestrained corporatism (so long as the corporations are working towards this Golden Age, or can appear as if they are) and a group of people (geeks) singled out as being better and more important than everyone else. Add in a scapegoat group to blame if everything goes wrong (I suggest Microsoft, if anyone’s wanting advice) and you’ve got the recipe for fascism right there.

Now, ever since John W. Campbell there’s been a strong admixture of racism and boil-in-the-bag Nietzscheanism (Fans Are Slans!) in ‘geek culture’, along with a big chunk of groupthink and support for the big company over the individual (see most recently all the people having conniptions at the idea that there were people who weren’t going to go and see Scott Pilgrim on its opening weekend – these multi-billion dollar film corporations need your support, people, or they might stop making middle-brow high-concept comic adaptations! – as well as the frankly disgusting attitudes taken by comic fans every time a creator actually tries to assert any of their rights). That kind of thing is why I resist being referred to as a geek.

    But what’s more worrying is the Manifest Destiny aspect of this. Singulatarians (for the most part) believe this *has* to happen. Ray Kurzweil draws his straight lines, and they keep going on forever, so the Singularity *must* happen. Tipler is even firmer on this point – he argues that the Omega Point is a boundary condition for the wave function of the multiverse (this means it must happen by logical necessity, and if it didn’t the universe would cease ever to have existed). The Singularity is inevitable.

Now, this kind of thinking is very popular among extremists of both left and right – come the Glorious Revolution, all will be right/the Invisible Hand of the market will fix everything. The attraction in both cases is that it allows the privileged not to feel bad about their privilege. If the Worldwide Dictatorship of the Proletariat *HAS* to happen, then there’s no point trying to make poor people’s lives any better now – in fact it might be a bad thing, because it’ll discourage the proletariat from realising their oppression and rising up. Best just buy a new TV rather than help the poor. And if you’re on the right, it’s even easier – you’ve got your money because that’s the most efficient possible allocation of those resources. Helping poor people would actually be *inefficient* and in the long run would hurt them! Best just buy a new TV…

    This is the natural political result of *any* kind of predestination, and explains why, for example, it was so easy for Christopher Hitchens to switch from being a Trotskyist SWP member to being an adviser to the Bush White House (in fact a huge number of neocons had previously been on the hard left).

    It also explains why the Singularity is so beloved of tech billionaires – they’ve become billionaires as a necessary step to the Golden Age, and there’s no need for them to give their money to the poor or anything like that, because the Singularity will raise *everyone* to their level! In fact by keeping their money, and investing in tech companies, they’re helping the poor far more than redistribution could! Of course, it helps that people like Kurzweil think the current set-up is just great – Kurzweil actually says, in his book, that he believes it will soon be possible for us to create machines that will literally make *any physical object you want* – program it to make a steak, or a perfect atom-level copy of the Mona Lisa, and it will. He thinks that it will be important to protect the intellectual property rights of those who write these programs!!!

    But it’s also a very, very dangerous attitude.

    Because in so far as Kurzweil’s lines going off to infinity, measuring information processing over time, have any value at all, they’re also graphs of energy use (there is essentially a linear relationship between the two). And energy use is a problem.

There are a whole host of environmental and economic disasters that look set to hit over the next century or so – from overpopulation leading to massive food shortages, to global warming, to peak oil, to antibiotic-resistant bacteria – and it is entirely possible that human civilisation as we know it will end in the next century. Even if you believe each of these individually is low-probability or soluble, the combined risk is pretty high.

    But if you *know* – because you can draw a straight line – that all the world’s problems will be solved Real Soon Now – then you don’t need to do anything about these problems yourself, because it’ll all be fine.

    Not only that, but you’re not going to support any efforts by anyone else to mitigate these risks, because it’s a waste of resources. You won’t vote for politicians who want to fix these problems, because you don’t believe that the problems are real.

    (I am going to exempt Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence here. He sees the creation of a singularity of his favoured type as a way to avoid existential risk for humanity, and has decided to try to do this himself because he sees it as a moral duty to do something about it. He’s got an ego the size of the universe, some rather messianic beliefs about himself, and he hasn’t backed up his talk with any actual measurable action, but compared to the rest of these people he’s a model of sanity and clear-headedness, which is why I occasionally link his group blog Less Wrong here).

The Singularity may well happen at some point – the Singulatarians may be right and I may be wrong. But even if it doesn’t, they’re right when they say that life in a hundred years will be unimaginably different from how it is today. The question is whether it will be unimaginably better or unimaginably worse. And that is going to be decided by the actions of every person alive today, and the decisions they make. If we manage to find solutions to our problems, we may well end up with something like the Singularity, eventually, but *we need to work toward the solutions first*.

And for a bunch of rich, technically skilled people with access to the media, politicians and business leaders, to abdicate their responsibility to make those decisions and find those solutions, in favour of the worst kind of Panglossianism, is not only morally dubious but *dangerous* – in a very real sense they’re betting the earth that they’re right, and it’s not theirs to bet.

    Geeks Dig Metaphors: The Technical Problems With The Singularity

    Posted in computing, science by Andrew Hickey on August 30, 2010

    Back to introduction

    I have come to the conclusion that anyone who talks about how easy it’s going to be to simulate a human brain in a computer either understands computers but doesn’t understand biology, or doesn’t understand computers but understands biology. I’m currently studying for a Master’s in Bioinformatics, so I have an equal lack of understanding of both subjects.

The argument seems to be “the genome is like a computer program – it contains all the information needed to build a person. The genome’s only a few gigabytes long, so the Kolmogorov complexity of the ‘create a brain’ program must be less than that. We have computers that can run programs that long, so it’s only a matter of time before we can run a ‘create a brain’ program on our computers”.

Now, firstly, I simply don’t believe that one can reduce the problem in this way. Intuitively, it doesn’t make much sense. I have a little over 20GB of Beach Boys MP3s/FLACs on my hard drive. They couldn’t be compressed much more than that without loss of information. The human brain is supposed to be the most complex known object in the universe. I simply don’t believe that the most complex known object in the universe has a lower Kolmogorov complexity than the surf-pop harmony stylings of the Wilson brothers. I mean, I’ve not even counted my Jan and Dean MP3s in there!
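
Incidentally, there’s a quick way to make that intuition slightly more rigorous: the compressed size of some data is a crude upper bound on its Kolmogorov complexity. A sketch in Perl, assuming the Compress::Zlib module is installed (the random bytes here just stand in for anything incompressible):

#!/usr/bin/perl
# A compressed size is a (crude) upper bound on Kolmogorov complexity:
# if compression can't shrink the data much, there's unlikely to be a
# much shorter program that regenerates it.
use strict;
use warnings;
use Compress::Zlib;

my $data       = join '', map { chr int rand 256 } 1 .. 10_000;  # 10KB of random bytes
my $compressed = compress($data, 9);                             # maximum compression level

printf "original: %d bytes, compressed: %d bytes\n",
       length($data), length($compressed);
# Random bytes barely compress; highly patterned data would shrink
# dramatically. The MP3 point above is the same argument in reverse.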

But let’s ignore the intuitive problems, and also ignore various practical issues like epigenetic inheritance, and assume for the moment that the human genetic code is a computer program, and this 3GB (or “3/4 of the size of the directory which just contains my very favourite Beach Boys live bootlegs” in human terms) program will, if run on the correct hardware, produce a human body, including the brain. Here is where we hit the problem with the concept of Kolmogorov complexity, so freely bandied around by a lot of these people.

Basically, Kolmogorov complexity is a measure of the size of the smallest computer program that can produce a given output. For example, say we want to run a program that outputs “Hello World!” and a line break. In Perl (the language with which I’m most familiar) this would be:

#!/usr/bin/perl
print "Hello World!\n";

That’s 39 bytes long. This means that we know the Kolmogorov complexity of a Hello World program must be 39 bytes or less. It might be possible to do it in fewer bytes in some other programming language, but we know that any program more than 39 bytes long isn’t the shortest possible program that does that.

Now, the reason Kolmogorov complexity is a useful measure is that it doesn’t vary *much* between languages and platforms. Say you have a program written in perl, but for some reason you want to run it in Java. ‘All’ you need to do is wrap it in another program, which converts perl to Java, so if your ‘convert perl to Java’ program is, say, 1.2 megabytes (that’s the size of the /usr/bin/perl program on my GNU/Linux system, which converts perl to machine code, so that’s a reasonable size), the length of the shortest Java program to do that thing must be at most the length of the perl program plus 1.2 megabytes.

As program size gets bigger, that ‘plus 1.2 megabytes’ gets swamped by the size of the program, so Kolmogorov complexity is a very good measure of the complexity of *getting a particular computer to perform a task*.
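
To make that ‘gets swamped’ point concrete, here’s a back-of-the-envelope sketch, reusing the (assumed) 1.2 megabyte translator size from above:

#!/usr/bin/perl
# The fixed translation overhead shrinks into irrelevance as programs grow.
use strict;
use warnings;

my $overhead = 1_200_000;   # assumed size of the perl-to-Java translator
for my $size (1_000, 1_000_000, 1_000_000_000) {
    my $bound = $size + $overhead;
    printf "%13d-byte program -> bound of %13d bytes (%.4f times the original)\n",
           $size, $bound, $bound / $size;
}

For a kilobyte-sized program the bound is over a thousand times the original; for a gigabyte-sized one it’s a rounding error. That’s why the measure is effectively language-independent for large programs.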

    But the problem is that it doesn’t take into account the complexity of *the hardware performing the task*, so it’s not very good when moving between vastly different types of hardware.

Take a jukebox as an example. If you want it to play, say, Good Vibrations by the Beach Boys, you look for the code for that (say, A11) and punch that into the jukebox, which executes that ‘program’. Now, that shows that the Kolmogorov complexity of the ‘play Good Vibrations’ program – relative to that jukebox – is at most a couple of bytes.

But if I want my computer to play Good Vibrations, the simplest program that will do it is ‘playsound /media/disk/Beach\ Boys/Smiley\ Smile/Good\ Vibrations.mp3’ – that’s thirty-five times the length of the jukebox ‘program’. But that’s not all – you have to count the size of the ‘playsound’ program (15 kilobytes) and the MP3 file (3.8 megabytes). Moving our ‘program’ from the jukebox to my computer has made it over a million times as long, because we’ve had to take information that was previously in hardware (the physical Beach Boys CD within the jukebox and the ability of the jukebox to play music) and convert it into software (the MP3 file and the playsound program).
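
Here’s the arithmetic, for anyone who wants to check it – all the sizes are the ones quoted above, and the 15 kilobyte and 3.8 megabyte figures are from my own system, so treat them as illustrative:

#!/usr/bin/perl
# Totting up the jukebox-versus-computer comparison using the sizes above.
use strict;
use warnings;

my $jukebox   = length 'A11';             # the jukebox 'program'
my $command   = length 'playsound /media/disk/Beach Boys/Smiley Smile/Good Vibrations.mp3';
my $playsound = 15 * 1024;                # the playsound binary, ~15KB
my $mp3       = int(3.8 * 1024 * 1024);   # the MP3 itself, ~3.8MB

my $computer = $command + $playsound + $mp3;
printf "jukebox: %d bytes; computer: %d bytes - about %d times longer\n",
       $jukebox, $computer, $computer / $jukebox;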

    Now, I never normally talk about my day job here, because I don’t want to give anyone an excuse to confuse my views with those of my employers, but it’s almost impossible not to, here. The lab in which I work produces a piece of software which allows you to run programs compiled for one kind of computer on another kind. The product I work on allows you to take programs which are compiled for computers with x86 processors (such as, almost certainly, the one you’re using to read this) that run GNU/Linux, and run them on machines which have POWER chips, which also run GNU/Linux.

    Now, this program took many person-decades of work by some very, very bright people, and a huge amount of money, to develop. It’s a very complex, sophisticated piece of software. Every time even something relatively small is changed, it has to go through a huge battery of tests because something put in to make, say, Java run faster might make, for example, the Apache web server break. (This is lucky for me as it means I have a job). Even given this, it’s still not perfect – programs run slower on it than they would on an x86 box (sometimes not very much slower, but usually at least a bit slower), and there are some programs that can’t work properly with it (not many, but some). It’s astonishingly good at what it does, but what it does is, by necessity, limited. (To the extent that, for example, the programs still have to be for the same GNU/Linux distro the POWER machine is running – you can’t use it to run, for example, a Red Hat 5 program on a Red Hat 4 box).

    Now, this hugely complex, sophisticated, expensive-to-create program converts from one chip type to another. But both chips are Von Neumann architectures. Both use the same peripheral devices, and the same interfaces to those devices. Both are designed by human beings. And the people writing that program have access to information about the design of both types of chip, and can test their program by running the same program on an x86 box and on a POWER box with their program and seeing what the result is.

Now, when it comes to the ‘program’ that is the genetic code, none of that’s true. In this case, the hardware+operating system is the cell in which the genetic code is embedded, plus the womb in which that cell gets embedded, the umbilical cord that brings it nutrients, the systems that keep the mother’s body temperature regulated, the hormone levels that change at different times… Basically, instead of having two chips, both of which you can examine, and leaving everything else the same while trying to get your three-gig program to run (which I know from experience is in itself a massive problem), you have to simulate an entire human being (or near as dammit) in software in order to run the genetic code program – which we’re running, remember, *in order to simulate part of a human being!*

    And you have to do that with no access to source code, with no way of testing like-for-like (unless there are women who are lining up to be impregnated with randomly-altered genetic material to see what happens), and with the knowledge that the thing you’re creating isn’t just a computer program, but at least potentially a sentient being, so a coding error isn’t going to just cause someone to lose a day’s work because of a crash, but it’ll give this sentient being some kind of hideous genetic illness.

    And what are you left with, at the end of that effort? A baby. One that can’t interact with the physical world, and that is many, many times slower than a real baby. And that will undoubtedly have bugs in (any computer program longer than about thirty lines has bugs in). And that requires huge amounts of energy to run.

    I can think of more fun, more reliable ways of making a baby, should I happen to want one.

    But the point is, that even that would be a phenomenal, incredible achievement. It would dwarf the Manhattan Project and the moon landings and the Human Genome Project. It would require billions in funding, thousands of people working on it, many decades of work, and several huge conceptual breakthroughs in both computer science and biology.

    Which is not to say that it’s impossible. I’ve never seen a good argument against it being theoretically possible to create artificial general intelligence, and I’ve not seen any convincing ones against the possibility of uploading and emulating a particular person’s brain state. And assuming that technology continues to improve and civilisation doesn’t collapse, it may well happen one day. But people like Kurzweil arguing that the relatively small size of the genome makes it a trivial problem, one that will be solved in the next couple of decades, are like those people who drew graphs in 1950 showing that if top speeds achieved by humanity carried on increasing we’d be at the speed of light in the 1990s. The first example of that I saw was in Heinlein’s essay Pandora’s Box. The retro-futurology page has an examination of some of the other predictions Heinlein made in that essay. Suffice it to say, he didn’t do well. And Heinlein was far more intelligent and knowledgeable than Kurzweil.

    And of course, hidden in that paragraph above is a huge assumption – “assuming that technology continues to improve and civilisation doesn’t collapse”. It’s that, in part, that I want to talk about in the next part of this, coming up in a few hours.

    Geeks Dig Metaphors: Introduction

    Posted in computing, science by Andrew Hickey on August 30, 2010

    A metaphor is a glorious thing,
    A diamond ring,
    The first day of summer
    A metaphor is a breath of fresh air,
    A turn-on,
    An aphrodisiac

Chicks dig, dig, d-i-g, dig, dig metaphors
– Sparks

Metaphors are scraps of resemblance that tie us down. – Ted Nelson

    Let’s talk about the Singularity.

The Singularity is the closest thing to a religion for a large number of people who self-identify as ‘geeks’ at the moment. It’s also an idea which has been widely publicised in the last couple of weeks with the Singularity Summit having achieved some mainstream media coverage. The Singularity also *matters*, because it’s something that is believed in, to a greater or lesser extent, by a large number of tech billionaires – people who have actual influence and power.

The Singularity is a term taken from a science fiction story, and has a variety of meanings – almost one for every ‘Singulatarian’, but roughly speaking they all believe that “The Unity of Nanoscience, Biotechnology, Information Technology and Cognitive Science” (to borrow from the title of a book I’m currently reading) will lead to humanity becoming gods.

From Saturday Morning Breakfast Cereal by Zach Weiner

    That’s not an exaggeration. The basic belief of Singulatarians is that at some point in the future, we will have enough knowledge of biology, computing and nanotech that we can make ourselves immortal, upload our minds to tiny nanotech computers which will then take over the entire universe, turning it into computronium. We would then become immortal omniscient gods in a virtual reality indistinguishable from the real thing, except that it would be geared entirely to our personal happiness. There is another theory which states that this has already happened.

    The timescale of this is, of course, disputed. At one end is Ray Kurzweil, stupid person and author of the moronic The Singularity Is Near, who argues due to his ability to draw a straight line that by the year 2010 “computers will disappear as distinct physical objects, with displays built in our eyeglasses and electronics woven into our clothing” (three months to go. I hope I can afford new glasses and clothes before Christmas) and that we will all become immortal gods within his lifetime. Kurzweil is regarded as one of the greatest futurists of our time, and billionaire technologists fawn on his every word.

At the other extreme is eccentric right-wing astrophysicist Frank Tipler, who also thinks this will happen, but believes it will take about fifteen billion years for us to become immortal, omniscient and omnipotent. Also, while Kurzweil bases his argument on drawing lines (and not even very accurate lines – I’m not usually a particular fan of P.Z. Myers, but I was very pleased to see that blog post from last year, because it saved me having to write the exact same thing), Tipler’s argument (made in his fascinating The Physics Of Immortality (apparently out of print) and his frankly barking Physics of Christianity, but first put forward in The Anthropic Cosmological Principle, which is also the most sensible exposition of it) is based on known physics and makes testable predictions. However, unlike Kurzweil, Tipler also identifies the end result of his speculations with the Christian God, so he’s regarded as a crank, rather than an eminent futurist. (He *is* a crank, as anyone who reads his recent work, like this paper arguing that Obama is evil because of the existence of the luminiferous aether and the arguments put forward against abortion by the film Starship Troopers, will recognise, but he’s less so than Kurzweil). Tipler also argues that his Singularity will be an actual physical singularity – the universe will be condensed to a single point he calls the Omega Point.

Now, these ideas are actually inordinately appealing, to me at least. I like the idea of becoming an immortal, omniscient superintelligent superbeing living in a universe which exists solely for my pleasure. That sounds like a good thing to me. And I’m probably far more likely to want the Singularity to be true than the average person, because I’m not especially attached to my body. I’m balding, asthmatic, dyspraxic, overweight, have migraines and mild psoriasis, and have various sensory issues, the combination of which means that most of the time the closest I get to physical pleasure is ‘in less discomfort’. Upload my mind into a computer capable of processing text and music, and you’ve kept 95% of my joys in life while losing 100% of the unpleasantness. That seems like a good deal to me – especially if that computer can then simulate my mind in any body I want.

Unfortunately, I see three problems with the Singulatarian view – one technical, one political, and one philosophical. All of these are surmountable, but there seems to be little evidence that any of them *will* be surmounted – or that any Singulatarians even consider these to be real problems.

    This post is getting much, much longer than I thought it would, so I’m breaking it into several sections. I’m still writing the other sections as I post this, but they’ll be posted as I get them done. I won’t be replying to comments til they’re all complete.

    Linkblogging For 26/08/10

    Posted in linkblogging by Andrew Hickey on August 26, 2010

    This month’s Superman/Batman has a good bit in it. I know, I’m as surprised as you. But here it is – The Joker and Lex Luthor as Calvin and Hobbes

    I don’t know if I’ve linked to him before, but Ben Gunn is a prisoner who’s spent thirty years in prison for a murder he committed when he was a teenager. He renounced all violence, decades ago, and is now the only blogging prisoner in the UK, and is also studying for a PhD in criminology. He needs money to pay his fees this year. As someone who finds it hard enough to pay for my own Master’s degree while working, I have a huge amount of sympathy for someone trying to study with no source of income at all, and would urge anyone who can afford it to help him.

    John Harris is one of the first sensible Labour supporters I’ve seen talking about the Lib Dems post-coalition.

    James Ward has tried to change his name to James Ward

    And hating cliches and special pleading is clearly the last acceptable prejudice
