Singularity Shadow


Russell Blackford

(first published 1998 in Quadrant magazine)

In arguing his case for human exploration and colonisation of Mars, astronautic engineer Robert Zubrin makes the surprising claim that we have seen, in recent decades, a decelerated rate of technological innovation. He says, "the rate of [technological] progress within our society has been decreasing at an alarming rate." To correct this will require a grand technological adventure.

A very different view from Zubrin's emphasises the extraordinary progress in the recent development of biotechnology and computer power. In The Spike: Accelerating into the Unimaginable Future, Damien Broderick integrates and develops a range of information and views that suggest we are headed, not necessarily for Mars, but for a set of technological changes on Earth so dramatic that we cannot imagine them or speculate beyond them. In that vision, far from going through a period of slowed innovation, we are developing technologies that soon will transform our environment and ourselves in unprecedentedly fundamental ways. Presented as a graph of time and change, technological progress is about to spike upwards toward infinity.

To illustrate his thesis of faltering technological innovation, Zubrin invites us to compare the last thirty years (looking back from 1996) with the previous thirty, and the thirty before that. He portrays the years 1906 to 1936 as a time in which "the world was revolutionized" by electrification, the spread of telephones and radio broadcasts, the early development of motor cars, motion pictures and aviation. From 1936 to 1966, the world changed further with "communication satellites and interplanetary spacecraft; computers; television; antibiotics; nuclear power", with impressive rocket boosters and aircraft. Then comes a key passage:

        Compared to these changes, the technological innovations from 1966 to the present seem insignificant. Immense changes should have occurred during this period but did not. Had we been following the previous sixty years' technological trajectory, we today would have videotelephones, solar-powered cars, maglev (magnetic levitation) trains, fusion reactors, hypersonic intercontinental travel, reliable and inexpensive transportation to Earth orbit, undersea cities, open-sea mariculture, and human settlements on the Moon and Mars. Instead, today we see important technological developments, such as nuclear power and biotechnology, being blocked or enmeshed in controversy--we are slowing down.
    Some of that controversy springs from irrationalist moralising. But nuclear power? If the proliferation of nuclear power has been slowed, that is because many of us see an immense risk to our environment posed by reactors with unreassuring safety standards—and see, too, a connection between the availability of nuclear fuels and that of high-grade fissile material for warheads. There are strong arguments to replace pollutive coal-fired power stations with nuclear stations, but the cautionary responses are not merely irrational, as Zubrin seems to think. Nonetheless, he raises a striking and legitimate question: Is the pace of innovation slowing?

    First of all, Zubrin is a passionate and committed advocate, one who tends to fudge. Motor cars and aircraft, electricity and the internal combustion engine were invented before 1906, and what happened thereafter was a tremendous development and proliferation of those technologies. To play fair, we need to include the development and proliferation from 1966 to 1996 of technologies that were initiated earlier. Once that is acknowledged, it seems apparent that electronic and biomedical technologies have produced immense change in recent decades.

    This leads quickly to a related criticism of Zubrin: his fixation on technologies that have a direct macro-level presence in our environment, big things that become part of the landscape, like air and space vehicles, movie screens, lit-up cities, nuclear reactors. Although he does mention antibiotics and expresses regret at the controversy enmeshing modern biotechnology, he largely forgets technologies that are hidden away in the recesses of our homes, offices or vehicles, or our bodies themselves. Most glaringly, he omits to mention that the contraceptive pill was invented in the early 1960s (analogously to powered air vehicles in the early 1900s). Thereafter, the rapid development and distribution of powerful contraceptive technology largely overturned the sexual mores of the Western world.

    Leaving sex out of it, a reading of Peter Singer's Rethinking Life and Death provides a nice corrective to Zubrin's account; Singer shows how modern biomedical technology has transformed clinical practice, rendering our traditional ideas of life and death themselves problematic, and challenging us to recast some of our most basic ethical principles. It is likely that the next thirty years from the point when Zubrin was writing—the years from 1996 to 2026—will see the development of unprecedentedly powerful biotechnologies based on the human genome research that has already commenced.

    Unless (or until) we use our genetic knowledge to morph our bodies in outwardly spectacular ways—growing ourselves fins, spines, additional limbs, or modified skins of some kind—advanced biotechnology will not alter the streetscape, let alone the landscape. But it would be naive to deduce from this that its effects will be isolated or trivial.

    Computers are assigned by Zubrin to the period from 1936 to 1966, notwithstanding that the development of computer technology by the mid-60s was in what now seems a primitive state. If the development of electricity and the internal combustion engine is to be assigned to the 1906-36 period, the development of digital information technologies must fairly be assigned to 1966-96. During that time, computers became faster, smaller, more flexible, more powerful—all those things, and almost ubiquitous. We are not confronted with giant mainframes lumbering about the streets like dinosaurs, but the tools we use are computerised, and the world is linked and moulded by computerised systems of transport and communication.

    Zubrin's account of what should have happened in recent decades lists the following among the technologies that were never developed: "videotelephones, solar-powered cars, maglev… trains, fusion reactors [and] hypersonic intercontinental travel". But such a list contains preconceptions of what the future (seen from the mid-1960s) should have been. Perhaps some of these marvels will yet come to pass. Solar-powered cars would make sense environmentally, and the technology is improving to an impressive degree. More apparent since the 60s, however, has been an enormous increase in the detailed sophistication of motor cars from the uncomfortable, unreliable creatures of thirty or forty years ago. Modern cars, with their wonderful electronics and materials and their built-in computer technology, show remarkable consideration for our needs, sometimes even being programmed to study and adjust to our driving habits. The damn things are almost alive, dominating us subtly, like lovers or designer drugs.

    As for videotelephones, the telephone is not an extension of our capacity for direct, face-to-face communication, an extension of our presence. It is actually an alternative, a device that makes intimate communication possible, not only when we are physically separated, but when physical presence would be inconvenient. With a phone, we can communicate early in the morning before making ourselves appear human, during the day at the office when we may be bad tempered and red-faced, late at night from the informal comfort of our bedrooms.

    It is likely that prototype videotelephones did not catch on for these sorts of reasons. Anyway, if I can project my presence into your home or office or vehicle, I would rather wait until the technology lets me provide some computer-massaged idealisation, rather than the real me, in whatever irrelevant state of dress or undress, activity or sloth I find myself. Video conferencing finds uses in situations where telepresence is relevant and desirable, but that is just the point: technologies find their own uses. In retrospect, it is unsurprising that the miniaturisation of the telephone and its integration with fax and the Internet have proved to be higher priorities than augmenting it with the dubious convenience of video.

    As we work through the list, it becomes apparent that we have been developing our technologies to make our environments and even our bodies more convenient, comfortable, conformable to our wills. We control our fertility by various effective methods that are now available, shaping our lifestyles accordingly. In the home, the car, the factory and the office, we have more and more sophisticated electronics. Often we ask ourselves how we coped at all without word-processing, photocopying, and fax machines—and without scanners, PowerPoint projectors, e-mail and the World Wide Web. Job classifications and duties have become unrecognisable from the perspective of the mid-60s. Industry is transformed by computer-aided design and manufacture. Technological innovation has taken paths different from what would have been expected if, as Zubrin puts it, we had "been following the previous sixty years' technological trajectory".

    Damien Broderick's list of forthcoming innovations is very different from those mourned by Zubrin, and far more dramatic in its potential to change absolutely everything. Broderick mentions the following: "full-scale, sensory-immersion virtual reality… molecular manufacturing (also known as nano-technology)… genuine artificial or machine-augmented minds… leading swiftly to superintelligence"—all this plus advanced biomedical science that will provide a base for cloning, genetic enhancement and extreme longevity.

    Given a sharper focus, this viewpoint foresees the emergence of superintelligent Powers at some point in the first half of the twenty-first century. At that stage, our own creations will equal us, and then surpass us, so that all bets are off about the more distant future. It may be our termination date, the end of humankind, at least as we know it. A point of infinite possibility is reached, a point of "technological Singularity", as Vernor Vinge christened it in a celebrated 1993 NASA symposium address. Vinge's expectation could be met by developments in any of several biomedical or digital technologies, but it is most firmly based on the convergence of two simple facts: the information processing power of a human brain is finite; that of computers is increasing, seemingly without limit. If we had sufficiently powerful computer hardware, we could replicate, or at least simulate, a brain's activities, and there seems no reason why we will not design adequate hardware for the job—and sooner rather than later.

    Simulation is not the same as replication. After all, a computer model of a tornado, however accurate and detailed, does not possess the twister's real-world ability to uproot trees, hurl motor cars, destroy houses, take human life in its path. In his comprehensive study The Conscious Mind, however, David Chalmers argues with rigour and conviction that, in the case of artificial consciousness, simulation is replication. Chalmers' argument is difficult, but it is basically that some properties are organisationally invariant across underlying systems, whether these be physical or merely computational. Mental properties may turn out to be organisationally invariant in this sense; indeed, there seems to be no reason for a mind to "need" one underlying system rather than another in the sense that the tornado "needs" physical air to cause damage in the physical world.

    Writing in 1997, Broderick refers to an IBM computer capable of carrying out three trillion floating-point operations per second as "on the drawing boards". In attempting to provide a reasonable estimate, based on current neurological knowledge, of what would be required to replicate a human brain's activities, Broderick draws on an analysis by the celebrated mathematical physicist Frank Tipler. In an effort to be conservative about the possibility of achieving the goal, Broderick is prepared to assume that the brain is able to process information at the rate of one hundred thousand trillion operations per second, or one hundred thousand "teraflops". On this assumption, we would need to increase the processing power of our best computer by a factor of thirty-three thousand if we hope to "run" a human-level mind on it.
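    The factor of thirty-three thousand is simply the ratio between the two figures Broderick cites. A quick check (the variable names are mine, not Broderick's):

```python
brain_teraflops = 100_000   # Broderick's deliberately high estimate of brain power
ibm_teraflops = 3           # the IBM machine "on the drawing boards" in 1997

factor = brain_teraflops / ibm_teraflops
print(round(factor))        # 33333 -- the "thirty-three thousand" of the text, rounded
```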

    The thought of producing a computer thirty-three thousand times more powerful than anything now available suggests that a huge gulf exists between the current abilities of digital technology and that needed to reach the projected Singularity point. If this figure has any validity, science fiction visions of current computers taking over the world are—well, "just science fiction", to use a phrase beloved of unimaginative journalists.

    But Broderick reminds us that there's a catch in this argument, for numbers such as thirty-three thousand can be gobbled up quickly by a relatively small number of doublings. A simple calculation shows that, if the processing power of our largest available computers were to double every year, we would reach the point of having a computer thirty-three thousand times more powerful than any currently available… in just over fifteen years. If each doubling takes two years, we get there by 2027. Using information from robotics guru Hans Moravec, of Carnegie Mellon University, Broderick asserts convincingly that computer power per dollar is doubling every eighteen or even every twelve months. If we are prepared to adopt the concept of cyberbang per buck as a reasonable proxy for our ability to build more and more powerful hardware, then even the more conservative figure is sufficient to produce a startling result: it suggests that we will have very advanced hardware, powerful enough to run a human mind on, by about the year 2020. Immediately after that, the possibilities are endless, we indeed reach superintelligence, the human mind is obsolete—it's Singularity time.
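    The doubling arithmetic is easy to verify; a minimal sketch, assuming a 1997 starting point and treating thirty-three thousand as the target factor:

```python
import math

target_factor = 33_000                 # the gap between 1997 hardware and the brain

doublings = math.log2(target_factor)   # about 15.01 -- "just over fifteen"

# Years needed to close the gap under annual, 18-month and 2-year doubling periods
for period in (1.0, 1.5, 2.0):
    print(period, round(1997 + doublings * period))
```

Annual doubling lands in 2012, the 18-month period in about 2020, and the two-year period in 2027, matching the figures in the text.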

    Actually, Broderick's assumptions appear to be maximally conservative when we return to the analysis by Tipler on which they are based. Tipler refers to various estimates of the brain's capacity for data storage and information processing, based on a seemingly reliable estimate that it contains something of the order of ten billion neurons. Informed estimates of the brain's information processing power vary by several orders of magnitude: from about ten billion operations per second ("ten gigaflops") to the hundred teraflops used by Broderick, which Tipler has—in his turn—adopted from Jacob Schwartz as an outside limit. Tipler himself prefers a figure of ten teraflops, taken from an analysis by Moravec, and based upon the latter's "careful analysis of the information processed in the retina and the optic nerve". If this estimate is accurate, the most powerful computers are already within an order of magnitude of the processing power of the human brain. Thus, if Moravec and Tipler are correct, the hardware capacity to run a human-level mind-program is almost available even now and will ultimately be available not only in high-powered laboratories but also in the devices sold to ordinary households.
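    On Tipler's preferred figure the gap all but vanishes; again, a quick check (figures from the text, variable names mine):

```python
import math

brain_teraflops = 10     # Tipler's preferred estimate, via Moravec's retina analysis
machine_teraflops = 3    # the most powerful machine then on the drawing boards

gap_in_orders = math.log10(brain_teraflops / machine_teraflops)
print(gap_in_orders)     # about 0.52 -- comfortably within one order of magnitude
```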

    More recently, Moravec has re-analysed the relationship between the capacities of the human brain and computer hardware in an article published on the World Wide Web in the electronic magazine Journal of Transhumanism. Measuring information processing speed in millions of instructions per second, or MIPS (rather than the more sophisticated floating-point operations per second used by Tipler), Moravec estimates that mimicking overall human behaviour would require a universal computer capable of 100 million MIPS, while the most powerful supercomputers of 1998 are capable of a few million MIPS, easily placing them within two orders of magnitude of human brainpower.

    The range of estimates as to how much general computer power is required to match the brain's information processing power is itself a reason for some scepticism: our understanding of the brain and its workings is at an early stage, and any estimate we are offered may be grossly unreliable. This is a good reason to be cautious about the relative capabilities of computers and brains, as is Broderick in using the most liberal of the estimates of brainpower discussed by Tipler. Nonetheless, the brain is physically finite and must have some computational limits. Even if a figure highly unfavourable to artificial intelligence researchers and prophets of a technological Singularity is adopted, it is likely that sufficiently powerful general computers will be available in research laboratories by 2020. The more enthusiastic assumptions adopted by Moravec suggest that home computers of sufficient power will be available in the 2020s.


    Will the end of human history as we know it come about in this way, with the supersession of the brain? I can see no decisive counter to the views of Vinge, Moravec and Broderick.

    It may be suggested that there are natural limits to how powerful we can make a computer, that we will never be able to manufacture hardware superior to the brain's wetware, evolved over hundreds of millions of years. In that case, the steep curve of progress in the development of computer processing power will start to level out. Yet, Moravec's estimates suggest that we are already close to the goal in the raw information processing power of advanced scientific computers. Furthermore, there is no indication that fundamental limits are being approached. Relativity theory tells us that nothing that physically moves inside a computer can travel faster than the speed of light, which implies one physical limit, but this is not an issue: there is no suggestion of building gadgets that sprawl across the landscape, hindered by time-lags as all-too-slow electrons cover the huge internal distances of their circuitry. As new forms of hardware become increasingly miniaturised, the same amount of processing power occupies less and less physical space. The capabilities of current technologies may run out, but new ones will be developed, based, perhaps, on carbon nanotube components, single-molecule switches, or quantum-interference circuitry. There is no need to fear (or hope) that we will hit a fundamental limit to information processing capability before we have the physical capacity to replicate a human mind on computer hardware, and then to run a superintelligence, a Power.

    We should assume that the information processing abilities of twenty-first century computers will be, for all practical purposes, infinite, and then work out the implications. Before we come up against any physical limits, and as we approach a situation where we have all the hardware grunt that we need for any conceivable purpose, we will encounter problems of choice: questions about which uses of twenty-first century computers are possible, convenient and ethical. Within years, extraordinarily advanced hardware will be foreseeable as imminent—no longer dismissed as "science fiction". This will be well before the year 2020. Before we build a computer capable of running a human-level mind, we will be asking ourselves questions such as why we want a device that powerful.

    There will, indeed, be uses for the very advanced hardware of the early twenty-first century that have nothing to do with superintelligence, consciousness replication, or the abolition of humankind. It will be required for modelling highly complex systems, perhaps for densely-textured economic or climatological analyses or for tasks involving better-than-human levels of pattern-recognition. Perhaps, though, we will have hardware powerful enough for any purpose of this character long before we have the software abilities to use it for ultra-detailed climatological modelling or superhuman pattern judgments (such as for mineral exploration), let alone before we are tempted to run minds or create Powers. So what are the implications of this?

    Thinking about Zubrin's examples, it seems that technological innovations take directions that cannot readily be guessed in advance. Technology finds its own uses as it develops, uses that meet people's convenience. As innovations develop and proliferate, they feed back into the larger structures and functions of society, reshaping it in the process and altering our ideas of what is, after all, expedient, convenient, needful.

    As very advanced hardware appears on the drawing boards, then in labs and offices, uses will be found for it, but no one can foretell what events will actually unfold. To date, we have not built more and more powerful motor cars, faster and faster commercial aircraft. We have been interested in sophistication and elegance, in comfort and convenience; high-tech has penetrated into the details of our environment, our tools, our bodies. Perhaps we will use very advanced computer hardware in some analogous manner that I cannot imagine, without ever reaching the Singularity point. After all, the existing applications of powerful computers include chess-playing and the creation of detailed animation for use in cinematic special effects.

    Clearly, though, we will find ourselves thinking about the potential of a technological Singularity, worrying at it, some of us taking steps that others consider dangerous or blasphemous. We must assume that a time will come, seemingly in a couple of decades—even less according to Moravec—when we have the hardware capacity to implement a conscious mind, if that is what we want. As that time approaches, it will cause us angst.

    I say "the hardware capacity" advisedly, because there is also the question of software capabilities. Even if we have a computer able, in terms of sheer processing power, to replicate a human brain's functioning, will we be able to program such a device to perform to its limits? This is more problematic. Tim van Gelder, a University of Melbourne philosopher, has made the natural point that we are nowhere near translating the informal kinds of thought involved in human common sense into formal systems that can be manipulated by computer programmers. To take this further, it is notable that our rate of progress in understanding basic concepts to do with the human mind's functioning—such concepts as meaning and interpretation—gives no cause for optimism that we will ever be able to formalise the full repertoire of human thinking. In that case, a programmer may never be able to "write" a legitimate "mind program" to be implemented on the computers of the future.

    Worse still, consider what is involved in attempting to replicate the basic experiences of pleasure and pain (and their near-relatives such as hope and fear, joy and despair, satisfaction and frustration) by implementing a computer program. It is one thing to claim that an advanced computer, as it runs an appropriate program, may be intelligent, or even superintelligent, in its purposeful problem-solving abilities, like IBM's chess-playing Deep Blue with its capabilities extended to the nth degree. The issue is more challenging once we try to imagine what kind of internal structure a mind program would need to have if its implementation were to involve the experience of actual pleasure, pain and related feelings. Our intuition may tell us that these qualities, at least, are uniquely a property of organic life-forms such as ourselves, that they are not organisationally invariant as suggested by Chalmers. In any event, devising a program that generates the phenomena of actual pleasure, pain and so on may simply be impossibly hard.

    If so, this might evoke a terrifying Terminator-style scenario. What moral status would we assign a Power with superintelligence across (let us concede) all the varieties of human thinking, but no actual feelings, positive or negative, no morally significant consciousness or experience? Such a being might be purposive, in that purposes might be ascribable to it in a way that is predictively powerful. Yet, any such purposes would be deeply alien to ours. Fortunately, it is difficult to see why such a being, particularly one with malevolent purposes, would ever come to be designed. That, at least, is not a likely path to humankind's abolition.

    Even if we could program an advanced computer with the experience of mental phenomena such as pleasure and pain, why would we ever want to do it? Of what convenience would it be to bring into existence a new kind of being that has feelings that we are not confident we share even with animals, at least when we look at creatures less complex than mammals? Such hedonic cyberbeings would compete with us for their preferences to be satisfied, challenging all of our ethical systems in so doing. Why would we want this?

    I would be interested in anyone's detailed scenario as to, first, what series of experiments would ever convince us that we had programmed an advanced computer for morally significant experience, and, second, how such experiments could be conducted ethically. Nonetheless, I hesitate to rule out the appearance of Terminators, Powers and other superintelligences, because it seems possible in principle for us to establish conditions whereby such beings might organise themselves into existence spontaneously, perhaps by an evolutionary process that we could set up within a very advanced computational substrate of some kind. The jury is out on these scenarios, and I do not wish to appear so sceptical that I deny the possibility of a technological Singularity, or that we may be entering its shadow.

    One solution to the problem of how we are to reach the technological Singularity (if this is, exactly, a problem) may be to try to "upload" the experiential contents of living human brains onto very advanced hardware, thus by-passing the programming challenges altogether. We would need to analyse the brain's structure in detail at the finest relevant level, then model it and "run" the result. Such directly uploaded minds might be apt for evolution or modification into Powers, though this also raises difficulties, for how would such modifications be made?

    In any case, this path to the Singularity would require a technological method of analysing the billions of neurons in a living brain and their inconceivable multitudes of synaptic links. While that might prove possible with some futuristic technique of brain imaging, it is difficult to imagine a method that could be both effective and non-destructive. We might have to destroy the brain's fragile structure with an electron or gamma-laser burst powerful enough to obtain the detailed three-dimensional image that we want, or to take the brain apart in microscopic frozen sections before we could manipulate it at a fine enough level. Why would I want something like this done to my brain just so that its neuronal connections and their activities could be replicated by a computer?

    Actually, I can think of some answers, as in my science fiction story "Lucent Carbon", but I find it difficult to imagine a realistic scenario in which what is described is truly a continuation of my life into a form of immortality. If such uploading ever became available, it would create controversy between those who welcomed it as a form of immortality and those who saw it as a combination of murder and blasphemous abomination, the replacement of human beings with digital monstrosities replicating (or merely simulating) their brain functions.

    Zubrin complains that biotechnology and nuclear power are currently "blocked or enmeshed in controversy". Such controversies can only increase. The day approaches when very advanced hardware is available, a substrate growing powerful enough to implement a human-level mind-program if only we want—and can work out how—to use it that way. There will be intense debate about this technology—even as near-infinite processing power is inevitably routinised by commercial applications. In the public understanding, awesome levels of computational power, attainable then attained, will be associated with other powerful technologies that will be seen approaching like a promise of heaven or hell: nano-technological manipulation and manufacture (in which individual molecules or atoms are moved about for the required results); extreme forms of virtual reality; sensational outcomes from genetic research. The current debate that Chicago scientist Richard Seed has provoked about human cloning—an idea that used to be "just science fiction"—is a taste of things to come. Some opposition to the twenty-first century technological possibilities will be rational and rigorous; some of it will be based upon ill-conceived applications of religious morality or on irrationalist secular ideologies. In any case, the possibilities will rise up like a gigantic spike blocking the future, to re-apply Broderick's metaphor, and they will cast a shadow across the future-present, across the world of 2005, 2010, 2015. In 1998, none of us can credibly foretell whether we will reach a technological Singularity point—yet very soon we shall be living in its shadow.

