22 March 1996

Difference engines

On the limits of ‘information’

“The sky above the port was the colour of television, tuned to a dead channel.” Thus begins William Gibson’s 1984 novel, Neuromancer, the book bashed out on a manual typewriter that invented the (at the time) über-futuristic concept of cyberspace. The semantic technology of that first sentence, though, was as prophetic as any of Gibson’s noir-ish, seminally influential images of the future man-machine relationship. It posits an observer who, more familiar with the sight of a television screen than that of the sky, looks to the former in trying to make sense of the latter: technology becomes the primary source of reference when dealing with an alienated nature. And now, more than a decade later, the primary weapon of high technology, as it seeks to inform our dreams for the third millennium, is not economic or social, but metaphoric.

The history of technology is also a history of language. (In the 17th century, the word “technology” was used exclusively to talk about grammar.) Sometimes technology offers up its own words for public consumption; sometimes it steals words that are already there. This has always happened. The word “cliché”, for instance, began life as French printers’ jargon for a stereotype plate: commonly-used phrases were kept cast as a single block, so that when such a phrase came up in a passage, it wouldn’t have to be set letter by letter. Meanwhile, the word “virus” has come to denote a self-replicating slice of machine code, programmed with malicious, data-destroying intentions; a “port” is the node of data exchange between two pieces of hardware.

Modern computer technologies have shown an accelerated yen for such metaphorization, ever since the “desktop” metaphor for the human-computer interface, now standard on all the world’s PCs, was developed at Xerox PARC in the 1970s. A character in Douglas Coupland’s winsome 1995 examination of the American software industry, Microserfs, offers a state-of-the-art list: “Easter egg, platform, surfing, frontier, garden, jukebox, net, dirty linen, pipeline, lassoo, highway… We will have soon fully entered an era where we have created a computer metaphor for EVERY thing that exists in the real world.” Elsewhere, another character says archly: “I’m trying to debug myself.”

This is fine, organic play, which enriches the language. What worries some people, however, is when high technology exhibits an uncouth scientific hubris, and begins to explain things — specifically human things — in terms of itself. That is what Martin Heidegger meant when he spat that technology was “the completion of metaphysics”. “Meta-” here means “beyond”: Heidegger argued that metaphysics insists on looking beyond or behind Being for a final explanation or justification of it. Thus, technology wants to reduce life to technology — it is an instrument for the calculation and domination of living things. Nowadays, too, ripples of unease appear when researchers in cybernetics and artificial intelligence describe people exclusively in terms of “hardware”, “software” and (cutely) “wetware”.

A new collection of essays, Cultural Babbage¹, describes the prehistory of this phenomenon, examining the ways in which new technologies of the 19th and early 20th centuries shaped popular language and imagery. As Doron Swade notes, Charles Babbage’s Difference Engine (the first automatic computing device, a portion of which was demonstrated in 1832) marked the dawn of modern technofear. It was not the first automatic machine of the Industrial Revolution, “but it is a landmark in respect of the human activity it replaced. In the case of textile machines or trains, the human activity they replaced was physical. The 1832 engine represents an ingression of machinery into psychology.” For the first time, a function of the human mind was peripheralized.

The continuing “ingression of machinery into psychology” by scientists characteristically provokes two responses. The optimistic one predicts that humans will become cyborgs (short for “cybernetic organisms”: enhanced amalgams of flesh and machine). The pessimistic one predicts that humans will become redundant. Last week, Kevin Warwick, Professor of Cybernetics at Reading University, gave a lecture at the Royal Society entitled “Prospects for Machine Consciousness”, which fell squarely into the second category. Since machines have so far proved able to do certain things better than humans, he argued brightly, it follows that in the future they will be able to do everything better than humans, including thinking. Among the various video clips he screened, including robots building cars, robots playing snooker and robots (slowly) shearing sheep, was one of a machine named “Wabot”, which “played” the organ. Wabot has arms and fingers, feet to manipulate the pedals, and a video camera for eyes. It can read printed sheet music and play the notes entirely accurately.

“That points to the act of playing music as being not much more than working on a car-production line,” Warwick said, triumphantly. Well, nonsense. A musician would have told him that traditional notation is a notoriously information-poor system, nothing like a complete blueprint for a musical event. The purely physical business of getting the notes under your fingers, which Wabot can do perfectly, is but the tiny precursor to the business of constructing a meaningful performance, which Wabot showed no signs of ever doing. Warwick’s lecture, indeed, offered no explanation of what the metaphor of “machine consciousness” might ever mean. It was salutary evidence that those at the vanguard of technology, those able to sell their metaphors to the rest of us, are no longer the people playing with robots in laboratories, but the dreamers.

After all, it was partly because William Gibson made his electronically-prostheticized “cyberpunk” hackers so cool, so anarchically heroic, that he saw cyberspace made a reality by the very (flattered) programmers who read Neuromancer. It is these men and women, blissfully optimistic about technological progress, who are celebrated (with some reservations) in two other new collections, FutureNatural and Technoscience and Cyberculture². In a detailed essay in the former, Tiziana Terranova sees the Internet primarily as a forum for discussion of technology as an instrument of radical change. Here be Extropians, a hardcore offshoot of the cyberpunks, who desire to be “posthuman”. According to a message posted in their newsgroup:

Posthumans have overcome the biological, neurological and psychological constraints evolved into humans. Posthumans will be… partly or wholly postbiological — our personalities having been transferred ‘into’ more durable, modifiable, and faster, and more powerful bodies and thinking hardware.

This is not just digitized Nietzsche, a vision of evolution into the Superman. The discontinuity implied by “transferred” signals that this is a project of catastrophic forgetting. And it ought to be taken seriously, since the most powerful metaphor of the computer age is that of memory. Computers promise to “remember” things, storing data faithfully on their hard drives, but the corollary is that users are tempted to forget. Some effects of this, naturally, will be more dangerous than others. It does not really matter if people forget how to do mental arithmetic, as long as everyone gets a pocket calculator. (Although asking a machine to multiply 12 by nine is a far slower, more laborious process than doing it yourself.) It does not really matter if people forget how to write by hand, as long as everyone has a computer. (Although most of the world’s population, as yet, has never even made a telephone call.) What may matter is if people forget how to remember at all. Technology brandishes the carrot of permanent, immortal memory, but also the stick of instant, irrevocable deletion.

The recent Hollywood sci-fi movie Johnny Mnemonic (writer: William Gibson) provides one entertaining analysis of what it would be like if computer memory and human memory were mutually translatable. Johnny (Keanu Reeves) is a 21st-century courier who smuggles sensitive information inside his head. Gigabytes of information are encrypted and uploaded into him via a chip implanted in his brain, and decrypted and downloaded at the other end. Johnny really is a mnemonic: he helps other people to remember. But there is a price. Reeves’s typical blankness is perfect for Johnny’s poignant secret: in order to free up enough capacity to get a job as a courier, he tells a friend, “I had to dump a chunk of long-term memory… my childhood.”

Less silly is the James Cameron-scripted thriller Strange Days, set a mere four years into the future. Virtual reality has become the real thing, only amenable to instant recall. Using a piece of headgear known as a Squid (“superconducting quantum interference device”; coinage: William Gibson), you can record slabs of full sensory experience, and later, you or other people can play them back. “You’re there — living it,” says the hero, Lenny, a trader in such clips, pitching to a client. But Lenny is a victim of his own product: he spends his spare time replaying recordings of sex with his ex-girlfriend; yet the reason the relationship failed in the first place was that he insisted on recording everything. It is the logical continuation of the modern camcorder fetish: blinded by the promise of total recall, the user cannot bear to let anything important pass by unrecorded. He thus spends the most significant episodes of his life willingly marginalized, looking through a real or metaphoric lens. The primary experience is reduced to an experience of recording; the “memory” to a memory of recording.

Thus technology, in its gleaming efficiency, demands that we distrust our own capacities for living without it. Life becomes no more than a constant hedge against forgetting, because users have forgotten their own capacity to remember. But constant, pathological repetition is no worse than another option afforded by the new technologies. Brian Eno, one-time conceptual king of Seventies glam-rock, was last week hawking the absurd idea of “generative music”: in the future, music will be created by computer and will never repeat itself. “Our kids will ask us incredulously, ‘You mean, you used to listen to the same thing over and over again?’ ” Eno beamed. This is an original vision, certainly: forgetting is absolute, because nothing is ever repeated. It amounts to an enslavement to novelty. Again, Heidegger pointed out a comparatively long time ago that technology’s essence is “enframing”: our freedom to act can be circumscribed by the technologies on which we have come to depend.

But of course, human memory is just not like its computer metaphor. In Strange Days, Lenny’s only friend spells it out for him: “Memories are meant to fade, Lenny. They’re made that way for a reason.” Human memory is fallible, capricious: think of the finals student who can remember every word of “Smells Like Teen Spirit” but not a line of Coriolanus. But, modern psychologists point out, human memory is also constructive and interpretative, able to make connections and analogies. Even if thinking machines were developed that could do this, there is no reason why they should do it in ways that would be useful to us. The allure of technology’s fantastic storage capacity is tempered by its alienation from the concept of value.

It is as well to remember that information is not knowledge. In the “information age”, terabytes of data are ever more freely accessible. But in computing terms, “information” has nothing to do with value or sense: under the strict communication-theory definition, a paragraph of meaningless garbage can have exactly the same information content as this paragraph. How should we identify nuggets of truth in a disorganised global soup of data? Jean-François Lyotard explained the implications of the much-touted “information economy” as long ago as 1979, in The Postmodern Condition: “Knowledge is and will be produced in order to be sold, it is and will be consumed in order to be valorized in a new production: in both cases, the goal is exchange. Knowledge ceases to be an end in itself, it loses its ‘use-value’… The question… is no longer ‘Is it true?’ but ‘What use is it?’.”
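
The point can be made concrete with a minimal sketch in Python (an illustration of the communication-theory measure, not anything drawn from the texts under review): it estimates Shannon information per character from raw symbol frequencies, and a line of keyboard gibberish scores at least as well as a line of English prose.

    import math
    from collections import Counter

    def bits_per_char(text):
        """Estimate Shannon information per character from symbol frequencies."""
        counts = Counter(text)
        total = len(text)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # A line of English prose versus keyboard noise of similar length:
    prose = "it is as well to remember that information is not knowledge"
    noise = "xq7v bd0k zj3m wf8c ht5n ry1g lp6s aw9e uo2t dk4b mx7h qz0j"

    print(round(bits_per_char(prose), 2))  # roughly 4 bits per character
    print(round(bits_per_char(noise), 2))  # roughly 4 to 5 bits: no "poorer" in information

The measure registers only statistical surprise; it is blind to whether the symbols mean anything, which is precisely the gap between information and knowledge that the strict definition leaves open.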

But retrieval is not the same as understanding, and someone will still have to ask ‘Is it true?’. Given the new millennium’s magical, slate-cleaning invitation to rewrite the human — to indulge in neuromancy — artists are extrapolating possible futures with unusual vigour. Two ideas of the future — the heavenly and the hellish — are vying for currency, shadowed, for example, by the two contemporary metaphors for the Internet: it is a Net, a zone of entrapment, monopoly and control by corporate powers; it is a Web, a zone of childlike, anarchic play. When the smoke clears from the battlefield, we will choose the truer metaphors to describe technology to ourselves, and — maybe — vice versa.

  1. Cultural Babbage: Technology, Time and Invention, eds Francis Spufford and Jenny Uglow (Faber).
  2. FutureNatural: Nature, Science, Culture, ed John Bird (Routledge); Technoscience and Cyberculture, eds Stanley Aronowitz, Barbara Martinsons, Michael Menser and Jennifer Rich (Routledge).