Winter of Our Consciousness

Some thoughts by Djibril

About five years ago, my good friend Joseph Kaye and I were strolling one lunchtime across one of the campuses of a Californian university, deep in conversation. At the time, we had both been reading a lot of William Gibson, and on this occasion we were particularly absorbed by an idea he had first explored in the short story ‘The Winter Market’ (1986): a young disabled girl, who spends most of her life creating cyber-art and so is online more often than not, finally decides to have her personality uploaded onto a super-powerful server so that she can continue to live in cyberspace when her body dies. (Gibson re-uses this idea in Mona Lisa Overdrive (1988), when he has the Count do the same thing after he dies.) Other Gibson characters have continued to function after death as recorded simulations, such as the Dixie Flatline in Neuromancer (1984) and the Finn in Mona Lisa Overdrive, but it is only the Count who literally lives in cyberspace, transferring his consciousness onto a computer at the moment of death. (This theme has also been used in, for example, Roger Zelazny's Lord of Light (1967), Greg Egan's Permutation City (1994), and Richard Morgan's Altered Carbon (2002), but it was the Gibson stories that I read before coming across any of the others.)

(Digression: This discussion assumes that, as recent experiments suggest may be increasingly plausible, the human mind can be read and recorded in terms of the electro-chemical reactions of the physical brain. I mean neither to deny nor to ignore the fact that a human being's personality, consciousness, and experience are the product of both genetic and environmental factors; that the flesh-and-blood make-up of the brain is shaped by socio-historical events as well as by the personal experiences and background of the individual, including the memetic influence of everyone they have encountered, in their formative years and beyond. Nor does the human mind have conscious access to all of its memories at any one time: we are both conscious and unconscious; we are ego, superego, and id; we dream, we panic, we can be misled, and we fool ourselves. All of these essential features of the mind, its memories, emotions, instincts, and impulses, cannot be neglected in any mapping of consciousness, but they are all expressed in the neurophysiological make-up of the living brain.)

(I am also assuming that, for all the incredible advances in neuroscience and brain-mapping, including the astonishingly ambitious Blue Brain project run at EPFL on IBM's Blue Gene supercomputer, early twenty-first-century science has taken but a tiny shuffling first step on the marathon course towards achieving this full personality scan. But there is no reason we should consider it impossible.)

Anyway, something about this scenario was troubling me in particular. The conversation ran something like this:

Dj: But it wouldn't really be me.

JK: It would be, in every real sense, if it was a perfect representation of your consciousness and personality—it would think and act exactly like you.

Dj: Yeah, it would be exactly like me, but it wouldn't be me. I would be dead, and as far as I was concerned there would be nothing—I wouldn't experience what the simulation was seeing or doing.

JK: Yes, you would, because it would be you. It would have all your memories and your formative experiences, and your consciousness.

Dj: But that doesn't make it me.

JK: Wrong! Absolutely it does.

Dj: No. I mean, you could create a perfect recording of my personality now, and switch it on while I'm still alive, and it would act exactly like me, as we've been saying, but it wouldn't be me. My awareness wouldn't suddenly switch over into the computer then, nor would I be conscious of both sets of experiences simultaneously.

JK: That wouldn't be you because you'd both have different experiences from that point on, so would become different people. That's not the scenario Gibson imagined, anyway. The simulation isn't switched on until the very moment you die, so that doesn't occur.

Dj: But that's beside the point. If there's no transfer of awareness from me to the computer while I'm alive, there's equally none if it's switched on when I'm dead: from my point of view I'm dead, period. The computer is a copy of me, and there can be as many copies as you like, but a copy that runs after I'm dead is still a copy.

JK: No, the two are not the same thing at all. If your consciousness is uploaded at the moment of your death, and the simulation starts running at that very instant, then the personality in the computer is absolutely, in every meaningful sense, you.

Dj: In every meaningful sense to it, sure, but not to me.

JK: It is you!

The conversation continued in this vein throughout lunch; ultimately we were talking at cross purposes, and we never reached a resolution. I later learned that we were rehearsing a debate being conducted by psychologists and philosophers of consciousness today.

To summarise the argument, no doubt inadequately, as I have learned it from a review of Daniel Dennett's work: some philosophers allow the possibility of an unconscious 'zombie'. A zombie would be like me in every way, having been programmed so perfectly that it behaves as I do, or as I would in any given circumstances, but it would lack the qualia that allow me to experience my own consciousness. ('Qualia' is a Latin term, the plural of 'quale', meaning something like the essential qualities of which an experience is constituted, so it doesn't mean much more here than je ne sais quoi, I suppose.) The zombie is therefore not much more than an analogue of the homunculus in Searle's Chinese Room, mechanically writing Chinese it does not understand, or a Turing-test AI fooling a human correspondent. My zombie would be able to talk about consciousness, or even write this article, so the theory goes, but without ever feeling or experiencing as I do.

I see now that Joseph was right to say that a perfect simulation containing my uploaded memories could not, as Dennett has argued, be a 'zombie'. If it truly were sophisticated enough to behave exactly as I do from now on, then it must have the experience of consciousness as well as the appearance of it. It could not emulate me without partaking of qualia.

I still feel, however, that although it might have its own qualia, it would not have mine, but at best a perfect copy of mine. My experience of dying, and the ensuing non-experience of oblivion that is death, would not be altered by leaving behind a homunculus in my image. It would not mean that I lived on beyond death with my awareness transferred to a computer, only that a copy of me lived on beyond my own lifespan.
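The distinction I kept reaching for has a homely analogue in programming: a deep copy of an object can be equal to the original in every observable respect while remaining a distinct thing, and nothing that happens to either one flows into the other. Purely as an illustrative sketch (the Mind class and its contents are invented for the purpose, in Python):

    import copy

    class Mind:
        # A toy stand-in for a recorded personality.
        def __init__(self, memories):
            self.memories = list(memories)

        def __eq__(self, other):
            return isinstance(other, Mind) and self.memories == other.memories

    me = Mind(["lunch with Joseph", "reading Gibson"])
    upload = copy.deepcopy(me)  # a perfect, independent duplicate

    print(upload == me)  # True: indistinguishable in content
    print(upload is me)  # False: two distinct objects nonetheless

    upload.memories.append("life after upload")
    print("life after upload" in me.memories)  # False: nothing flows back

    del me  # unbinding the original transfers nothing into the copy

The copy passes every behavioural test of 'being me', yet nothing that happens to either object ever reaches the other; the two identities never merge.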

In what sense, then, can the mind-upload of Gibson (and of Zelazny, Egan, and Morgan) be said to confer immortality? If I don't live on myself, why should I want a copy of me to do so? There may be several reasons, all deeply rooted in human desires, needs, and fears: (1) the zombie might be a comfort to my loved ones; (2) it might represent my legacy, both in the Platonic sense of intellectual work and in the more emotional sense of progeny; and (3) it might represent the ultimate trusted heir.

(1) The first of these is perhaps the most banal: that the zombie might be a comfort to my loved ones after my death is not, of course, irrelevant, but it is possibly cold comfort. I don't know how comforted I would be by a homunculus of a lost loved one, no matter how convincing. I suppose my preserved consciousness could tell the people left behind all those things that I had no time to tell them when alive (especially if I died suddenly), but then again maybe I shouldn't allow my personality to live on and bore them interminably. Wouldn't the presence of an uploaded zombie make it difficult for them to let go, to grieve, to find closure?

That might not necessarily be a bad thing, I suppose, since if I am not altogether gone, my loved ones need not grieve so intensely. Are closure and 'moving on' not just defence mechanisms, after all? If they have time to get used to my being gone, or at least available to them in a different form, it might let them come to terms more gently. I don't know, but from this perspective I'm not altogether comfortable with the idea.

(2) The second sense in which personality upload might be analogous to immortality is that of leaving behind a meaningful legacy. Even if I live a long and full life I shall almost certainly not have time to write everything I want to write; there will be stories still living unwritten in my head at the end of my life; there will be reviews, opinion articles, and theories; academic papers, novels, and philosophical discourses. And that's only writing: what about all the paintings I've never painted, the songs I've never composed, the games I've never designed, the tricks I've never played?

We all feel the need to leave a legacy behind us; this need has been identified as stemming from fear of death. Plato argued in the Symposium that a man could achieve a level of immortality by leaving a child behind him: not a literal child of the flesh (although that is also worthwhile, no doubt), but a child of the mind, such as a book or a philosophical teaching. The great philosopher himself has lived longer than most by virtue of his intellectual children. I'm sure that if he could have uploaded his brain into a computer, rather than having to rely on the imperfect method of imposing his memes upon his pupils and successors, he would have achieved even more.

That, then, is a tempting prospect. Even if I do not feel confident that my awareness will transfer into a computer at the moment of my death, at least a full, working copy of my mind will mean that the important products of my experience are not lost, and can still be recorded. My intelligence can go on creating artistic representations of the hard-learned lessons of my life, sharing my feelings, and continuing my modest mission to increase humanity's understanding of itself (and of each other) and to reduce the suffering and conflict in the world.

On the other hand, I should like to think that if I live a long and full life, then when I come to the end of it I'll be satisfied with my allotted time. I hope I shall feel that I have achieved all I could fairly achieve, that I have laughed enough, loved enough, both taken and offered enough happiness, and that I will be able to die without regrets and without fear. How pathetically afraid of death would I have to be to try to stay alive even in the most artificial and incomplete of ways: inhuman, untouchable, alone? (But if my life is not perfect, maybe I won't be so content and satisfied. Call no man happy until he has lived his full life and died.)

There is a sense in which it could, I suppose, be argued that I cease to exist every night, when my conscious memory shuts down and my brain shuffles things around between short- and long-term memory. The "I" who wakes in the morning has more or less all of my memories and is neurophysiologically identical to the "I" who went to sleep; he behaves precisely as I would and reacts to the same stimuli in the same way, and so I trust him. I do not go to sleep every night in deadly fear of ceasing to exist, but with calm relief and relaxation. If I knew a similar copy of myself would reappear after my death, might I not lie down to die with the same happy resignation?

(3) Finally, and certainly not least significantly, is it not a form of immortality to leave an heir to carry on your unfinished work after you die? (Of course, in a sense a life's work has to be let go: once it is released into the public domain it no longer belongs to you but to humanity. You can continue to work with it, but so can others, and if humanity is worthy of the gift, then you have to trust that they are also capable of taking the work onward without you.) But it is hard to let go completely: you would normally come to trust someone enough to take on this role only through long association, after working together and talking, knowing that this person shares many of your dreams and beliefs, that she or he has a firm grasp of what you consider to be morally right, and has the courage to abide by those convictions. In short, you come to trust that this person is sufficiently like you that they may do a good enough job of continuing what you were unable to finish.

What could be better, then, than having as an heir a perfectly preserved homunculus of yourself? Even if you will not be there to supervise the zombie's work, you know that it will do pretty much (nay, exactly) what you would do. True, the uploaded mind will continue to experience, learn, and evolve on its own, and so eventually it will no longer be like you as you are now, but it will react to and be moulded by new experiences just as you yourself would have reacted and evolved.

And if you cannot trust yourself, whom can you trust?
