What counts as death?
When imagining a world of digital people - as in some of the utopia links from last week (as well as my digital people sketches from a while ago) - it's common to bump into some classic questions in philosophy of personal identity, like:
- Would a duplicate of you be "you"?
- If you got physically destroyed and replaced with an exact duplicate of yourself, did you die? (This question could connect directly to whether "converting yourself to a digital person" is equivalent to dying.)
My answers are "sort of" and "no." My philosophy on "what counts as death" is simple, though unconventional, and it seems to resolve most otherwise mind-bending paradoxical thought experiments about personal identity. It is the same basic idea as the one advanced by Derek Parfit in Reasons and Persons;1 Parfit also claims it is similar to Buddha's view2 (so it's got that going for it).
I haven't been able to find a simple, compact statement of this philosophy, and I think I can lay it out in about a page. So here it is, presented simply and without much in the way of caveats (this is "how things feel to me" rather than "something I'm confident in regardless of others' opinions"):
Constant replacement. In an important sense, I stop existing and am replaced by a new person each moment (second or minute or whatever).
The sense in which it feels like I "continue to exist, as one unified thread through time" is just an illusion, created by the fact that I have memories of my past. The only thing that is truly "me" is this moment; next moment, it will be someone else.
Kinship with past and future selves. My future self is a different person from me, but he has an awful lot in common with me: personality, relationships, ongoing projects, and more. Things like my relationships and projects are most of what give my current moment meaning, so it's very important to me whether my future selves are around to continue them.
So although my future self is a different person, I care about him a lot, for the same sorts of reasons I care about friends and loved ones (and their future selves).3
If I were to "die" in the common-usage (e.g., medical) sense, that would be bad for all those future selves that I care about a lot.4
(I do of course refer to past and future Holdens in the first person. When I refer to someone as "me," that means that they are a past or future self, which generally means that they have an awful lot in common with me. But in a deeper philosophical sense, my past and future selves are other people.)
And that's all. I'm constantly being replaced by other Holdens, and I care about the other Holdens, and that's all that's going on.
- I don't care how quickly the cells in my body die and get replaced (if it were once per second, that wouldn't bother me). My self is already getting replaced all the time, and replacing my cells wouldn't add anything to that.
- I don't care about "continuity of consciousness" (if I were constantly losing consciousness while all my cells got replaced, that wouldn't bother me).
- If you vaporized me and created a copy of me somewhere else, that would just be totally fine. I would think of it as teleporting. It'd be chill.
- If you made a bunch of copies of me, I would be all of them in one sense (I care about them a lot, in the same way that I normally care about future selves) and none of them in another sense (just as I am not my future selves).
- If you did something really weird like splitting my brain in half and combining each half with someone else's brain, that would create two people that I care about more than a stranger and less than "Holden an hour from now."
- I don't really find any thought experiments on this topic trippy or mind-bending. They're all just cases where I get replaced with some other people who have some things in common with me, and that's already happening all the time.
Pros and cons of this view
(This isn't going to feel very balanced, because this view "feels right" to me, but if I get good additional cons in the comments I might run them in a future post.)
The main con I see is that "constant replacement" is a pretty unusual way of thinking about things. I think many people think they would find it kind of horrifying to imagine that they wink out of existence every second and get replaced by someone else.
To those people, though, I would suggest "trying it on": try to imagine, for let's say a full week, that you're fully convinced of constant replacement, and see whether it feels as impossible to live with as it seems at first. You might initially expect to find yourself constantly terrified of your impending death, but my guess is you won't be able to keep that up, and you'll soon be feeling and acting pretty normal. You won't make any weird decisions, because "concern for future selves" provides pretty much the same functional value as "concern for oneself" in normal circumstances (I just think it works better in exotic circumstances).
If that's right, "constant replacement" could join a number of other ideas that feel so radically alien (for many) that they must be "impossible to live with," but actually are just fine to live with. (E.g., atheism; physicalism; weird things about physics. I think many proponents of these views would characterize them as having fairly normal day-to-day implications while handling some otherwise confusing questions and situations better.)
As for the pros:
- Having sat with it a while, the view now feels very intuitive to me.
- Constant replacement isn't some novel or radical idea. E.g., it's similar to the idea that now is all there ever is. (And as noted above, Derek Parfit claims that Buddha took a similar view.) A lot of people live in this headspace.
- Constant replacement seems sort of obviously true when I think about my relationship to my far-past self: the me of 10 years ago really feels like a different person that I happen to have memories of. And the me of 10 years from now is probably the same kind of deal. So my relationship to the me of 1 minute from now should be qualitatively the same kind of thing, just much less so, and that seems about right.
- Once you accept constant replacement, the rest of the view seems like common sense.
- To be clear, this isn't always how I've thought. I used to stare at some random object and think "Is this moment of me staring at this object the only me that has ever existed? (How would I know if it weren't?)" and feel sort of freaked out. But at a certain point I just started answering "Yeah" and it started feeling correct, and chill.
- It seems good that when I think about questions like "Would situation __ count as dying?", I don't have to give answers that are dependent on stuff like how fast the atoms in my body turn over - stuff I have basically never thought about and that doesn't feel deeply relevant to what I care about. Instead, when I think about whether I'd be comfortable with something like teleportation, I find myself thinking about things I actually do care about, like my life projects and relationships, and the future interactions between me and the world.
- All of the paradoxical thought experiments about teleportation, brain transplants, etc. stop feeling confusing or mind-bending. I feel like I could make sense of things even in a radically unfamiliar future.
- I probably don't have the same kind of fear of death that most people have. I figure my identity has already changed dramatically enough to count as most of the way toward death at least a few times so far, so it doesn't feel like a totally unprecedented thing that's going to happen to me.
Anyway, if you think this is crazy, have at it in the comments.
Footnotes
1. For key quotes from Reasons and Persons, see pages 223-224; 251; 279-282; 284-285; 292; 340-341. For explanations of "psychological continuity" and "psychological connectedness" (which Parfit frequently uses in discussing what matters for what counts as death), see page 206.

   "Psychological connectedness" is a fairly general idea that seems consistent with what I say here; "psychological continuity" is a more specific idea that is less important on my view (though also see pages 288-289, where Parfit appears to equivocate on how much, and how, psychological continuity matters). ↩
2. "As Appendix J shows, Buddha would have agreed. The Reductionist View [the view Parfit defends] is not merely part of one cultural tradition. It may be, as I have claimed, the true view about all people at all times." Reasons and Persons, page 273. Emphasis in original. ↩
3. There's the additional matter that he's held responsible for my actions, which makes sense if only because my actions are predictive of his actions. ↩
4. I don't personally care all that much about these future selves getting to "exist" as an end in itself. I care more about the fact that their disappearance would mean the end of the stories, projects, relationships, etc. that I'm in. But you could easily take my view of personal identity while caring a lot intrinsically about whether your future selves get to exist. ↩