Posted on 29 Jan 2022 by Steve Markham
Partly due to this post on r/SSC yesterday morning, I’ve been thinking about hypothetical scenarios like “what if five billion people would very much like to see me cut my arm off, such that the pleasure of their entertainment would outweigh the harm in pain and disability to me.” Often in discussions of morality we consider a hypothetical scenario to interrogate a moral framework or our own moral intuitions. One appeal of the hypothetical scenario is that it strips away a lot of details, hopefully making it clear what exactly we are trying to understand. It focuses the consideration on a specific decision or action, perhaps revealing that a particular meta-ethics is absurd.
Sometimes, this is a terrible approach.
Let’s take the hypothetical scenario above seriously. It is put forward as a sort of repugnant conclusion to argue against utilitarian consequentialism. Indeed, it is absurd to assume you have a moral obligation to cut off your own arm for the entertainment of others, regardless of how many others there are or how entertaining they would find it! But I think the hypothetical glosses over some important factors. I can imagine a world in which five billion people want me to cut off my arm. If I imagine that world in any level of detail, though, all sorts of questions come up. Do they want me in particular to cut off my arm, or would they be happy with anyone doing it? How exactly did they express this preference? How did I come to find out about it? These details might not matter ethically, but it is naive to simply assert that they don’t matter. Relevant XKCD.
It also seems naive to think that, in general, it makes sense to imagine hypothetical scenarios where the world is largely unchanged except for the one little moral act we are considering. The Fat Man version of the Trolley Problem is a great example. People’s intuition that it is wrong to push the fat man onto the tracks, even though it is right to pull a lever, exists because their intuitions developed in a world of uncertainty. If I spent a year or two living in a world where I could perfectly predict the consequences of my actions, my intuition would be that pushing the fat man was morally equivalent to throwing the switch. Since I don’t live in that world, my intuitions lean heavily against doing something that is almost certainly going to cause death, even if it might also preserve life. Furthermore, I live in a world where pushing people to their death is punished, but throwing switches (or not) is totally legal. I cannot separate the morality of the trolley problem from the moral value of its downstream consequences.
When I was growing up, my mother would sometimes avoid a question by answering, “I don’t answer hypothetical questions.” I don’t know why she said it, but I can imagine it was partly driven by how difficult it is to specify all the conditions that make a hypothetical question relevant to the real world. If she set some precedent in a hypothetical world, her kids would inevitably claim that some future specific, non-hypothetical scenario was just like the hypothetical one. By keeping the conversation grounded in reality, we couldn’t strip away details that would be relevant later.
One more thought. I think this is related to conversations about free will. Nobody thinks that if I choose hard enough I can fly like Superman. But it sure feels like if I chose hard enough I could waste less time on YouTube. Why the difference? Possibly, because it’s easy to imagine a world that is almost exactly like this one, including almost all of my personality traits and past behavior, except that I waste less time on YouTube. But it’s hard to imagine a world where I choose to fly, without lots and lots of other things also being different.