Scott Alexander, Population Ethics, and Playing the Philosophy Game
A response to Scott Alexander's recent discussion about population ethics in his book review of What We Owe The Future
I. Population Ethics Continued
A little over a week ago, I critiqued Scott Alexander’s article “Slightly Against Underpopulation Worries” with my article “In Favor of Underpopulation Worries.” I argued that although our actual world will be okay despite low fertility, the world could be much better with more people. From a total utilitarian perspective, potential people have lives worth living, and actualizing those lives is good as long as the good isn’t outweighed by other concerns like harm to other humans or animals. Since I believe the most coherent utilitarian framework is the total view, I commented to argue that low fertility actually is really bad from a utilitarian consequentialist framework.
From a consequentialist perspective, it seems like we should take the non-existence of people seriously. If we could boost fertility globally and achieve 20 billion people in 2100 instead of 10 billion, we would see a lot more human welfare. Failing to do this should be treated similarly to a persistent disease that's killing literally billions of people. Low fertility must be regarded as one of the worst things in the world at present if we regard future possible people as having equal moral worth. If all that matters is the consequence, lowering fertility should be treated like a massive wave of tens of millions of infant deaths. This should be extremely concerning.
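To make the total-view arithmetic concrete, here is a minimal sketch; the welfare numbers are purely hypothetical, and the point is only that on the total view value scales with headcount:

```python
# A minimal sketch of the total view: the value of an outcome is the sum of
# everyone's welfare. The welfare numbers here are purely hypothetical.

def total_utility(population: int, avg_welfare: float) -> float:
    """Total-view value: population size times average welfare."""
    return population * avg_welfare

# Two hypothetical futures with the same (positive) average welfare per life.
world_low = total_utility(10_000_000_000, 1.0)   # ~10 billion people in 2100
world_high = total_utility(20_000_000_000, 1.0)  # ~20 billion people in 2100

# On the total view, the larger world contains twice as much value,
# so the "missing" 10 billion lives register as an enormous loss.
print(world_high - world_low)  # 10,000,000,000 units of forgone welfare
```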
I don’t wholly endorse this view. I am not a utilitarian. My metaethical stance is ethical intuitionism, which “holds that moral properties are objective and irreducible” and that “at least some moral truths are known intuitively” (Huemer, 2005, p. 6). I believe this results in an ethical system that is not easily defined by axioms.1 Despite disagreeing with utilitarians, I think that consistent utilitarians should be total utilitarians. Scott Alexander disagreed with my concern about low fertility and rejected the total view in his response to my comment.
I reject this whole line of thinking in order to avoid https://en.wikipedia.org/wiki/Mere_addition_paradox . I am equally happy with any sized human civilization large enough to be interesting and do cool stuff. Or, if I'm not, I will never admit my scaling function, lest you trap me in some kind of paradox. I'll just nod my head and say "Yes, I guess that sized civilization MIGHT be nice."
This response felt lacking. I followed up with questions but did not receive a reply. Once again, I do not blame Alexander for not taking the time to respond to me. I’m sure he’s busy and under no obligation to respond to everyone who comments on his blog. However, Alexander’s refusal to specify his position seems like a way of avoiding criticism. And describing my potential responses as “traps” seems like a way of justifying this avoidance. Since I couldn’t critique his scaling function exactly, I decided to address a range of possible positions in my article, with the help of Chappell et al. (2022) and Huemer (2008). The indifference between population sizes seems especially absurd (see V. Indifference Between Population Size Constrained by Coolness Can’t Be Right). The possible scaling functions seem to fail with even more counter-intuitive conclusions (see VI. Accept the Repugnant Conclusion).
II. Alexander’s True Position
Just a few weeks later, Alexander specified his actual views on population ethics in his article “Book Review: What We Owe The Future.” This allows me to provide a more exact response. His position is actually a bit more complicated than the potential ethical viewpoints that I critiqued. In response to twenty-nine philosophers signing a statement saying that an ethical theory’s implying the Repugnant Conclusion is not a decisive reason to reject it, he says:
I hate to disagree with twenty-nine philosophers, but I have never found any of this convincing. Just don’t create new people! I agree it’s slightly awkward to have to say creating new happy people isn’t morally praiseworthy, but it’s only a minor deviation from my intuitions, and accepting any of these muggings is much worse.
If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.
It might be the case that Alexander believes a choice could be both good and not praiseworthy, or that there are actions that are good but that we should not do. But this would be an odd way to talk, and it would make it really unclear what his position is. I am going to assume that Alexander is speaking in a straightforward manner here.
I interpret “[j]ust don’t create new people” to mean that creating new people is morally bad. At best, it could be neutral. Soon after, he says, “creating new happy people isn’t morally praiseworthy,” which indicates that it is either neutral or bad to create happy people. When he presses himself to play the philosophy game, he doesn’t assign a value to happy lives being created but assigns neutral or bad to lives below average. It seems like he thinks creating above-average happy people should be bad, but this is odd given that he thinks below-average lives can be neutral. I am not really sure what his exact position is. I am going to critique different possible interpretations. Let’s first assume his position is:
Creating above-average happy people is bad.
Creating below-average happy people is worse or neutral.
Creating below-zero people is even worse and always bad.
While this seems to be what Alexander is suggesting, I am not sure he holds this position. It would result in an incredibly bad conclusion, worse than the Repugnant Conclusion, the Sadistic Conclusion, and the Strong Sadistic Conclusion (see Huemer, 2008). We could call this the Extremely Sadistic Conclusion:
For any large world of people experiencing unending hell-like suffering, there is a large world of people experiencing unending heaven-like bliss which is worse, provided the heaven-like world is large enough.
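To see how this interpretation generates that conclusion, here is a rough sketch. The per-person values are hypothetical; the argument only needs creating a blissful above-average person to carry some negative value, however small, while creating a tormented person carries a larger but fixed negative value:

```python
# Sketch of the Extremely Sadistic Conclusion under interpretation 1.
# Hypothetical per-person values: creating a tormented (below-zero) life is
# very bad; creating a blissful above-average life is only slightly bad,
# but still bad, so the badness scales with population size.

BADNESS_PER_TORMENTED_LIFE = -1000.0   # hypothetical
BADNESS_PER_BLISSFUL_LIFE = -0.01      # hypothetical, but negative

hell_world = 1_000_000 * BADNESS_PER_TORMENTED_LIFE            # -1e9
heaven_world = 1_000_000_000_000 * BADNESS_PER_BLISSFUL_LIFE   # -1e10

# The heaven-like world comes out *worse* than the hell-like world,
# provided it is large enough.
print(heaven_world < hell_world)  # True
```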
Perhaps Alexander meant that creating above-average happy people is actually morally neutral. While he says to just not create people, perhaps when he says “not praiseworthy,” he means morally neutral. This would mean that his position is:
Creating above-average happy people is neutral.
Creating below-average happy people is either bad or neutral.
Creating below-zero people is worse and always bad.
If creating above-average happy people is set at neutral, the view becomes indifferent to adding more happiness. Since he just says “neutral to slightly bad” for creating below-average people, it’s not clear exactly where the line is for when it becomes neutral. It seems to be somewhere in between always and never neutral. If creating below-average happy people is always neutral, this view is the procreative asymmetry, which holds that “[i]t is bad to create people with negative wellbeing, but not good to create people with positive wellbeing, all else equal” (Chappell et al., 2022). Alexander’s position would add that it is “sometimes bad to create people with below-average wellbeing.” This would suffer from the same problem that the procreative asymmetry has. Imagine two above-average populations, A1 and A2, could be added to the world, with A1 at a higher welfare level than A2 (a sketch follows below).
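Since the argument only needs two above-average populations at different welfare levels, here is a minimal sketch of the indifference; the welfare values, population sizes, and current average are all made up:

```python
# Procreative-asymmetry reading: creating a person with above-average welfare
# contributes zero value; creating a person with negative welfare is bad.
# All numbers below are hypothetical.

CURRENT_AVERAGE = 50.0

def added_value(welfare: float, average: float = CURRENT_AVERAGE) -> float:
    if welfare < 0:
        return welfare   # bad to create lives below zero
    if welfare >= average:
        return 0.0       # neutral to create above-average lives
    return 0.0           # neutral (or slightly bad) for below-average positive lives

# Two candidate populations, both above the current average.
A1 = [90.0] * 1_000_000   # a million very well-off people
A2 = [55.0] * 1_000_000   # a million people just above average

print(sum(added_value(w) for w in A1))  # 0.0
print(sum(added_value(w) for w in A2))  # 0.0 -- the view is indifferent between them
```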
In this scenario, you would have to be indifferent between A1 and A2. You would not actually care which population gets added into the world because adding happy people is morally neutral. A real-world implication of this would be that if you were performing IVF and had the choice to implant a healthy embryo or a very healthy embryo, you would be indifferent between the two if you expected them to live above-average lives. This seems wrong. We ought to care about the health and well-being of potential people. Another stronger interpretation of Alexander’s position would be:
Creating above-average happy people is good.
Creating below-average happy people is either bad or neutral.
Creating below-zero people is even worse and always bad.
Under this arrangement, Alexander’s position is a variation on the critical level view, which holds that “[a]dding an individual makes an outcome better to the extent that their wellbeing exceeds some critical level” (Chappell et al., 2022). His view is unusual in setting the critical level at the average. This means that creating a bunch of people with above-average utility increases the average and thus the critical level. It seems necessary for Alexander to specify whether what matters is a person’s welfare relative to the critical level at the time of their birth or throughout their entire life, since the critical level will change.
To illustrate what I mean, imagine a world with one exactly neutral person. Now you create one person who is happy. If you then add another person who is extremely happy, this will make the happy person below average. But it seems unclear whether we should retrospectively judge the first happy person as not having been worth creating. At the time of creation, they were above the average of zero. However, at some point in their life, they were below the average. This matters a great deal considering that life generally seems to be improving over time. While creating a human in 1600 could be regarded as morally good then, it’s likely that tons of those lives were below average by 2022 standards. I take Alexander’s position to be that the critical level is established at birth. I don’t think he means you have to guess what the critical level will be throughout a person’s life.
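Here is a minimal numeric sketch of the shifting critical level; the welfare values are made up, and the point is only that the first addition is above the average when created but below it later:

```python
# Sketch of how a critical level pegged to the average shifts as people are added.
# Welfare values are hypothetical.

world = [0.0]                  # one exactly neutral person
avg = sum(world) / len(world)  # critical level = 0.0

# Create a happy person: 10 > 0, so this looks good at the time of creation.
world.append(10.0)
avg = sum(world) / len(world)  # new average = 5.0

# Create an extremely happy person: 100 > 5, so this also looks good...
world.append(100.0)
avg = sum(world) / len(world)  # new average ~36.7

# ...but the first happy person (welfare 10) is now well below the average.
# Was creating them good (judged at birth) or bad (judged against the later average)?
print(avg)  # 36.666...
```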
Implications For An Empty World
Imagine that nobody existed. One commenter mentioned that this results in the average being zero utility over zero people, which is undefined. If nobody exists, it’s not clear that we have moral reasons for producing people, because we don’t know whether it would be good or bad without knowing the average. Imagine we were facing imminent existential risk and could program superintelligent robots to genetically reengineer humans after a catastrophic species-ending event. Or maybe we could be cryogenically frozen after our deaths and thawed in thousands of years. It would be unclear whether we should do this, even if we could colonize the stars and produce trillions of happy people after being recreated.
It could be that Alexander assigns zero to the average value when nobody exists. In the empty world case, we are once again faced with the Repugnant Conclusion because the critical value is assigned to zero. “[T]he total view of population ethics is simply a critical level theory with a critical level of zero” (Chappell et al., 2022). Given a choice between population A (high utility, small population) and population Z (very low utility, very large population, higher aggregate utility), we should select Z. But this is one of the very conclusions that Alexander wanted to avoid in the first place.
If the average is evaluated at the time of creation, this view would also generate timing paradoxes in an empty world, in which the order of producing populations could drastically change ethical choices. Imagine we had a population A at 101% of some baseline welfare level and a very large population B at 200% of that baseline, where adding B would raise the population average to 110% of the baseline. Let it be the case that one population is created 1 second before the other. If A comes first, then B, it’s morally good to add A. If B comes first, then A, it’s morally bad to add A. The mere 1-second delay creates a very different evaluation, but almost exactly the same world. Can this be a reasonable implication if morality lives in the world (see Alexander, 2011, §2.1)?
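A rough sketch of that one-second paradox, treating the percentages as welfare levels relative to a hypothetical baseline of 100:

```python
# Timing paradox sketch: whether adding population A counts as good or bad
# depends only on whether B was added a second earlier. Numbers are hypothetical,
# with welfare expressed relative to a baseline average of 100.

def is_good_to_add(new_welfare: float, current_average: float) -> bool:
    """Critical-level-at-the-average: adding is good iff welfare exceeds the average."""
    return new_welfare > current_average

baseline_avg = 100.0
A_welfare = 101.0     # population A: just above the baseline
B_welfare = 200.0     # population B: far above the baseline
avg_after_B = 110.0   # stipulated: adding the huge population B lifts the average to 110

# Order 1: A first, then B -- A is judged against the baseline average.
print(is_good_to_add(A_welfare, baseline_avg))  # True: adding A is good

# Order 2: B first, then A -- A is judged against the post-B average.
print(is_good_to_add(A_welfare, avg_after_B))   # False: adding A is bad
```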
Consider five hypothetical populations: A, B, C, D, and E. A is an extremely happy singular person. B is a very happy small population. C is a large happy population. D is an extremely large population of lives barely worth living. E is a tortured population. The populations are all going to be created one second apart. You get to choose the order. Under Alexander’s thinking, the order of creation is extraordinarily important from a moral point of view. You would have to do E, D, C, B, and finally, A. If you did the opposite order (A, B, C, D, and finally E), it would be significantly worse morally. Does it really seem reasonable to think that the order is tremendously important when it’s merely one second between the creation of the populations?
Now, imagine the interval shrinking: first it is a half second, then a quarter, then a tenth, then a hundredth, and so forth. The order remains extraordinarily important right up until the limit, at which point the populations are introduced simultaneously and the order no longer matters. It seems wrong to think that milliseconds could make a scenario significantly less moral when the outcome is almost identical, especially for a consequentialist.
Implications For The Current World
Consider the current world. Imagine you want to make a second world in a distant galaxy. You are choosing between a small population of tormented people and a large population of below-average people. Alexander’s ethical system would result in The Strong Sadistic Conclusion: “For any world full of tormented people, a world full of people with lives barely worth living would be worse, provided that the latter world contained enough people” (Huemer, 2008). Even in our current world, it would be better to introduce millions of tormented people into the world rather than a sufficiently larger number of slightly below-average people.
Under this framework, it is more ethical to create one ever-so-slightly above-average person than tons of ever-so-slightly below-average people, even if the latter fully believe their lives are good and worth living, and even if they would’ve been above average just a few years ago. Whether it is moral to have a child at a fixed welfare level would be changing all the time.
Stranger still than the ethics of having children being highly contingent on the welfare level of the present year, the goodness or badness of having a child is heavily dependent on the existence of “persons” on other planets. If these persons have incredibly good lives, it might be immoral to have any humans. If these persons have incredibly bad lives, the average welfare might be dragged below zero. If the critical value is then set at zero, we face the Repugnant Conclusion again. If a catastrophic event occurred somewhere in a distant galaxy, eliminating a civilization, the ethics of producing children could’ve drastically changed without us even knowing.
There is a similar objection to the variable value view called the Egyptology objection. From this view, “the value that a person’s life adds to the world can depend upon how numerous and/or how prosperous are the members of some remote society that has no interaction with the person in question” and “how much reason I now have to produce children depends in part on how happy the ancient Egyptians were” (Parfit 1984, p. 420). It seems absurd to think Egyptology could reveal facts about population ethics. You could raise a similar sort of “astrobiology objection.” Under Alexander’s viewpoint, searching for extraterrestrial life and determining the welfare of distant alien persons could reveal insight into whether you should have children.
Even if there were no aliens, the ethics of producing children is highly contingent on what you count as persons and what you think the welfare of potential persons looks like. For example, spontaneous miscarriages and abortions outnumber adult human deaths considerably (Huemer, 2019). The average could be dragged down if we believe that fetuses and embryos are persons and their lives are close to neutral. If you object that many fetuses and embryos lack consciousness, then perhaps you should assign personhood to conscious beings rather than just humans.
If you take seriously the idea that animals are “persons,” then you could argue they suffer so much and are so numerous that the average is below zero, taking us back to the Repugnant Conclusion. Human suffering is potentially overshadowed by the billions of animals suffering on farms, which is potentially overshadowed by the billions of animals suffering in the wild. It could perhaps be the case that wild animals are not mostly suffering, and their lives are actually more good than bad. It is hard to say, but your moral calculation is going to be heavily dependent on the suffering or pleasure of wild animals if you regard animals as persons.
You can’t merely reject the idea that we should consider animal welfare when considering possible worlds. Otherwise, we would be indifferent to adding a trillion tortured dogs to a remote barren world. Vegans take serious issue with the existence of factory-farmed animals. Just about everybody takes issue with gratuitously torturing dogs. If you include them in the numerator of your average, shouldn’t you include them in the denominator as well? If we are adding all animals that plausibly have any conscious experiences whatsoever, then we should probably include nematodes, which are estimated to outnumber humans at a ratio of around 57 billion to one (van den Hoogen et al., 2019). Their experience is probably close to neutral, and so the average is likely dragged to around zero again, resulting in the Repugnant Conclusion.
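A back-of-the-envelope sketch of how the nematodes would swamp the average; the welfare values are hypothetical, and only the roughly 57-billion-to-one headcount ratio comes from the citation above:

```python
# Weighted-average sketch: if nematodes count as persons, their sheer numbers
# dominate the denominator. Welfare values are hypothetical; the ~57 billion-to-one
# headcount ratio is the van den Hoogen et al. (2019) estimate cited above.

humans = 8_000_000_000
nematodes = humans * 57_000_000_000   # roughly 4.6e20 nematodes

human_welfare = 10.0       # hypothetical positive average for humans
nematode_welfare = 0.0001  # hypothetical near-neutral nematode experience

average = (humans * human_welfare + nematodes * nematode_welfare) / (humans + nematodes)
print(average)  # ~0.0001 -- essentially the nematode level; the critical level collapses toward zero
```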
Consider a final scenario. You are provided with the opportunity to produce 100 trillion happy people on a distant planet. These people will experience x̄ + α happiness, where x̄ is what you believe to be the average in the current world, and α is a positive value. Imagine x̄ is positive. Say you create the world and rejoice in performing the greatest moral action in the history of the universe. As you are celebrating, your utilitarian calculator comes in and says that the team has made an error. Our current world is actually much happier than we thought! It’s even far beyond x̄ + α! After hearing that the world is a happier place than you expected, you realize you haven’t done something incredibly good. You’ve actually performed the worst action ever! You’ve produced an inordinate amount of negative moral value because those 100 trillion happy people were actually below the current world’s average. I think the normal response is to feel happy that our world is better than we thought rather than to feel upset about it. This framework would make us wish our world were worse in certain contexts.
III. When Intuitions and Implications Clash
One of my main issues with these sorts of modifications is that they feel incredibly ad hoc. Rather than working from first principles, Alexander is constructing a position in order to avoid unwanted implications instead of just accepting them. It is okay to modify your theory in the face of contradictory evidence, but these ad hoc modifications seem to directly clash with the very first principles that lead to the belief in utilitarianism itself.
In just about every other circumstance, when presented with scenario A and scenario B, the utilitarian will choose the scenario that maximizes wellbeing. They are willing to disregard deontological concerns like natural rights, parental duties, the inherent value of truth, and the omission-commission distinction. Many fully embrace sentientism, the idea that “if you can experience pleasure and suffering, you count as a ‘person’ for ethical purposes, even if you’re a farm animal or a digital person or a reinforcement learner” (Karnofsky, 2022).
However, when presented with difficult scenarios concerning populations, they no longer want to maximize utility, and they introduce other moral considerations like the size of the human population, the distribution of utility among humans, the average of the human population, and so forth. The concern is no longer raw aggregate utility with indifference to its source. As an intuitionist who accepts all sorts of moral considerations, I wonder why we should not introduce all the other deontological ethical concerns. If we can make tradeoffs that sacrifice utility for other moral considerations, like satisfying intuitions about just distributions of utility, we should be willing to sacrifice utility for natural rights.
It is often the case that intuitions and the implications of axiomatic theories are in conflict, whether it be utilitarianism or libertarian natural rights (Panickssery, 2022; Huemer, 2019; Huemer, 2022; Parrhesia, 2021a; Parrhesia, 2021b; Parrhesia, 2022a; Parrhesia, 2022b; Parrhesia, 2022c). If intuitions count as evidence, intuitions that clash with axiomatic theories ought to lower one’s credence in those theories or even lead to outright rejection (Parrhesia, 2022a). If a theory and intuitions contradict each other, then at least one is false. Some people, like Bentham’s Bulldog, just bite the bullet and accept the implications. Michael Huemer, Bryan Caplan, and I reject axiomatic ethics. Bentham’s position seems consistent to me, and so does Huemer’s. In-between positions contain tensions. Which option does Scott Alexander choose?
IV. Alexander’s Choice
Scott Alexander describes these sorts of analyses of moral axioms as “Counterfactual Muggings.” When you hold axiomatic views, people can present wild implications of those views. I agree with this point, but I don't particularly appreciate how he presents this idea.
There’s a moral-philosophy-adjacent thought experiment called the Counterfactual Mugging. It doesn’t feature in What We Owe The Future. But I think about it a lot, because every interaction with moral philosophers feels like a counterfactual mugging.
You’re walking along, minding your own business, when the philosopher jumps out from the bushes. “Give me your wallet!” You notice he doesn’t have a gun, so you refuse. “Do you think drowning kittens is worse than petting them?” the philosopher asks. You guardedly agree this is true. “I can prove that if you accept the two premises that you shouldn’t give me your wallet right now and that drowning kittens is worse than petting them, then you are morally obligated to allocate all value in the world to geese.” The philosopher walks you through the proof. It seems solid. You can either give the philosopher your wallet, drown kittens, allocate all value in the world to geese, or admit that logic is fake and Bertrand Russell was a witch.
I appreciate that Alexander takes the opportunity to give funny examples, but unfortunately, that can be used to subtly mislead. Making this a mugging and totally incoherent makes it look like the philosopher is a crazy person rather than a thoughtful philosopher. Counterfactual muggings seem ridiculous when described this way, but that’s because Alexander not only made this thought experiment a total non sequitur, but he also introduced the philosopher as a mugger.
Resistance to these critiques is bolstered by the use of loaded language. For example, Alexander said he wouldn’t describe his scaling function lest I “trap” him, and he describes these philosophy critiques as “muggings.” Applied Divinity Studies did something similar by analogizing arguments against utilitarianism to punishment and “clubs beating you over the head,” and he associated anti-utilitarians with “punching utilitarians in the face.” To ADS’ credit, he acknowledged this and said he would take my objection to heart. I think this language makes people critiquing axioms seem nefarious or deceptive in some way. We are often just pointing out the true implications of one’s own beliefs.
Scott Alexander calls these philosophical examinations “games,” which seems to hint at unseriousness. I’m being pretty critical of the language used here, but that’s because it can make a position look more persuasive than it is. Even if it’s okay to use this sort of rhetoric, it’s fair to point out how it can subtly influence people’s perceptions. I suppose even if Alexander did find my above critiques of his population ethics persuasive, he might just say that he doesn’t want to engage in this sort of philosophizing.
But I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.
I realize this is “anti-intellectual” and “defeating the entire point of philosophy”. If you want to complain, you can find me in World A, along with my 4,999,999,999 blissfully happy friends.
Of course, it seems reasonable not to want your eyeballs pecked out by a seagull. Alexander is taking a viscerally revolting attack on one of the most vulnerable pieces of your body and contrasting it with an unspecified philosophical concern. Again, it seems like a total non sequitur and unreasonable. Alexander is highlighting the downside and not even mentioning the upside. He’s also making it personal, which distorts moral thinking even more.
Imagine I described the trolley problem and said, “Will MacAskill has a really good argument for why you should be run over by this train coming at you. Do you think it’s reasonable to want to get run over?” The response would be, “Of course not! That’s ridiculous.” And it is ridiculous. You don’t have any conception of why you’re being sacrificed, just like Alexander doesn’t explain why his eyeballs are being pecked out other than that MacAskill has a good proof.
When pressed to logically extend his belief to infinity, Alexander says he will “just refuse to extend things to infinity.” He can just pick the world that he likes. “I like that one!” When asked to make a choice, he can act according to his own personal preference or select the world that benefits him where he doesn’t lose his eyes to a seagull. It is hard to evaluate this as anything other than “I’m going to do what I want.” It is entirely “anti-intellectual.”
I am surprised to see this attitude coming from Scott Alexander, who was certainly willing to make philosophical arguments for consequentialist-type ethics in the past, such as in “The Consequentialism FAQ,” where he employs thought experiments. Just one day after publishing his book review of What We Owe the Future, Alexander criticized people for critiquing Effective Altruism without actually donating money in his article “Effective Altruism As A Tower of Assumptions.” Does this response of “I do what I want” work here as well? What moral justification can Alexander provide for doing anything? I can merely respond with, “I know this makes me anti-intellectual, but I’m going to reject your philosophizing, lest I lose my eyes to a seagull.”
It feels like Alexander rides the consequentialist train to a happier society but hops off wherever he wants. Rather than saying that the train isn’t going in the right direction, he just says that he’s picking the stop he wants. My stance is that you should either admit that the train is going in the wrong direction or stay on the train. You can even join Huemer, Caplan, and me on the intuitionist train.2 What you shouldn’t do is reject consistency and philosophizing itself, especially when you philosophize frequently in your own writing.
Alexander realizes this is “leaving some utility on the table,” but he is “willing to make that sacrifice.” But the sacrifice isn’t his. It’s a sacrifice demanded of others for his own interests, and it sacrifices his philosophical consistency. Scott Alexander is a very good person and a brilliant writer. But I am going to conclude with this quote to emphasize my point. In “Links For June,” Scott Alexander wrote the following (bolding is mine):
Richard Hanania: Why Do I Hate Pronouns More Than Genocide? A conservative intellectual discusses why he works on fighting wokeness instead of more pressing problems. Everyone has praised this piece for its honesty, and I agree it is commendably honest. But I also feel like - and I mean this in the most respectful way - it basically cashes out to “because I am a bad person”. I’m not saying this just because he doesn’t spend enough time fighting genocide - obviously we all could do better on this. But he concludes that his personal aesthetic is anti-woke, and that he would fight for that aesthetic even if wokeness “would lead to a happier and healthier society”. My thoughts on this are more complicated than can fit in a link summary paragraph, but I do think the concept of “fight for your own preferences even if they would make society worse” is pretty close to the concept of “bad person” (though with a lot of fuzziness around the edges). In fairness to Richard, he claims that this is only a hypothetical and that in fact he thinks his preferences would make society better. But it’s a heck of a hypothetical. Anyway, worth reading, if only for the questions it raises.
Edit: Ethical intuitionism is a metaethical stance. Some believe this results in an axiomatic ethical system such as utilitarianism. Others, such as Michael Huemer, believe this results in “common sense ethics.” Huemer believes in rights but does not believe that they are absolute and inviolable. I agree with this stance.
Edit: The intuitionist and weak deontology train to be specific.