In Favor of Underpopulation Worries
A response to Scott Alexander's "Slightly Against Underpopulation Worries"
I. Scott Alexander on Underpopulation
Rationalist blogger and psychiatrist Scott Alexander of Astral Codex Ten recently wrote an article entitled “Slightly Against Underpopulation Worries” in which he addressed a number of concerns about not having enough people. His overall points are summarized in the headings below; you can click through them to read his exact arguments. My very brief responses follow each one. I agree with him on many of these points, but I disagree with his resulting conclusion, for reasons I explain in the next section.
Declining Birth Rates Won’t Drive Humans Extinct, Come On
Alexander began by rejecting the idea that “underpopulation could cause human extinction,” regarding it as “100% false.” One complication here is that while many existential threats grow with a larger population, some are likely reduced by one. If the world population were merely 10 million, we might lack the resources and technology to stop threats like asteroids and deadly pathogens, or we might never escape the earth to colonize the galaxy before something ends the human race.
Immigrant-Friendly Countries Will Keep Growing & Countries With Low Immigration Will Shrink, But Mostly Slowly & Big Relative Drops Still Imply High Absolute Populations
I think Alexander’s arguments that countries accepting immigrants will continue to grow and that low-immigration countries will shrink are likely accurate. I’m not sure how confident we can be in these sorts of estimates, but I have no reasonable basis for critiquing them other than “predicting the future is hard.” Large countries will remain large for some time; the main issue is that the rate of growth has declined, and the population could otherwise have been much larger.
Concerns About “Underpopulation” Make More Sense As Being About Demographic Shift
Alexander says that demographics are going to change as a result of declining birth rates in high-immigration countries. He doesn’t think “it’s racist to care about ethnic demographic shift.” I don’t know whether it’s racist, because I think that term is very ambiguous, and it carries the connotation that being racist is never okay. I’m tempted to say it is racist, by virtue of the fact that it involves caring about race and wanting to exclude others on the basis of race. But I also think that doesn’t make someone a bad person. Most people on earth would probably care about massive ethnic shifts in their native population.
Age Pyramid Concerns Are Real, But Not Compatible With Technological Unemployment Concerns
I’m not so worried about technological unemployment. I do think that more workers making more goods makes people’s lives better, and it’s a shame to have fewer productive people growing the economy. Older people also benefit from younger people maintaining a large economy and caring for them.
Dysgenics Is Real But Pretty Slow & Innovation Concerns Are Real But Probably Overwhelmed By Other Factors
The negative correlation between cognitive ability and fertility is real. This is probably one of the most concerning points: it may increase existential risk and contribute to civilizational decline, since higher cognitive ability is associated with good outcomes at the societal level. The wildcard in predicting these trends is that we will likely be able to use genetic engineering to achieve massive gains in cognitive ability relatively soon. How exactly this plays out is an interesting question, which I addressed in my article “America in 2072: A Society Stratified by Genetic Enhancement.”
As for innovation concerns, I’m not really sure about the theory that ideas are getting harder to find. My vague impression is that attempts to measure the speed of science are rather dubious. I haven’t investigated these articles and arguments in depth, nor can I really evaluate the 10x claims.
In The Short-To-Medium Run, We’re All Dead
Alexander suspects that we will either have artificial general intelligence or genetic engineering that makes people extremely intelligent. He argues that because revolutionary technology will make things so different, “[we] make a mistake by thinking about it at all.”
II. Why Underpopulation is a Huge Issue
Let’s go a step further in terms of optimism and say that we can know what’s going to happen in the year 2100, and we actually have extreme confidence that everything is going to be okay. Imagine we know with absolute certainty that we won’t all be dead, life will be pretty good, and we’ll achieve the population growth estimate of 10.9 billion people. That’s wonderful. Should we still be concerned about underpopulation? I think yes.
I think many will find Alexander’s article persuasive on the point that future society will likely not be so bad, provided nothing catastrophic happens. However, I think the strongest argument for underpopulation worries is that human life is good and getting better all the time. If we want to increase net human happiness, a great way to do so is to increase the number of people who exist, provided we think they’ll live mostly happy lives. I made this argument in the comment section, then edited it to clarify that I was arguing from the total view:
From a consequentialist perspective, it seems like we should take the non-existence of people seriously. If we could boost fertility globally and achieve 20 billion in 2100 instead of 10 billion, we are going to see a lot more human welfare. Failing to do this should be treated similar to a persistent disease that's killing literally billions of people. Low fertility must be regarded as one of the worst things in the world at present if we regard future possible people as having equal moral worth. If all that matters is the consequence, lowering fertility should be treated like a massive wave of tens of millions of infant deaths. This should be extremely concerning.
To clarify, I don’t fully endorse this view, but I think a consistent consequentialist should. If future people matter, then the non-existence of possible people seems like a concern. However, Alexander doesn’t seem to share this concern about the massive potential of possible people. He responded to my comment:
I reject this whole line of thinking in order to avoid https://en.wikipedia.org/wiki/Mere_addition_paradox . I am equally happy with any sized human civilization large enough to be interesting and do cool stuff. Or, if I'm not, I will never admit my scaling function, lest you trap me in some kind of paradox. I'll just nod my head and say "Yes, I guess that sized civilization MIGHT be nice."
There is a lot to be said about this response, but I will need to start with an explanation of the idea of the Mere Addition Paradox, which is often referred to as the Repugnant Conclusion. The RC is “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living” (Parfit, 1984). This means that we would pick the Z population in the image below provided there are enough people.
On a separate note, I don’t think it’s fair to call criticizing theories by straightforwardly examining their seemingly paradoxical implications a trap. This reminds me of Applied Divinity Studies’ recent article “Punching Utilitarians in the Face,” where he analogizes arguments against utilitarianism to punches and “clubs beating you over the head,” and associates anti-utilitarians with “punching utilitarians in the face.” I think utilitarians use this language perhaps unintentionally, because these criticisms are annoying and disagreement seems inherently combative, especially on the internet. To be fair to ADS, when I pointed this out, he said, “Thanks for the feedback on the violent metaphors, it's possible I got carried away and I will take this comment to heart.” I have much respect for that response. This is a pretty minor point, but worth mentioning because such language can subtly color someone’s impression of an interaction.
If someone states a theory, I think it’s fair game to start coming up with counter-examples or asking for specifications. I responded with some follow-up questions but didn’t receive a response. I don’t really blame Alexander; he doesn’t have time to respond to every critique on the internet, nor does he have any obligation to. That said, not only does he decline to tell me whether he is truly indifferent between population sizes that are “large enough to be interesting and do cool stuff,” he also says that if he does have a scaling function, he won’t reveal it. This feels like an inadequate response, and it makes him hard to critique. That’s unfortunate, because the conclusion of his article is probably wrong if the RC is correct.
My next point is that trying to avoid the RC seems like weighing one particular intuition too heavily. We have all sorts of intuitions about ethical behavior, but utilitarians usually accept the intuitions used to construct utilitarianism as legitimate and reject others as distortions produced by our culture or by evolutionarily adaptive instincts. It is odd to me that saying “but this leads to the RC” is sufficient in many cases while saying something like “that leads to a rights violation” is not. For some reason, the RC alone is enough to make people want to totally rework their beliefs about ethics. Oddly enough, I think rights-violation intuitions are legitimate while opposition to the RC is a distortion. Although I don’t think my original argument requires accepting the RC, I will also defend the RC for good measure.
III. Why Fertility Concerns Don’t Entail the Repugnant Conclusion
I have to disagree that regarding boosted fertility as extremely important morally necessarily commits us to the RC. I don’t think we are at a point in time where having more children reduces average welfare. It’s not even clear to me that average welfare isn’t increased by more people specializing in economically productive tasks. Think about all the incredibly cool and interesting stuff we wouldn’t have if the population of earth were just 10 million. It’s reasonable to believe that life would actually be worse on average if we kept population growth constrained.
It could also be the case that more people being born will increase the welfare of older generations and that it is a net good even if we regard possible people as worth nothing in our moral calculation. Sure, things might be okay even if the population doesn’t grow as fast, as Alexander argues, but the world could be so much better. Rather than this being a comparison between a population with lives barely worth living, it could be a comparison between a population with good lives and a population with even more good lives. With the rapid increases in technology that make life enjoyable and convenient, it’s plausible that having more children just means life gets better faster.
There has to be some ecological capacity, but I don’t think we’re near it. Life keeps getting better for future generations, not worse. You don’t have to accept all these assumptions, but it’s not obvious that wanting more babies means you have to accept the RC. We are very far from everyone on earth having lives barely worth living, and the proportion of people with such lives is likely to decrease over the next century.
IV. Indifference Between Population Sizes Constrained by Coolness Can’t Be Right
I don’t think that Alexander put a ton of thought into his response, but I doubt that he really is “equally happy with any sized human civilization large enough to be interesting and do cool stuff.” This would have all sorts of strange implications. Imagine a population of 10 million extremely happy people doing interesting stuff compared to a population of 10 billion extremely happy people doing interesting stuff. Isn’t the 10 billion clearly better? Imagine 10 million mildly happy people doing interesting stuff compared to 10 million extremely happy people doing interesting stuff. Can he really be indifferent?
Imagine 100 billion people living in unending bliss but doing nothing interesting, because they spend all day enjoying peace and quiet rather than struggling to survive or creating interesting stuff. Now imagine a population of 10,000 scientists, artists, and musicians who are exceptionally interesting and cool. Could someone really prefer the 10,000 to the 100 billion? It seems more likely that Alexander has a scaling function that he didn’t want to reveal. Since I don’t know exactly what his function is, I will critique a number of arguments against the RC drawn from different systems of population ethics.
V. Accept the Repugnant Conclusion
In the figure comparing A and Z above, the Repugnant Conclusion is the idea that there exists a population Z that is large enough that it is better than A in terms of net welfare. Some utilitarians would accept this, namely, those who adopt the total view—the idea that “[o]ne outcome is better than another if and only if it contains greater total wellbeing” (Chappell et al., 2022). Since we stipulate that population Z has greater total wellbeing, a total utilitarian must prefer population Z to population A, despite this conclusion seeming repugnant. If given the choice between creating a world like A or a world like Z, the morally correct action for a total utilitarian would be to create Z.
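To make this concrete, the total view ranks outcomes by a simple sum (the notation here is mine, not Chappell et al.’s):

$$W_{\text{total}}(P) = \sum_{i \in P} w_i$$

where $w_i$ is the lifetime wellbeing of person $i$. If population A has 10 billion people each at wellbeing 100, its total is $10^{12}$; any population Z of people at wellbeing 1 beats it once Z contains more than $10^{12}$ people.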
The Average View
If you reject the idea that you should prefer Z, then you should not be a total utilitarian; you should be some other sort of utilitarian. The most obvious candidate is the average utilitarian, who believes “[o]ne outcome is better than another if and only if it contains greater average wellbeing” (Chappell et al., 2022). This view holds that population A is better because it has higher average utility. I think this move is mistaken for two reasons: even more appalling conclusions follow from it, and there is no good reason to adopt it while refusing to consider all the other factors beyond utility.
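The average view replaces the sum with a mean (again, my notation):

$$W_{\text{avg}}(P) = \frac{1}{|P|} \sum_{i \in P} w_i$$

On this ranking, A beats Z no matter how many people Z contains, since Z’s average wellbeing stays barely above zero while A’s is very high.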
The idea behind utilitarianism is that you have a parsimonious ethical system that values the only thing that matters, namely well-being. For a utilitarian, it doesn’t matter whether actions are appropriately categorized as “murder” or “theft.” What truly matters is the harm or happiness produced from these actions—there is no reason to have any other consideration. It’s simple mathematics that avoids the arbitrariness of commonsense concerns and intuitions. If you want to incorporate intuitions about relative comparisons between populations, you are being led away from maximizing welfare and toward satisfying intuitions. Why not go full-blown intuitionist and accept concepts like a duty to family, weak natural rights, the distinction between omission and commission, and so forth?
My other point is that the implications of average utilitarianism are even worse than the RC. It’s hard for me to see how the repugnant conclusion is any worse than the sadistic conclusion—the idea that “[i]t can sometimes be better to create lives with negative wellbeing than to create lives with positive wellbeing from the same starting point, all else equal” (Chappell et al., 2022). Introducing more suffering into the world for the sake of changing the statistical average seems like it cannot be correct. In an article on population ethics, Chappell et al. (2022) give the extreme implication that is a variant on Parfit’s Hell Three example (Parfit, 1984, p. 422):
First, consider a world inhabited by a single person enduring excruciating suffering. The average view entails that we could improve this world by creating a million new people whose lives were also filled with excruciating suffering if the suffering of the new people was ever-so-slightly less bad than the suffering of the original person.
In the original example from Parfit, the population is being tortured but is promised by their overlords that their children will be tortured slightly less. Under the average view, it would be morally good to have children in this scheme. This seems extraordinarily counter-intuitive. It seems so wrong that any belief system resulting in it would have to be incredibly suspect.
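The arithmetic behind this is worth spelling out; the numbers below are my own illustration, not Parfit’s. Suppose the original person has wellbeing $-100$ and each of the million new people has wellbeing $-99$. Then the average view prefers the second world:

$$\frac{-100 + 10^{6} \times (-99)}{10^{6} + 1} \approx -99 > -100$$

Total suffering has increased roughly a million-fold, yet the world counts as “better” because the statistical average improved.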
The Critical Level View
You could try to avoid the average view and the RC by suggesting that there is a critical level that must be passed before lives count positively toward wellbeing. This avoids the repugnant conclusion if the critical level is set above “barely worth living.” However, it results in The Strong Sadistic Conclusion:
For any world of tormented people, a world full of people with lives barely worth living would be worse, provided that the latter world contained enough people (Huemer, 2008).
This seems even less plausible than the repugnant conclusion. Under this view, a world filled with people suffering tremendously counts as better than a somewhat happy population, provided the latter is large enough. If you prefer a world filled with suffering in this scenario, we have strayed very far from the idea of welfare maximization. If you are having difficulty imagining this, you can look at the image below:
Once again, this seems even more counterintuitive than the repugnant conclusion. Net welfare is not only higher in A, it is actually positive, whereas net welfare in B is negative. Doesn’t it seem right to prefer a world where the average life is worth living over a world where people are tormented?
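Formally, the critical level view subtracts a fixed threshold $c > 0$ from each life (my notation again):

$$W_{\text{CL}}(P) = \sum_{i \in P} (w_i - c)$$

Lives barely worth living have $0 < w_i < c$, so each one now contributes negative value, which is what blocks the repugnant conclusion. But the same fact generates the Strong Sadistic Conclusion: a large enough number of such lives sums to an arbitrarily large negative total, eventually worse than any fixed world of tormented people.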
The Person-Affecting View
Perhaps it’s the case that ethical decisions only matter when they affect living people. The person-affecting restriction is the idea that “An outcome cannot be better (or worse) than another unless it is better (or worse) for someone” (Chappell et al., 2022). Jan Narveson describes this idea as being “in favour of making people happy, but neutral about making happy people” (Narveson, 1973). This view would not allow for decisions between hypothetical world states of entirely possible people. We can’t compare Z to A in this scenario. However, we can meaningfully talk about potential changes in an existing population. Huemer (2008) describes a counter-intuitive result of this idea:
Narveson’s view has counterintuitive consequences. Suppose it were possible to slightly increase the welfare of presently-existing people while creating ten billion new people all of whom would lead lives of constant agony. On Narveson’s view, this would not be worse than the actual world, for it could be worse only if it were worse for someone. By hypothesis, it would be better for the presently-existing people. And it would be neither better nor worse for the ten billion new sufferers, for they have no welfare level at all in the actual world. Since, on Narveson’s view, the proposed change would benefit some actual people while harming no one, we would have to view it as an improvement.
One can try to escape this viewpoint by endorsing the procreative asymmetry—the view that creating suffering people is bad, but creating happy people is neutral. But you can probably guess this results in ridiculous implications as well. For example, you would prefer B to A in the image below.
Variable Value Theories
These theories hold that there are diminishing returns to additional people, so that something close to the total view applies for small populations and something close to the average view applies for larger ones. Chappell et al. (2022) describe some issues:
First, in approximating the average view at large population sizes, they risk susceptibility to the same objections. So, to avoid approving of adding (above-average) negative lives to the world, variable value theorists must invoke an asymmetry according to which only the value of positive lives diminishes but not the disvalue of negative lives. Adding negative lives to a world always makes the world non-instrumentally worse, on such a view, even if it happens to improve the average. However, such an asymmetry leads to an analogue of what Parfit calls the absurd conclusion: that a population considered to be good, with many happy and few miserable lives, can be turned into a population considered to be bad merely by proportionally increasing the number of both positive and negative lives (Parfit, 1984, ch. 18). To escape this objection, variable value theorists must allow additional good lives to sometimes compensate for additional bad lives, without introducing further unintended consequences that undermine their view. This is no easy task (see: Chappell, 2021).
Huemer (2008) describes an additional issue with variable value theories called the Egyptology objection:
Furthermore, all Variable Value theories face the Egyptology objection: these theories imply that the value that a person’s life adds to the world can depend upon how numerous and/or how prosperous are the members of some remote society that has no interaction with the person in question. Thus, on a Variable Value theory, how much reason I now have to produce children depends in part on how happy the ancient Egyptians were and how many of them existed—even when the facts about the ancient Egyptians have no bearing on how my children’s lives would go, nor on how anything else in the future would go. This seems absurd; as Parfit observes, ‘research in Egyptology cannot be relevant to our decision whether to have children’ (Parfit 1984, p. 420).
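A generic variable value function makes both the structure of these theories and the force of the Egyptology objection explicit. This is a sketch of the general shape, not any particular author’s proposal:

$$W_{\text{VV}}(P) = f(|P|) \cdot \overline{w}(P)$$

where $\overline{w}(P)$ is average wellbeing and $f$ is increasing but bounded, so the theory behaves like the total view when $|P|$ is small and like the average view when $|P|$ is large. Because $f$ takes the headcount of everyone who ever lives as its input, the marginal value of one additional life depends on how many people existed anywhere, at any time, ancient Egypt included.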
Non-Human Concerns
If we begin weighing the value of worlds with strong consideration of the quantity of humans in existence, what do we do about other sentient beings? These sorts of functions only consider the human population, but surely the suffering and well-being of animals matter. If we are averaging, do we include chickens? What about rats? Nematodes? Insects? If you’re only willing to consider humans, do you count fetuses? What about embryos?
You could totally disregard animal well-being, but then torturing a puppy for fun could be a net good, which seems wrong. If you count animal utility but not animal quantity, you get absurd implications from average utilitarianism. For example, if we think animal life is good on net, it would be good to rapidly shrink the human population: as it approached zero, average welfare would approach infinity (a sketch below makes this explicit). Similar paradoxes could be constructed for the other population ethics viewpoints above regarding animals.
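The shrinking-humanity implication is easy to see if animal welfare enters the numerator of the average but not the denominator (a deliberately naive formalization, just to illustrate the problem):

$$W_{\text{avg}} = \frac{\sum_{\text{humans}} w_i + T_{\text{animals}}}{n_{\text{humans}}}$$

If total animal welfare $T_{\text{animals}}$ is positive and fixed, this ratio grows without bound as $n_{\text{humans}}$ approaches zero.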
VI. On Maximizing Welfare
Of all the above viewpoints, the repugnant conclusion seems the most reasonable in my view. Utilitarians should like the total view because it maintains the property that the higher utility outcome is the better one. This feels quite close to the true desire of utilitarianism—to prefer higher utility outcomes. It seems odd to me for a utilitarian to be asked “Here is a scenario with two possible outcomes, A and B. Outcome B results in significantly higher wellbeing than outcome A. Which would you pick?,” and to have her respond with “I don’t know” because she doesn’t know about the allocations and quantity of people involved.
In my mind, that shouldn’t really matter. If the only good thing in the world is human flourishing, then it seems weird to have other side considerations such as the distribution of that flourishing. I have described to utilitarian-minded people my belief that truth is inherently good, and they have disagreed, responding that truth is only good insofar as it results in good consequences. I think the same response applies to utilitarians who want to avoid the RC: avoiding absurdity is only good insofar as it helps you increase welfare. What other terminal goal do you have?
VII. Same Premise, Different Implications
Will MacAskill believes that it’s really important to take the future seriously—a viewpoint called longtermism. The full definition of longtermism is: “(i) Those who live at future times matter just as much, morally, as those who live today; (ii) Society currently privileges those who live today above those who will live in the future; and (iii) We should take action to rectify that, and help ensure the long-run future goes well.” If he started a lengthy essay with the assumption that “those who live at future times matter just as much, morally, as those who live today” and argued in favor of preventing climate change, I think many people in the rationalist crowd would be receptive.
In this context, the premise wouldn’t be so controversial. But apply it to reach more conservative conclusions, like that preventing abortion is good or that boosting fertility is important, and they become less receptive. It seems reasonable to ask why I should care a whole lot about trillions of potential people in the distant future if I shouldn’t care about billions of people in the near future. Perhaps you reject longtermism and I haven’t addressed your exact scaling function. In that case, describe it in the comments. All I ask is that you lay it out specifically and explain why you hold the position.
Here is an example of someone taking fertility seriously from an EA perspective:
>There has to be some ecological capacity, but I don’t think we’re near it. Life keeps getting better for future generations, not worse
I think this statement oversimplifies ecological capacity. At an extreme there is a theoretical maximum ecological capacity if every human being ate vat-grown mold to absolutely maximize calorie production, but that is unrealistic. More practical ecological capacity has increased markedly with advances in irrigation, GMOs, mechanized agriculture, fertilizer, pesticides etc., leading us to have more arable land and it being more productive than ever before.
However, this isn't free. Maximizing agricultural productivity entails significant environmental side effects in terms of destroying old growth forests, fertilizer runoff etc. which are simply mandatory to maintain our current population, let alone grow it. A smaller population could live at an ecological capacity for a lower level of agricultural development, whereas we force ourselves to live at a harmful high level of agricultural development.
There's a pretty fundamental philosophical objection that undermines a lot of these arguments. Welfare doesn't exist in abstract, it's always welfare for some person. A person who isn't born doesn't exist, they have no welfare to count. If a parent is wondering whether to have one child or two, in the future where there is only one child, the child that wasn't born isn't disadvantaged as they don't exist. A world with 100 billion people does not yet exist, those potential excess people aren't harmed by not coming into existence because there is no person to harm.
Furthermore, moving to a future with a lot less people is more sustainable for the human resource base. Our current lifestyle relies on finite resources that become more costly to extract every year. Not to mention the loss of forest, animals, etc.