12 Comments

It seems to me Alexander is doing roughly the same thing I accuse Rothbard of in my libertarian moral psychology post (https://chefstamos.substack.com/p/towards-a-better-libertarianism), i.e., starting from an ethical theory that seems prima facie sensible to him and then just straight up refusing to engage seriously with its implications or philosophical objections. Which is fine if someone is doing ethics for their own personal benefit, but I'm not sure why I should care what they think about ethics if they write off obvious objections as "philosophical muggings" (I'm sure Rothbard would have appreciated that term).

author

Ah, yeah, I remember that article. It's good to see writing like this on libertarianism. Nice point.


Nice essay. Admittedly, I have trouble getting past "How are you even measuring all these people's utility, much less adding it up and doing summary statistics on it, then figuring out how your actions will impact those numbers," but your examples helped hold my interest.

I also applaud you for addressing Scott's (and Hanania's) rather repugnant recent tendency towards "Hell with it, I do what I want, but will keep calling it morally right!" It is one thing to say "Look, I don't know what the big-picture effects are, but I am going to try to do what seems best within arm's reach of me anyway," and quite another to say "I have this extremely strong sense that I can accurately decide whether it is better to add or remove a few million people in the future, but I don't feel the need to follow that process to its logical conclusions if I don't like them." Apparently all the cool kids are just arbitrarily deciding what it is to be good and refusing to ever examine those decisions. Very disappointing.

author

Thank you!

I think it is theoretically possible to do this, although it's extremely difficult. These scenarios assume we can measure things perfectly. It's already so difficult to talk about population ethics that introducing the variable of how we're going to measure this stuff makes things incredibly complicated. I expressed some of my uncertainty about measuring the welfare of wild animals, especially something like a nematode.

I think we all do this to some extent, but we should try to avoid it. And we shouldn't really accept that as a philosophical answer.


I think it isn't even theoretically possible, in fact. At that point one is just assuming omniscience, at which point we are no longer discussing a system of ethics for humans in a very important way. One might as well assume omnipotence too, and say "I will simply ensure that everyone is better off if there are more/fewer people, and I know exactly how to do that." Because, you know, you would.

I think any sort of ethical system for humans has to start with "We know almost nothing about the state of the world, let alone the universe, and how our actions will affect it in the short or long run," and then build from there. It took millennia for humans pursuing commercial endeavors to be viewed as probably beneficial to people other than the merchant, and even then there are a lot of people who somehow aren't sure, despite how annoyed they are when the shops aren't open. If it takes that long for people to get a grasp on the implications of day-to-day, commonplace behaviors, I don't think questioning whether India is better off with 1 billion people or 1.4 billion people is going to matter. Likewise, I am not sure we are good at figuring out whether helping 10 people in the USA get a job is better overall than helping 100 people in Africa have a better well or something.

(Now, if it were starting a company so that 100 people in Africa had jobs installing better wells that people paid for, that'd be something! Funny how people like philanthropy more than starting a business that does the same thing but pays for itself.)


Great post. I enjoyed reading as your intellectual chainsaw ground up the strange pragmatico-utilitarianism they are trying to push.

author

Thanks. I think your analogy is funny.

Comment deleted (Aug 25, 2022; liked by Ives Parr)
author

I think if variation between populations were high and you had good reason to think that one culture would benefit more from a transfer, it would be better to give it to the culture that would benefit more, everything else held equal. That's a difficult thing to discover, I would think. I don't think this entails anything about a master race, though, nor do I think that it would necessarily be consistent across time.

I don't think this is CRT with algorithms, but if it is in some strange way, then fine, it's "CRT with algorithms." But at least it's using empirical evidence to try to help people and make the world better rather than embracing power-knowledge, standpoint theory, and so forth to bring forth strange grievances. Effective altruism is saving actual lives. If CRT were doing that much good, I would be more sympathetic, but I think CRT is just harming race relations.

Thanks for the comment.

Comment deleted
author

Yeah, I think figuring out true tradeoffs is difficult. That isn't a reason not to try, though. It seems possible to figure it out by evaluating the levels of suffering in people's own lives. For example, I would rather get a splinter than stub my toe. We all take risks that could destroy a sacred value (our life) in order to obtain lesser values (I want ice cream, it's 10 pm, and I have to drive my car to go get it, so there's a 0.00003% chance I die).
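To make that tradeoff concrete, here is a minimal expected-value sketch of the ice cream example. The 0.00003% figure comes from the comment above; the utility numbers are made-up assumptions, not measurements.

```python
# Minimal expected-value sketch of the late-night ice cream run.
# All utility values are made-up assumptions for illustration.
p_death = 0.00003 / 100        # the 0.00003% chance from above, as a probability
value_of_life = 10_000_000     # assumed utility of staying alive (arbitrary units)
value_of_ice_cream = 5         # assumed utility of the ice cream

expected_loss = p_death * value_of_life  # 3.0 units of expected utility lost
if value_of_ice_cream > expected_loss:
    print("Drive for the ice cream")  # a small gain can outweigh a tiny risk to a sacred value
else:
    print("Stay home")
```

With these (arbitrary) numbers, the small pleasure outweighs the tiny expected loss, which is the point: we trade against sacred values at the margin all the time.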

Comment deleted
author

I personally believe that we should have individual freedom to make choices that are not always utility-maximizing. I believe in weak natural rights. Some, like some total utilitarians, reject this view. I think smelling good is morally good to some extent but not obligatory. I'm not personally a contractualist, but I can see how you would argue from that perspective. I hope I've addressed your concerns.

Comment deleted
author

I think we can evaluate societies as good or bad, and we can evaluate choices as ethical or unethical. Producing a lower-utility society is unethical according to a total utilitarian who believes that utility-maximizing choices are morally obligatory. The society existing is not unethical, but it is potentially not optimal. From this perspective, it could be the case that not having children is unethical for some. It depends on where you draw the line between what is morally obligatory and what is morally supererogatory. There are nuanced positions on this.

If you believe that we ought to spend all our time doing the most ethical action possible, then reproduction might be just one candidate among many. It might be better to spend all our time working to donate to GiveWell or some other charity instead. The morally perfect action is not necessarily reproducing, even if reproducing would be morally good. Usually this gets very tricky and complicated because of real-world obligations, so we abstract and talk about population ethics from a very reductionist point of view.

Comment deleted (Aug 24, 2022; liked by Ives Parr)
author

Thank you!

Maximum possible happiness is the best good. When making choices between possible worlds, you want to compare which is better, because both could be good in some sense. I think that consequentialists should pursue the maximally good world. They aren't necessarily fake, but I think that the total utilitarian view makes the most sense for reasons I laid out in the article.

I would personally prefer to live in the very happy small society. However, the trillion-person society would be ethically better. This is just the Repugnant Conclusion, which I accept. If I had to choose one to exist, I would choose the trillion-person society.
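For readers unfamiliar with the comparison, here is a minimal sketch of how total utilitarianism reaches the Repugnant Conclusion. The population sizes and per-person welfare numbers are purely illustrative assumptions, not figures from the article.

```python
# Illustrative only: all numbers are made-up assumptions.
small_society = {"population": 1_000_000, "welfare_per_person": 100}        # very happy lives
large_society = {"population": 1_000_000_000_000, "welfare_per_person": 1}  # lives barely worth living

def total_utility(society):
    # Total utilitarianism sums welfare over everyone: population * per-person welfare.
    return society["population"] * society["welfare_per_person"]

print(total_utility(small_society))  # 100,000,000
print(total_utility(large_society))  # 1,000,000,000,000 -> the trillion-person world ranks higher
```

Under the total view, the huge society ranks higher even though each life in it is barely worth living, which is exactly the conclusion accepted above.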
