I totally forgot I hadn't finished reading this. One fundamental issue that strikes me is the inappropriate metaphor of humanity as a teenager.

Teenagers do stupid stuff because they don't know any better, and might make bad trades of current good against future good. OK, fine, that makes sense.

The trouble starts in that teenagers have the example of adults to tell them what is a good idea vs. a bad idea in terms of trade-offs. Humanity doesn't have another species to listen to, one that could say, "Oh, yeah, you don't want to be doing that! I did that and really regretted it." We're it. We don't know what the good outcomes are, or what the bad outcomes are, or really how to steer from one to the other, and no one can offer advice because we don't have another species' experiences to work with. All we have is our own history and our own intuitions about the future. In other words, humanity is an adult, trying to figure things out as we go along.

Which leads to the second problem, one very close to human experience: adults treating other adults as teenagers (minors) never goes well. MacAskill puts himself in the position of advisor to the human species, the species he views as a teenager. MacAskill, in other words, puts himself in the position of the older, successful non-human species offering advice to the screw-up humans.

It takes a certain amount of hubris for an adult to tell another adult how to run their life as though they were a minor. It perhaps takes a lot more to tell an adult species how to run its life when you are merely a member of that same adult species.

I am assuming, here, that MacAskill is not some sort of Eldar or something. Maybe I am wrong.


Those are interesting thoughts. I think you're getting at the issue of what authority effective altruists have and how they can be sure they are correct. There is also a question of liberty: shouldn't people have choices over their own lives?

I guess I would say I trust Will MacAskill more than a lot of the systems we have in place currently.


I think the choice of metaphor is telling, yes. It suggests the need for guidance by wiser hands, and moreover that those wiser hands exist. (And presumably have the surname MacAskill...) That's a bad sign in my book, as "we are all doomed unless you follow me!" is the hallmark of people who love control more than they love truth or humanity.

It makes perfect sense to me to say "Look, we don't know all the relevant trade-offs, and we need to keep an eye on the fact that humanity exists a lot longer than its constituent parts (e.g. us), so we ought to make sure we don't wreck the future for our kids." It makes a lot less sense to follow that up with "So we had better start controlling how many of those kids we have, etc., etc."

As you put it, they (EA people and frankly all the "let us/government run everyone's lives" types) are optimizing around a lot of very questionable assumptions. For example, Malthus and his advocates have been wrong pretty consistently since, well, Malthus was writing, yet people still think this time will be the time he's right. Ditto for socialism, communism, Marxism working this time.

Now, I don't really know MacAskill from Adam, and haven't read his book, so he might well have better intuitions around how society's behavior should be optimized for the long term. My problem is more with the very notion that we can optimize for humanity's long term whatever. Worse, that having figured out the optimal course, we can control humanity such that it is achieved. Optimizing complex dynamic adaptive systems is a fool's game at the best of times, and vastly worse when you don't have anywhere near the information to do it. In reality it will tend to become a tool of arbitrary authoritarianism, just like centralized optimization of the economy tends to do.


Interesting. I wonder how you would feel about the book. I don't get a very authoritarian vibe from MacAskill, and I didn't see anything about controlling how many children people have. I guess my stance is that some people are going to be guiding the future regardless, and that EA is a positive influence. I don't get totalitarian vibes like I do with a lot of other movements, and EA is usually self-critical.


Yeah, I am drifting into talking about two different things: MacAskill, whom I am only familiar with through what you and Scott A have written, and EA in general, which I am more broadly familiar with, although not in a deep-dive sense. I can't be fair to MacAskill without reading the book, but I will say that any time someone starts talking about whether the number of people we have and are creating is optimal, and what we can do to change that, I start to get really itchy. It is one of those topics where there are probably good points to consider and think about, but it seems that every time someone does, they come up with bad ideas, and when they try to operationalize it, it gets horrible. Huge red flags start waving for me.

As to EA... I don't know how positive they really are as a group. On the one hand, their motte of "If you are going to give to charity, you really ought to try and give to ones that actually accomplish what good they promise to do, and we are looking to quantify that" is a good one. A Consumer Reports sort of deal to help people get more good for the buck is a worthwhile enterprise.

On the other hand, though, the bailey of "We know what kind of good you SHOULD want to do" is, I think, a lot more fraught with hubris and unacknowledged assumptions. A lot of what makes EA cult-like stems from there, I think. Once one starts making those sorts of statements, picking goals instead of examining how best to achieve goals, it has moved from objective measurement to subjective judgment, and yet their language is always in the vein of certain knowledge. Just replace divine revelation with utility-maximizing math, and you are there.


Great article!

I’m not sure about the concept of “value lock-in.” We don’t have information on the best values for future centuries, nor are we in a position to make reasonable predictions.

It seems more important to maintain institutions and values that allow for dialogue and reassessment, like protecting free speech and promoting authenticity, rather than establish eternal values.


The argument against longtermism is that the future, especially the far future, is so uncertain that it's hard to know whether our utility calculations have any value.

If you'll permit a nerdy fictional example: there is an episode of Star Trek where a group of genius autistics conclude that:

1) It is inevitable that the Federation will lose its war with the Dominion.

2) The best thing to do, therefore, is to give classified info to the Dominion to end the war early and save billions of lives.

3) Even if the war is lost, there will inevitably be a rebellion centered on Earth that overthrows the Dominion in the far future.

To my knowledge, the counterargument in the episode isn't that these calculations are wrong (though we do find out they are in later episodes*); the protagonist even admits the math is correct. It is only that one shouldn't accept such calculations regardless.

*The prediction is wrong in two ways. First, the Federation does win the war at the end of the series. Second, had they lost, the Dominion believed that eradicating Earth's entire population was the only way to keep control.
