7 Comments

I totally forgot I hadn't finished reading this. One fundamental issue that strikes me is the inappropriate metaphor of humanity as a teenager.

Teenagers do stupid stuff because they don't know any better, and might make bad trade-offs between current good and future good. Ok, fine, that makes sense.

The trouble starts in that teenagers have the example of adults to tell them what is a good idea vs. a bad idea in terms of trade-offs. Humanity doesn't have another species to listen to, one that can say, "Oh, yeah, you don't want to be doing that! I did that and really regretted it." We're it. We don't know what the good outcomes are, or what the bad outcomes are, or really how to steer from one to the other, and no one can offer advice because we don't have another species' experiences to work with. All we have is our own history and our own intuitions about the future. In other words, humanity is an adult, trying to figure things out as we go along.

Which leads to the second problem, one very close to human experience: adults treating other adults as teenagers (minors) never goes well. MacAskill puts himself in the position of advisor to the human species, the species he views as a teenager. MacAskill, in other words, puts himself in the position of the older, successful non-human species offering advice to the screw-up humans.

It takes a certain amount of hubris for an adult to tell another adult how to run their life as though they were a minor. It perhaps takes a lot more to tell an adult species how to run its life as though you belonged to some wiser elder species, when you are in fact a member of that same adult species.

I am assuming, here, that MacAskill is not some sort of Eldar. Maybe I am wrong.


Great article!

I’m not sure about the concept of “value lock-in.” We don’t have information on the best values for future centuries, nor are we in a position to make reasonable predictions about them.

It seems more important to maintain institutions and values that allow for dialogue and reassessment, like protecting free speech and promoting authenticity, than to establish eternal values.


The argument against longtermism is that the future, especially the far future, is so uncertain that it's hard to know whether our utility calculations have any value.

If you'll permit a nerdy fictional example: there is an episode of Star Trek where a group of genius autistics conclude that:

1) It's inevitable that the Federation will lose a war to the Dominion.

2) The best thing to do, therefore, is to give classified info to the Dominion to end the war early and save billions of lives.

3) There will inevitably be a rebellion centered on Earth that overthrows the Dominion in the far future, even if the war is lost.

To my knowledge, the counterargument in the episode isn't that these calculations are wrong (though we do find out they are in later episodes*); the protagonist even admits the math is correct. The argument is only that one shouldn't act on such calculations regardless.

*The prediction is wrong in two ways. First, the Federation does win the war at the end of the series. Second, had they lost, the Dominion believed that eradicating Earth's entire population was the only way to keep control.
