St. Petersburg Paradoxing Yourself Into Nonexistence as a Great Filter
Why the fact that we haven't found aliens could mean Samuel Bankman-Fried is right about a risky utilitarian choice
Some people believe that knowing ethical truths doesn’t provide you with empirical predictions. I don’t think this view is correct. If there are moral facts that can be discovered through reason, then I would expect people who spend a lot of time reasoning to be more likely to discover them. I would also expect correct beliefs to be more widely held than your average false belief, especially among intelligent and thoughtful people.
The fact that when most people look into the night sky, they see stars gives me some reason to believe that stars exist. If I were the only one seeing little dots of light, I might suspect something was wrong with my vision. The fact that many people believe something is evidence for that belief. It is not definitive evidence, of course; large groups of people believe all sorts of crazy things! But if I have no idea, I am going with the more widely held belief.
I think this applies to ethical beliefs as well. If moral facts aren’t real, we would expect that extremely intelligent people would reject moral realism at above-average rates. And if we created a superintelligent artificial intelligence, it would be more likely to reject moral realism.
Instead of a superintelligent AI, we could consider superintelligent extraterrestrial life. Imagine a distant alien civilization visits us and tells us through their sophisticated translation device that “in our galaxy, we have decided that the most ethical choice is that which maximizes the expected welfare of sentient beings across time.” Does it seem reasonable for a virtue ethicist to treat that as a useless piece of non-evidence? I don’t think so. Utilitarians should feel vindicated if that happens.
Samuel Bankman-Fried, once a major EA donor and now known for the FTX collapse, had an interview with Tyler Cowen a few months back in which he bit a high-caliber bullet about utilitarianism. People have been digging this up again to suggest that it reflects how he could take such wild risks and destroy his company. Cowen was pressing him on the fact that you can construct a scenario in which the higher-expected-utility choice all but guarantees you destroy the world.
COWEN: Should a Benthamite be risk-neutral with regard to social welfare?
BANKMAN-FRIED: Yes, that I feel very strongly about.
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
If aliens came from outer space and took a look at this transcript and said “Bankman-Fried is 100% right,” then I would update toward Bankman-Fried being correct. It does seem incredibly counter-intuitive to put one’s own civilization at risk like this, but if the aliens said so, then I would need to be consistent and update my beliefs in that direction.
It also seems very strange that we don’t see any aliens. If we accept that you should keep playing the 51% game, then that might make a little more sense; it could be at least one explanation for the lack of aliens. We could imagine that every advanced civilization eventually reaches a point where all of its philosophers and leaders agree that expected utility maximization at all costs is the right move. At some point following this consensus, a series of extremely risky but high-payoff choices present themselves, and these civilizations make the correct choice and destroy themselves. A rough sketch of the arithmetic follows below.
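To make the arithmetic behind Cowen’s point concrete, here is a minimal sketch (my own illustrative numbers, not anything from the interview) of what repeatedly taking the 51% double-or-nothing bet does. A strictly risk-neutral expected-value maximizer keeps playing, because the expected number of worlds grows as 1.02^n, while the probability that anything survives shrinks as 0.51^n:

```python
# Sketch: repeatedly take a bet that doubles the world with probability 0.51
# and destroys it with probability 0.49 (hypothetical numbers from the thought
# experiment, not a claim about any real decision).
p_win = 0.51

for n in [1, 10, 50, 100]:
    survival = p_win ** n               # chance anything is left after n rounds
    expected_worlds = (2 * p_win) ** n  # expected number of "Earths" = 1.02^n
    print(f"after {n:>3} rounds: P(survive) = {survival:.2e}, E[worlds] = {expected_worlds:.2f}")
```

After 100 rounds the expected number of worlds is about 7.2, but the chance that anything is left is roughly 6 × 10⁻³⁰. That is the sense in which a civilization can St. Petersburg paradox itself into nonexistence while, in expectation, doing the right thing.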
When people consider a Great Filter as an explanation for why we see no alien life despite its seemingly high probability (the Fermi paradox), they tend to imagine that civilizations accidentally destroy themselves somehow, or never reach a certain threshold of intelligence because it’s so hard, or perhaps that life never comes to exist at all because it’s so improbable. A deliberate ethical choice, or series of ethical choices, is an odd candidate.
Obviously, the choice wouldn’t present itself as a literal coin flip. Maybe it would take the form of a trade-off between helping sentient beings now and reducing existential risk. Perhaps there are certain technologies, such as AI, with extremely high payoffs but the potential downside of absolute destruction. It’s unclear what exact form this would take because it’s incredibly speculative; I don’t know what dilemmas more advanced aliens face.
I don’t consider this whole scenario particularly likely, but I thought it was interesting and worth mentioning. In conclusion, my argument is that the apparent absence of aliens is at least some very weak evidence that “St. Petersburg paradoxing yourself into nonexistence” might be the right choice.
I have no idea what justifies risk-neutrality. It’s normal for animals facing a risk of predation to be risk-averse, and the same would likely apply to aliens. You can win 1,000 times, but you only have to lose once for the strategy to be proven erroneous. I’d assume aliens would have a very efficient way to manage and distribute risk, but they wouldn’t discount risk to zero entirely.
Why should we assume that highly advanced civilizations inherently have preferable moral views to those at a rising stage of development?
A highly advanced civilization has more wealth and resources to deal with the negative social consequences of low-quality moral systems -- e.g., crime or low social trust -- than a less developed but rising civilization, which cannot depend on previously accumulated material prosperity to prop up its social order.