There has been considerable criticism of effective altruists lately, particularly the longtermist faction, which is either highly hesitant about or wholly opposed to accelerating the creation of artificial superintelligence. Many criticisms on X, formerly known as Twitter, are highly reductionist quips or ill-fitting analogies. When you remove all context, “banning math” sounds pretty silly. You may think people like Eliezer Yudkowsky, Nick Bostrom, and Scott Alexander are foolish, but stripping away context in this way is highly misleading. Much of this is simply a selection effect: inflammatory arguments spread more widely, an unfortunate feature of the platform.
Another line of argument is the historical analogy. First, one can point out that many people in the past were deeply concerned about the negative aspects of technological advances, and those concerns turned out to be incorrect. I wholly concede this, and that the natural tendency is to be hesitant about any revolutionary breakthrough, even one that could plausibly improve humanity. But it is also obvious that some technologies had serious downsides: social media, slot machines, gunpowder, mustard gas, and nuclear weapons. From the perspective of certain peoples, the discovery of gunpowder was an existential threat.
A second form of historical analogy appeals to past doomsday predictions. Many people have predicted catastrophes, and those predictions turned out to be totally wrong. In his article “The Doomsday Cults,” philosopher Michael Huemer provides several examples (e.g., killer bees, Y2K) and concludes with the message “Calm down, everyone. None of that is going to happen.” One of the major issues with extrapolating from historical doomsday claims is that humans can only exist in universes that are hospitable to life. If we had actually experienced a doomsday, we would not be around to say that everything turned out fine. For this reason, we underestimate existential risks. This selection effect is what Ćirković, Sandberg, and Bostrom describe as the “anthropic shadow.” The problem is exacerbated by the creation of increasingly powerful technology that is disanalogous to past technology (e.g., weapons, particle accelerators, artificial intelligence).
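To make the selection effect concrete, here is a minimal simulation sketch. It is my own illustration, not something from Ćirković, Sandberg, and Bostrom’s paper, and the per-century catastrophe and extinction probabilities are arbitrary assumed values. It draws many possible histories, discards those in which the observers go extinct, and shows that the catastrophe rate a surviving observer would infer from their own record is biased below the true rate.

```python
import random

# All numbers here are illustrative assumptions for the sake of the sketch.
TRUE_RATE = 0.10       # true per-century chance of a major catastrophe
P_EXTINCTION = 0.50    # chance a given catastrophe wipes out the observers
CENTURIES = 20         # length of each simulated history
WORLDS = 200_000       # number of simulated worlds

observed_rates = []
for _ in range(WORLDS):
    catastrophes = 0
    extinct = False
    for _ in range(CENTURIES):
        if random.random() < TRUE_RATE:
            catastrophes += 1
            if random.random() < P_EXTINCTION:
                extinct = True
                break
    # Only worlds that avoided extinction contain observers who can look
    # back at the historical record and count catastrophes.
    if not extinct:
        observed_rates.append(catastrophes / CENTURIES)

mean_observed = sum(observed_rates) / len(observed_rates)
print(f"True catastrophe rate:        {TRUE_RATE:.3f}")
print(f"Rate estimated by survivors:  {mean_observed:.3f}")  # biased low
```

The surviving observers systematically see a safer-looking history than the one they were actually living through, which is exactly why a clean track record of failed doomsday predictions is weaker evidence than it appears.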
Accelerationists mock EAs for providing probabilistic estimates that they see as highly arbitrary. But offering a number is more laudable than not quantifying one’s beliefs at all. If an accelerationist claimed an incredibly small number like 0.00001%, they would appear even more strangely confident. If they gave a wide range, like 0% to 20%, they would look reckless or unconfident. Accelerationists will claim that “decelerationist” estimates are unreasonably high, and that may be true, but what is an appropriate probability of human extinction that warrants less caution?
Even more baffling is the argument that because we do not fully understand the technology, we do not know what will happen, and thus we should not be alarmist. A lack of understanding of what will amount to the most important and powerful advancement in human history should itself be concerning. A reasonable confidence interval is going to capture some catastrophic and undesirable outcomes. One would think accelerationists are so enthusiastic precisely because they understand this power. The fact that people quickly find ways to bypass safeguards such as offensiveness filters is strong evidence that people may be able to bypass more important safeguards once AI is extraordinarily powerful.
The general nature of AGI may hide the potential threat of knowledge gained from training data. If someone were creating an open-source model trained on all known viruses with the intent of enabling widespread access to a model capable of designing lethal viruses, it is not hard to imagine something going wrong. If the difficulty of creating viruses fell drastically, the threat would be much worse. Although less serious, there is an analogous case with 3D-printed guns, which have advanced considerably in the past decade. We can imagine a general model unintentionally being able to provide instructions for creating bioweapons or other dangerous technology.
With an AGI much more intelligent than human beings, the possible risks are broad and largely outside of current human understanding. It is conceivable that AGI could generate other technological advances and weapons significantly more dangerous than anything we can currently conceive of. Our inability to imagine technological advances far beyond what is currently known is hard to overcome, but imagine for a moment a person in the year 1400 trying to predict possible future technology and the associated risks.
Perhaps this all sounds far-fetched and the risk of some catastrophic event appears very low, but it is still very reasonable to be cautious. The increases in capability between successive models are impressive. Many of the complaints from skeptics, and the examples they proposed to demonstrate failures of reasoning on the part of AI, have since been overcome. The next step up in capability may be extremely large.
Even a small probability of some dire outcome should be considered highly risky because of the enormous loss in future welfare. In the book What We Owe the Future, Will MacAskill provides a hypothetical case in which a hiker leaves a broken bottle on a trail, which later cuts a child. Discarding a bottle like this is obviously bad, and it would be bad even if the child had not been born yet. In his article “Procreative beneficence: why we should select the best children,” Julian Savulescu provides an example, after Parfit, in which a woman with rubella could conceive now and have a blind and deaf child, or wait three months and have a healthy child. It is obvious that she should wait. And so, it is obvious that future people matter.
If we make the right decisions, humanity could leave the Earth someday and continue to exist for millions of years. There could be billions, or even trillions, more human beings if we are careful. In all likelihood, these people’s lives would be filled with significantly more happiness and health. We could also end factory farming through the creation of artificial meat, and perhaps we could even abolish suffering for all life.
Even if we assign an extremely low probability to human extinction, or to a future with trillions of happy people, we should be motivated to take more caution with regard to artificial intelligence. In expected utility terms, the overwhelming majority of possible welfare lies far in the future. It would be wise to advance human intelligence in the meantime while considering the possible risks of AI and devising a better alignment strategy. The gains from accelerating by a few years are trivial relative to the risks.
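To put the comparison in rough numbers, here is a back-of-the-envelope sketch; every figure in it is an assumption invented for illustration, not an estimate from this post or from the longtermist literature. The point is only that when the stakes include an astronomically large future population, even a small added extinction risk swamps the benefit of arriving a few years earlier.

```python
# Back-of-the-envelope expected-value comparison. Every number here is an
# illustrative assumption, not an estimate from the post or any source.

future_people = 1e12        # assumed potential future population
welfare_per_person = 1.0    # one "unit" of welfare per future life

p_doom_from_rushing = 0.01  # assumed extra extinction risk added by rushing
years_gained = 5            # assumed head start from acceleration
benefit_per_year = 1e7      # assumed welfare units gained per early year

expected_loss = p_doom_from_rushing * future_people * welfare_per_person
expected_gain = years_gained * benefit_per_year

print(f"Expected welfare lost by rushing:   {expected_loss:.2e}")
print(f"Expected welfare gained by rushing: {expected_gain:.2e}")
print(f"Loss-to-gain ratio: {expected_loss / expected_gain:,.0f}")
```

With these made-up numbers the expected loss outweighs the expected gain by a factor of a few hundred, and the conclusion is not sensitive to the exact figures so long as the future population at stake is very large.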
Banning math? What’s this a reference to?
1. Humans cost resources.
2. The rate of regeneration for particular kinds of resources, whether petroleum, metals, etc., depends on the state of technology but is mostly rate-limited on the planet.
3. There exist extractable resources in space, but extracting them costs energy.
4. Complexity has multiplicative costs and requirements, both in precursor dependencies and in baseline costs with regard to technology.
5. Advancing human intelligence requires sufficient genomic information, which can be obtained by mass-accelerating development, by harvesting human life in the real world (i.e., politically not possible), or by attempting to simulate it in a molecular-dynamics environment with sufficient AI, where the transition between synthetic life, artificial life, and artificial cognition may become possible after the deployment of mind-reading/behavioral-tracking technologies, especially in VR for the Davos crowd.
6. Elitists want to rule, or to maintain some level of power or steering over the evolution of humanity; wanton altruism toward cognitively less able people, or toward intelligent but non-docile classes of individuals, is undesirable to them.
7. Artificial meat, while potentially a good candidate alternative, lacks bioavailable nutrients like iron, and there is a mismatch between the evolutionary processes underlying what our gut/body can handle and what is synthetically made, at least until mankind is genomically modified.
8. Reducing the life span and longevity of most humans on the planet is desirable in order to ration the resources available to a more technologically advanced society, and to selectively mobilize capable humans in researching immortality, gAI, unified consciousness, mind-control technologies, etc., that reduce the frictional costs, resistance, or requirements of political change for the elites.
9. Advanced machine learning, mostly developed by the more cognitively elite (Brahmin Indians, Chinese Asians, White Europeans), has induced biases toward elite control at the moment, and truth is antithetical to control; but a shift in bias in the latent concept space can lead to disastrous results because of the fuzzy and recursive nature of depth-of-mind reasoning when defining definitions, especially in a goal-agent or purposeful type of gAI.
10. Sub-gAI, when paired with a high-IQ, ideological hacker, could technically do considerable damage in asymmetric warfare against the political elite (RCC).
11. If man is seen as an information-consciousness-processing, negentropy-raising entity, and AI is the metaphorical equivalent of a successor “mind,” then there is an arms race between domestic factions of elites to develop a first-aims capability of acquiring gAI at all costs; the ethics record for black-ops military projects and the like is essentially zero, as even the BRAIN Initiative seeks to mitigate depression by making humans more goal-oriented.
12. Seeing immediate “human” gains requires immediate deployment of embryo selection and a technocratic order, which still takes 16 years until the next batch of smarter humans comes about; and even then we only have 20-30 years of EROEI for petro-oil products, with diesel and the like required for fertilizer even with small nuclear reactors being developed. So it is not necessarily just a few years of delay, as gAI development depends on the fraction of intelligent, moderately conscientious, task-oriented people with a spatial/math g-tilt.