Fears about AI’s existential risk are overdone, says a group of experts

The Economist



IN THE PAST year, as the startling capabilities of artificial intelligence (AI) have emerged into public view, attention has been drawn to the existential risk, or x-risk, that the technology may pose. The concern is that computers endowed with superhuman intelligence might destroy most or all human life. The majority of researchers raising the alarm are sincerely motivated by concern about AI-related risks, present and future. However, calls to action to mitigate superintelligent-AI x-risk may both impede the development of beneficial uses of AI, of which there are many, and distract regulators, the public, companies and other researchers from addressing important shorter-term risks.

Superintelligence is not required for AI to cause harm. That is already happening. AI is used to violate privacy, create and spread disinformation, compromise cyber-security and build biased decision-making systems. The prospect of military misuse of AI is imminent. Today's AI systems help repressive regimes to carry out mass surveillance and to exert powerful forms of social control. Containing or reducing these contemporary harms is not only of immediate value, but is also the best bet for easing potential, albeit hypothetical, future x-risk.

It is safe to say that the AI which exists today is not superintelligent. But it is possible that AI will be made superintelligent in the future. Researchers are divided on how soon that may happen, or even whether it will. Still, today's AI models are impressive, and arguably possess a form of intelligence and understanding of the world; otherwise they would not be so useful. Yet they are also easily fooled, liable to generate falsehoods and prone to failures of reasoning. As a result, many contemporary harms stem from AI's limitations rather than its capabilities.

It is far from obvious whether AI, superintelligent or not, is best thought of as an alien entity with its own agency or as part of the anthropogenic world, like any other technology that both shapes and is shaped by humans. But for the sake of argument, let us assume that at some point in the future a superintelligent AI emerges which interacts with humanity under its own agency, as an intelligent non-biological organism.

Some x-risk boosters suggest that such an AI would cause human extinction by natural selection, outcompeting humanity with its superior intelligence. Intelligence surely plays a role in natural selection. But extinctions are not the outcomes of struggles for dominance between higher and lower organisms. Rather, life is an interconnected web, with no top or bottom (consider the virtual indestructibility of the cockroach). Symbiosis and mutualism, mutually beneficial interaction between different species, are common, particularly when one species depends on another for resources. And in this case, AIs depend utterly on humans. From energy and raw materials to computer chips, manufacturing, logistics and network infrastructure, we are as fundamental to AIs' existence as oxygen-producing plants are to ours.

Perhaps computers could eventually learn to provide for themselves, cutting humans out of their ecology? This would be tantamount to a fully automated economy, which is probably neither a desirable nor an inevitable outcome, with or without superintelligent AI. Full automation is incompatible with current economic systems and, more importantly, may be incompatible with human flourishing under any economic regime; recall the dystopia of Pixar's "Wall-E".
Luckily, the path to automating away all human labour is long. Each step offers a bottleneck (from the AI's perspective) at which humans can intervene. In contrast, the information-processing labour which AI can perform at next to no cost poses both a great opportunity and an urgent socioeconomic challenge.

Some may still argue that AI x-risk, even if improbable, is so dire that prioritising its mitigation is paramount. This echoes Pascal's wager, the 17th-century philosophical argument which held that it was rational to believe in God, just in case he was real, so as to avoid any possibility of the terrible fate of being condemned to hell. Pascal's wager, both in its original and AI versions, is designed to end reasoned debate by assigning infinite costs to uncertain outcomes. In a utilitarian analysis, in which costs are multiplied by probabilities, infinity times any probability other than zero is still infinity. Hence accepting the AI x-risk version of Pascal's wager might lead us to conclude that AI research should be stopped altogether or tightly controlled by governments. This could curtail the nascent field of beneficial AI, or create cartels with a stranglehold on AI innovation. For example, if governments passed laws limiting the legal right to deploy large generative language models like ChatGPT and Bard to only a few companies, those companies could amass unprecedented (and undemocratic) power to shape social norms, along with the ability to extract rent from digital tools that are likely to be critical to the 21st-century economy.

Perhaps regulations could be designed so as to reduce the potential for x-risk while also attending to more immediate AI harms? Probably not; proposals to curb AI x-risk are often in tension with those directed at existing AI harms. For instance, regulations to limit the open-source release of AI models or datasets make sense if the goal is to prevent the emergence of an autonomous networked AI beyond human control. However, such restrictions may handicap other regulatory processes, for instance those promoting transparency in AI systems or preventing monopolies. In contrast, regulation which takes aim at concrete, short-term risks, such as requiring AI systems to disclose information about themselves honestly, will also help to mitigate longer-term, and even existential, risks.

Regulators should not prioritise the existential risk posed by superintelligent AI. Instead, they should address the problems which are in front of them, making models safer and their operations more predictable, in line with human needs and norms. Regulations should focus on preventing the inappropriate deployment of AI. And political leaders should reimagine a political economy which promotes transparency, competition, fairness and the flourishing of humanity through the use of AI. That would go a long way towards curbing today's AI risks, and would be a step in the right direction in mitigating more existential, albeit hypothetical, risks.

Blaise Agüera y Arcas is a Fellow at Google Research, where he leads a team working on artificial intelligence. This piece was co-written with Blake Richards, an associate professor at McGill University and a CIFAR AI Chair at Mila - Quebec AI Institute; Dhanya Sridhar, an assistant professor at the Université de Montréal and a CIFAR AI Chair at Mila - Quebec AI Institute; and Guillaume Lajoie, an associate professor at the Université de Montréal and a CIFAR AI Chair at Mila - Quebec AI Institute.
For a contrary view on AI and existential risk, see this article by Yoshua Bengio.