WEF Global Risks Report 2024: AI and Quantum Computing

The World Economic Forum (WEF) has highlighted artificial intelligence (AI) and quantum computing as emerging global risks, expressing concern about their potential impact on many aspects of society. The WEF warns that the deployment of AI could have adverse effects on healthcare, information integrity, and labor markets. Moreover, market concentration and national-security incentives may limit the effectiveness of guardrails for AI development, raising questions about the scope and oversight of AI technologies.
The report underscores the potential for new divides to emerge between those who can access or produce the technology resources and intellectual property (IP) driven by advanced AI and those who cannot. It also recognizes that deeper integration of AI into conflict decisions could lead to unintended escalation, while open access to AI applications may asymmetrically empower malicious actors, posing risks to global stability.
The document further emphasizes that the unbridled proliferation of increasingly powerful, general-purpose AI technologies will bring about profound transformations in economies and societies over the next decade. While acknowledging the productivity gains and breakthroughs these technologies may deliver in areas such as healthcare, education, and climate action, the report also highlights the major societal risks associated with advanced AI. These risks are expected to interact with advances in other frontier technologies, including quantum computing and synthetic biology, amplifying their adverse consequences.
A notable concern is that the precautionary principle has rarely been applied in AI development to date, with regulators often prioritizing innovation over prudence. The report notes that the rapidly evolving nature of AI, combined with society's growing reliance on it across sectors, outpaces both our ability to understand the technology itself, often referred to as the "Black Box Problem," and our ability to create effective regulatory safeguards, known as the "Pacing Problem." The speed of advances, the market power, and the strategic importance of the AI industry will continue to challenge governance institutions, potentially endangering political systems, economic markets, and global security.
The report also examines the escalating use of AI in military contexts, highlighting potential threats to global stability over the next decade. The integration of machine intelligence into conflict decision-making is identified as a severe risk. AI is expected to boost cyber-warfare capabilities, enabling autonomous offensive and defensive systems with unpredictable impacts on networks and connected infrastructure. The development of AI-driven weapons systems, with increasing autonomy across land, air, and sea-based platforms, raises concerns about international governance. The absence of established agreements on the use of autonomous weapons systems, coupled with the prospect of autonomous decision-making on lethal actions, poses significant risks, including miscalculation and inadvertent conflict initiation.
The most critical risk identified in the report lies in the application of AI to nuclear weapons. While governments claim to maintain human control over these systems, the report notes that AI may offer advantages in decision time, potentially compressing launch decisions to silicon speed. However, AI-enabled launch systems could erode strategic stability, as rival states would have little ability to detect or anticipate such automated decisions. The incorporation of AI into nuclear weaponry increases the risk of accidental or intentional escalation over the next decade, carrying potentially existential consequences.
In contrast to the upstream tech stack, the downstream application layer of AI is considered a more competitive market. Although generative AI is among the most powerful emerging dual-use technologies, the economic and technical barriers to accessing its applications are lower than for other frontier technologies, such as geoengineering and quantum computing. The report raises concerns about sudden and widespread access to generative AI applications: internet access effectively equates to access to these models. Malicious actors could leverage this extensive body of knowledge to conceptualize and propagate dangerous capabilities, threatening human rights and safety in ways ranging from misinformation and malware to potential biosecurity risks.