Former Google CEO Eric Schmidt, along with other tech leaders, has published a policy paper arguing against a U.S. government-led "Manhattan Project" approach to developing Artificial General Intelligence (AGI), citing risks of international tensions and potential cyberattacks. As reported by TechCrunch, the paper titled "Superintelligence Strategy" advocates for a more measured approach that prioritizes AI safety and international cooperation.
The policy paper outlines several key arguments against a government-led AGI "Manhattan Project":
International tensions could escalate, potentially triggering a dangerous AI arms race, particularly with China
Rival nations fearing a global power imbalance in superintelligence might resort to sophisticated cyberattacks to disrupt U.S. AI advancements
The assumption that competitors would acquiesce to an enduring imbalance, or even omnicide, rather than take preventive action is fundamentally flawed
These concerns stem from recent U.S. congressional proposals for a Manhattan Project-style effort to fund AGI development, modeled after the 1940s atomic bomb program. The authors argue that such an approach could destabilize international relations and compromise global security in the pursuit of artificial general intelligence.
The concept of Mutual Assured AI Malfunction (MAIM) introduces a novel approach to AI policy and international relations. This strategy, proposed by Schmidt, Wang, and Hendrycks, draws parallels to nuclear deterrence while addressing the unique challenges of AGI development:
MAIM suggests proactively disabling threatening AI projects rather than waiting for adversaries to weaponize AGI
It advocates for expanding cyberattack capabilities to neutralize dangerous AI developments in other nations
The strategy aims to limit adversaries' access to advanced AI chips and open-source models
MAIM represents a shift from "winning the race to superintelligence" to deterring other countries from creating potentially harmful AGI
This approach acknowledges the reality of international competition while prioritizing global AI safety
By proposing MAIM, the authors present a "third way" between the extreme positions of rapid, unchecked AI development and a complete halt to AI progress, emphasizing responsible advancement with built-in safeguards.
Eric Schmidt's stance on AI competition has evolved significantly, reflecting a growing concern about the risks of unchecked superintelligence development. This shift is evident in the recent "Superintelligence Strategy" paper:
Schmidt now advocates for a more cautious approach to AGI development, moving away from the idea of "winning" an AI race
He emphasizes the importance of defensive strategies and international cooperation in AI advancement
The paper introduces the concept of Mutual Assured AI Malfunction (MAIM) as a deterrent against hostile AI development
Schmidt warns that a U.S.-led "Manhattan Project" for AGI could prompt dangerous countermeasures from rival nations
He argues for expanding cyberattack capabilities to disable threatening AI projects, a stark contrast to his previous pro-competition stance
This evolution in Schmidt's thinking suggests a deeper understanding of the potential global consequences of an unchecked AI arms race, prioritizing safety and stability over technological dominance.
Instead of a high-stakes race for AGI supremacy, the authors advocate for a more measured, defensive strategy that prioritizes AI safety, focuses on deterring hostile AI development, and promotes international cooperation. This approach aims to mitigate the risks associated with unchecked superintelligence development while still advancing AI technology. The paper suggests that fostering collaboration and shared safety standards could lead to more stable and beneficial outcomes in the long-term pursuit of AGI, contrasting sharply with the competitive "Manhattan Project" model.