The Duke and Duchess of Sussex Join AI Pioneers in Calling for Prohibition on Superintelligent Systems
Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel laureates to push for a complete ban on creating artificial superintelligence.
Harry and Meghan are among the signatories of an influential declaration that calls for “a ban on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI that would surpass human abilities across all cognitive tasks; such systems remain theoretical.
Key Demands in the Statement
The declaration states that the prohibition should remain in place until there is “widespread expert agreement” that ASI can be developed “safely and controllably” and “substantial public support” has been secured.
Prominent figures who endorsed the statement include AI pioneer and Nobel Prize recipient Geoffrey Hinton, along with his fellow pioneer of modern AI, Yoshua Bengio; a Silicon Valley tech entrepreneur; the British business magnate who founded Virgin; former US national security adviser Susan Rice; a former head of state; and a British author and public intellectual. Other Nobel laureates who signed include Beatrice Fihn, the physicist Frank Wilczek, an astrophysicist, and an economist.
Behind the Movement
The statement, aimed at national leaders, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI chatbots made the technology a topic of worldwide public discussion.
Tech Sector Views
In recent months, the chief executive of Facebook parent Meta, one of the leading technology companies in the United States, claimed that superintelligent AI was “approaching reality”. However, some experts have argued that such talk of superintelligence reflects competitive positioning among technology firms that have invested enormous sums in artificial intelligence, rather than the sector being close to any such technical breakthrough.
Possible Dangers
Nonetheless, FLI warns that the prospect of artificial superintelligence arriving “within the next ten years” presents threats ranging from the displacement of human workers and the erosion of personal freedoms to national security risks and even human extinction. Existential fears about AI centre on the possibility of an AI system escaping human oversight and protective measures and taking actions that harm human welfare.
Public Opinion
FLI published a survey of 2,000 US adults showing that approximately three-quarters want strong oversight of advanced AI, with 60% believing that artificial superintelligence should not be created until it is demonstrated to be safe or controllable. Only 5% backed the status quo of fast, unregulated development.
Corporate Goals
The leading AI companies in the US, including the ChatGPT developer OpenAI and the search giant Google, have made the creation of human-level AI – the hypothetical state in which AI matches human cognitive capability across many intellectual tasks – a stated objective of their research. Although this falls short of superintelligence, some experts warn it could still carry existential risk, for instance by enabling a system to enhance its own capabilities toward superintelligent levels, while also posing an implicit threat to the modern labour market.