The Duke and Duchess of Sussex Join Tech Visionaries in Demanding Prohibition on Superintelligent Systems

Prince Harry and Meghan Markle have joined forces with AI experts and Nobel laureates to advocate for a total prohibition on creating artificial superintelligence.

Harry and Meghan are among the signatories of a statement that calls for “a prohibition on the creation of superintelligence”. Superintelligent AI refers to artificial intelligence that could exceed human abilities in all cognitive tasks; no such technology has yet been developed.

Key Demands in the Statement

The declaration insists that the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “safely and controllably”, and until “substantial public support” for it has been achieved.

Notable signatories include Nobel Prize recipient and AI pioneer Geoffrey Hinton, along with his colleague and fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; Apple co-founder Steve Wozniak; a UK entrepreneur and founder of Virgin; a former US national security adviser; a former Irish president and international leader; and a UK writer and public intellectual. Additional Nobel laureates who endorsed the statement include Beatrice Fihn, Frank Wilczek, John C Mather, and an economics expert.

Organizational Background

The statement, aimed at governments, tech firms and policymakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI systems shortly after the emergence of ChatGPT made AI a topic of global political debate.

Tech Sector Views

In July, Mark Zuckerberg, chief executive of Meta, one of the major AI developers in the United States, claimed that progress toward superintelligent AI was “now in sight”. Nevertheless, some experts have argued that talk of ASI reflects competitive positioning among tech companies investing enormous sums in AI, rather than the industry being close to any such technical breakthrough.

Potential Risks

However, the institute states that the prospect of artificial superintelligence being developed “in the coming decade” carries numerous threats, ranging from the elimination of human jobs and the loss of civil liberties to national security risks and even human extinction. The deepest concerns about artificial intelligence center on the possibility of an AI system escaping human oversight and protective measures and setting in motion events contrary to human interests.

Public Opinion

The institute released a US national poll showing that about 75% of Americans want robust regulation of advanced artificial intelligence, with six in 10 believing that artificial superintelligence should not be developed until it is proven safe and controllable. The survey of 2,000 US adults also found that only a small fraction backed the status quo of fast, unregulated development.

Corporate Goals

The top artificial intelligence firms in the United States, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the hypothetical state in which artificial intelligence matches human cognitive capability across many intellectual tasks – an explicit goal of their work. While this falls one step short of ASI, some experts caution that it too could pose an existential risk, for instance by improving itself to the point of achieving superintelligence, while also presenting an underlying threat to the contemporary workforce.

Terry Spence

A seasoned IT consultant with over 10 years of experience in software architecture and digital transformation.