
Industry Leaders Warn of ‘Existential Threat’ Posed by AI

A one-sentence statement signed by hundreds of AI experts and corporate executives warns simply that AI presents an existential danger to mankind, the newest example of a rising chorus of concerns voiced by the very people producing the technology.

In a statement issued on Tuesday, the nonprofit Center for AI Safety said that preventing harm from AI should be a worldwide priority alongside other societal-scale hazards such as pandemics and nuclear war.

More than 350 academics and business leaders, including Sam Altman, CEO of ChatGPT developer OpenAI, and 38 employees of Google’s DeepMind artificial intelligence team, signed the open letter.

Altman and others have been at the vanguard of bringing the new “generative” AI to the public, including image generators and chatbots that can conduct natural conversations, summarise material, and generate computer code. In November, OpenAI released its ChatGPT bot to the public, setting off a competition that saw Microsoft and Google release their own versions in the months that followed.

Since then, a growing group of AI researchers has warned of a dystopian future in which machines gain consciousness and actively work to wipe out humanity. Another camp of academics counters that this is a red herring that diverts attention from more pressing issues, such as the possibility of AI replacing human labour, its propensity to fabricate information, and its built-in bias.

Critics also point out that the hype around AI’s possible dangers helps the corporations selling these technologies convince customers that their products are more powerful than they really are.

Dan Hendrycks, a computer scientist and the director of the Center for AI Safety, said the statement’s brevity was deliberate so that its message would not get lost.

In an email, Hendrycks acknowledged that the two camps can still have fruitful policy debates. The lesson, he said, is not that the technology is overhyped, but that the degree of danger is currently underemphasized.

More than a thousand people from academia, business, and technology signed a separate public letter in late March demanding a complete halt to the development of new, more powerful AI models until regulation could be put in place. Most of the field’s most significant leaders did not sign that earlier letter, but Altman and two of Google’s most senior AI executives, Demis Hassabis and James Manyika, signed the new statement, as did Kevin Scott, Microsoft’s chief technology officer, and Eric Horvitz, the company’s chief scientific officer.

Sundar Pichai, CEO of Google, and Satya Nadella, CEO of Microsoft, the two most influential business executives in the industry, were conspicuously missing from the letter.

Pichai warned in April that society may not be able to keep up with the rapid advancement of the technology, but he remained hopeful because of the ongoing dialogue about the dangers posed by artificial intelligence. Nadella has predicted that artificial intelligence will have far-reaching positive effects, making people more productive and enabling them to carry out more technically complex tasks with less instruction.

Tech leaders are also increasing their engagement with influential figures in Washington. Altman met with President Biden earlier this month to discuss regulating AI. He later testified on Capitol Hill, warning legislators about the potential dangers posed by AI and highlighting “risky” uses such as disseminating false information and potentially enabling more precise drone attacks.

These innovations are no longer the stuff of science fiction: Sen. Richard Blumenthal (D-Conn.) warned on Tuesday that artificial intelligence poses serious hazards to civilization, including the loss of millions of jobs, and he is pressing Congress to establish rules governing AI.

According to Hendrycks, addressing the issue might require “ambitious global coordination,” with solutions perhaps drawing lessons from both nuclear nonproliferation and the fight against pandemics. Despite several proposals, no comprehensive AI governance framework has been implemented.

In a recent blog post, Altman speculated that a global body with the authority to examine systems, evaluate their compliance with safety standards, and impose limitations on their use might be necessary, much as the International Atomic Energy Agency regulates nuclear technology.

Altman told Congress that rather than waiting until AI becomes too powerful to control, it was better to get the technology out to many people now, while it was still early, so that society could understand and evaluate its risks.

Some have said the parallels to nuclear technology are exaggerated fearmongering. Tim Wu, a former White House technology advisor, said that analogizing the danger presented by AI to nuclear fallout is off base and distracts from the discussion of how to rein in the tools.

“I agree we should do something about the obvious negative effects of AI and the abuse of AI that we are now seeing, but I don’t think these problems are … similar to nuclear weapons,” he told an interviewer from The Washington Post this week.

A Boyle
I cover science-related topics for The National Era
