AI Innovator Admits Industry Cannot Police Itself

As artificial intelligence continues to expand into various aspects of modern life, many politicians, regulators, and industry insiders are increasingly concerned about the negative impact this new technology could have on society.

The co-founder of one prominent AI company did not help matters when he pleaded with the United Nations Security Council earlier this week for help avoiding what he described as the likelihood of “chaotic or unpredictable behavior” by powerful and largely mysterious computer systems.

Although Jack Clark said his company, Anthropic, is dedicated to pursuing AI in a responsible manner, he acknowledged that the industry as a whole is both unwilling and unable to prevent the technology’s misuse.

Not only does AI’s unpredictable and exploitable nature present “potential threats to international peace, security, and global stability,” these platforms are also affected by “the inherent fragility of them being developed by such a narrow set of actors,” Clark said.

He wants the U.N. and other regulatory bodies to step in and provide clear parameters for the continued development of this technology, stressing that the most important steps to take at this point involve “developing ways to test for capabilities, misuses, and potential safety flaws of these systems.”

Although there seems to be no sign that the current leaders in this emerging industry will lose their grip on controlling AI in the near future, Clark said he is somewhat encouraged by the steps that officials across Europe, in the United States, and elsewhere have taken. Nevertheless, he lamented that there are no universal standards regarding “how to test these frontier systems for things like discrimination, misuse, or safety.”

U.N. Secretary-General Antonio Guterres said the international body is “the ideal place” to implement these parameters, adding that without such standards in place, the risks associated with AI could include “horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale.”

Some AI alarmists have forecast the possible end of humanity at the digital hands of these supercomputers.

When a large group of tech leaders signed a letter earlier this year calling for a temporary pause in AI development, Eliezer Yudkowsky did not add his name. Instead, he argued that the entire system must be completely dismantled in order to protect the human species.

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he wrote at the time. “Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”