OpenAI chief set to call for greater regulation of artificial intelligence

May 16, 2023

OpenAI’s chief executive Sam Altman will tell US lawmakers on Tuesday that regulation of artificial intelligence must allow companies to be flexible and adapt to new technological developments, as the industry faces growing scrutiny by regulators around the world.

Altman, whose company created AI chatbot ChatGPT, will say that “the regulation of AI is essential” as he testifies for the first time before Congress on Tuesday.

His comments come as regulators and governments around the world step up their examination of the fast-developing technology amid growing concerns about its potential abuses.

According to prepared remarks released before the hearing, Altman will tell the Senate judiciary subcommittee on privacy, technology and the law that he is “eager to help policymakers as they determine how to facilitate regulation that balances incentivising safety while ensuring that people are able to access the technology’s benefits”.

Last week, EU lawmakers agreed on a tough set of rules over the use of AI, including restrictions on chatbots such as ChatGPT, as Brussels pushes ahead with enacting the world’s most restrictive regime on the development of the technology.

Earlier this month, both the US Federal Trade Commission and the UK competition watchdog fired warning shots at the industry. The FTC said it was “focusing intensely on how companies may choose to use AI technology”, while the UK’s Competition and Markets Authority plans to launch a review of the AI market.

Altman’s testimony will recommend that AI companies adhere to an “appropriate set of safety requirements, including internal and external testing prior to release” and licensing or registration conditions for AI models.

However, he will caveat this by stressing that safety requirements which “AI companies must meet [should] have a governance regime flexible enough to adapt to new technological developments”.

The rapid development of generative AI, which can produce convincing humanlike writing, over the past six months has raised alarm among some AI ethicists.

In March, Elon Musk and more than 1,000 tech researchers and executives signed a letter calling for a six-month pause in the training of AI language models more powerful than GPT-4, the underlying technology OpenAI uses for its chatbot. Earlier this month, AI pioneer Geoffrey Hinton quit Google after a decade at the tech giant in order to speak freely about the risks of the technology, which he warned would amplify societal divides and could be used by bad actors.

“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said Republican senator Josh Hawley in a statement ahead of the hearing.

Christina Montgomery, vice-president and chief privacy and trust officer at IBM, and Gary Marcus, a professor emeritus at New York University, will also provide testimony on Tuesday.

“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Democratic senator Richard Blumenthal, chair of the subcommittee, in a statement.

“This hearing begins our subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology . . . as we explore sensible standards and principles to help us navigate this uncharted territory,” he added.

Additional reporting by Madhumita Murgia

Source: Financial Times