Google and OpenAI, two U.S. leaders in artificial intelligence, have opposing ideas about how the technology should be regulated by the government, a new filing reveals.
Google on Monday submitted a comment in response to the National Telecommunications and Information Administration’s request for input on AI accountability policy at a time of rapidly advancing technology, The Washington Post first reported. Google is one of the leading developers of generative AI with its chatbot Bard, alongside Microsoft-backed OpenAI with its ChatGPT bot.
While OpenAI CEO Sam Altman touted the idea of a new government agency focused on AI to deal with its complexities and license the technology, Google in its filing said it preferred a “multi-layered, multi-stakeholder approach to AI governance.”
“At the national level, we support a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation—rather than a ‘Department of AI,’” Google wrote in its filing. “AI will present unique issues in financial services, health care, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors—which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed.”
Others in the AI space, including researchers, have expressed similar opinions, arguing that oversight by existing government regulators may be a better way to protect marginalized communities, despite OpenAI’s argument that the technology is advancing too quickly for such an approach.
“The problem I see with the ‘FDA for AI’ model of regulation is that it posits that AI needs to be regulated separately from other things,” Emily M. Bender, professor and director of the University of Washington’s Computational Linguistics Laboratory, posted on Twitter. “I fully agree that so-called ‘AI’ systems shouldn’t be deployed without some kind of certification process first. But that process should depend on what the system is for… Existing regulatory agencies should maintain their jurisdiction. And assert it.”
That stands in contrast to OpenAI and Microsoft’s preference for a more centralized regulatory model. Microsoft President Brad Smith has said he supports a new government agency to regulate AI, and OpenAI founders Sam Altman, Greg Brockman and Ilya Sutskever have publicly expressed their vision for regulating AI in similar ways to nuclear energy, under a global AI regulatory body akin to the International Atomic Energy Agency.
The OpenAI execs wrote in a blog post that “any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards [and] place restrictions on degrees of deployment and levels of security.”
In an interview with the Post, Google President of Global Affairs Kent Walker said he’s “not opposed” to the idea of a new regulator to oversee the licensing of large language models, but said the government should look “more holistically” at the technology. And NIST, he said, is already well-positioned to take the lead.
Google’s and Microsoft’s seemingly opposed viewpoints on regulation point to a growing debate in the AI space, one that extends beyond how much the technology should be regulated to how that regulation should be organized.
“There is this question of should there be a new agency specifically for AI or not?” Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology, told CNBC, adding, “Should you be handling this with existing regulatory authorities that work in specific sectors, or should there be something centralized for all kinds of AI?”
Microsoft declined to comment and OpenAI did not immediately respond to CNBC’s request for comment.