Does California's AI bill go too far or fall short? It depends who you ask.
The legislation requires developers of large AI systems to test whether their models could be used in various extreme scenarios. It has support, as well as plenty of detractors, in the state’s large tech community.
A bill regulating artificial intelligence safety in California, which pits powerful politicians and experienced academics against some of the world’s biggest technology companies, appears to be moving toward final passage. But Gov. Gavin Newsom, who has previously warned against overregulating AI, has given no public indication of whether he will sign it.
The legislation could be a model for how other states and the federal government handle the tension between their desire to regulate AI and businesses’ wish to be left alone to innovate, and make money, especially when those businesses are among the most powerful in the country. Colorado passed sweeping AI legislation of its own in May and already looks set to amend it amid criticism from tech companies and business groups.
The sprawling legislation, sponsored by State Sen. Scott Wiener, whose district includes San Francisco, would require the developers of the biggest AI systems—those which cost over $100 million to train—to test whether they could be used for attacks on critical infrastructure, cyberattacks or terrorism, or to make weapons.
It also would establish CalCompute, a public “cloud” of computers to help host and build AI tools, offer cloud computing services, promote equitable technology development and research “the safe and secure deployment of large-scale artificial intelligence models,” the bill says. Wiener’s legislation also would offer new protections to whistleblowers at companies building AI tools, including contractors.
The latter provision comes on the heels of allegations by Daniel Kokotajlo, a former OpenAI employee, that the company was being too reckless in creating its ChatGPT generative AI chatbot and violating its safety protocols. Kokotajlo was also subject to the company’s extremely strict off-boarding protocols.
The bill has already passed the California Senate and is moving through committees in the California Assembly.
“With Congress not moving forward, and with the future of the Biden administration's executive order in doubt, California has an indispensable role to play in ensuring that we develop this extremely powerful technology with basic safety guardrails so that we can allow society to experience AI's significant, massive benefits in a safe way,” Wiener said in a speech on the Senate floor in May.
The bill has received bipartisan support and comes as California has looked to take a leadership role on AI among state governments, including by experimenting with generative AI in government operations.
But even leading AI researchers who support the legislation say it could have gone further. In a letter to state leaders earlier this month, eminent AI researchers Geoffrey Hinton of the University of Toronto, Yoshua Bengio of the Université de Montréal and Stuart Russell of the University of California at Berkeley, as well as Lawrence Lessig of Harvard Law School, warned of the “severe risks posed by the next generation of AI if it is developed without sufficient care and oversight.”
“It doesn’t have a licensing regime, it doesn’t require companies to receive permission from a government agency before training or deploying a model, it relies on company self-assessments of risk, and it doesn’t even hold companies strictly liable in the event that a catastrophe does occur,” they wrote. “Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation.”
Others are not so sure. Fei-Fei Li, the co-director of Stanford’s Human-Centered AI Institute, who is regarded as the “Godmother of AI,” said in a commentary that while the bill is “well-meaning,” its restrictions on open source development could harm innovation in California and beyond. In response, Wiener said in a written statement that the bill does not ban open sourcing, but allows the Attorney General to initiate enforcement proceedings in limited circumstances.
The California Chamber of Commerce has come out against the bill, as have Facebook parent company Meta, venture capital firm Andreessen Horowitz and a coalition of think tanks, business and political leaders, including the conservative American Legislative Exchange Council.
The latter coalition said that having developers guarantee their AI models cannot be used for harmful purposes, even before they begin training them, is an “unreasonable and impractical standard.” They also argued that complying with various safety standards “would be expensive and time consuming for many AI companies,” and so might force them to leave the state.
The bill also appears to be a flashpoint in California’s political future. Rep. Nancy Pelosi, the House Speaker Emerita and a powerful player in both California and national Democratic politics, released a mid-August statement saying Wiener’s bill “is well-intentioned but ill informed” and “would stifle innovation and will harm the U.S. AI ecosystem.” Wiener is regarded as a contender for Pelosi’s San Francisco-area House seat when she retires.
Pelosi echoed many of the concerns raised by Rep. Zoe Lofgren, ranking member on the House Science Committee and another California Democrat.
In a written response to those critiques, Wiener said he rejects “the false claim that in order to innovate, we must leave safety solely in the hands of technology companies and venture capitalists.”
“While a large majority of people innovating in the AI space are highly ethical people who want to do right by society, we’ve also learned the hard way over the years that pure industry self-regulation doesn’t work out well for society,” Wiener continued.