A new report co-authored by artificial intelligence pioneer Fei-Fei Li urges lawmakers to anticipate risks that have not yet materialized when drawing up regulations governing how the technology should be used.
The 41-page report by the Joint California Policy Working Group on Frontier AI Models comes after California Governor Gavin Newsom vetoed the state's original AI safety bill, SB 1047, last year. In rejecting that divisive legislation, he said legislators needed a more thorough assessment of AI risks before attempting to craft better rules.
Li (pictured) co-authored the report alongside Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at the University of California, Berkeley. In it, they stress the need for regulations that would ensure more transparency into the so-called "frontier models" being built by companies such as OpenAI, Google LLC and Anthropic PBC.
They also urge lawmakers to consider requiring AI developers to publicly release information such as their data acquisition methods, security measures and safety test results. In addition, the report stresses the need for more rigorous standards for third-party evaluations of AI safety and corporate policies, and it recommends protections for whistleblowers at AI companies.
The report was reviewed by numerous AI industry stakeholders prior to being published, including the AI safety advocate Yoshua Bengio and Databricks Inc. co-founder Ion Stoica, who argued against the original SB 1047 bill.
One section of the report notes that there is currently an “inconclusive level of evidence” regarding the potential of AI to be used in cyberattacks and the creation of biological weapons. The authors wrote that any AI policies must therefore not only address existing risks, but also any future risks that might arise if sufficient safeguards are not put in place.
They use an analogy to stress this point, noting that no one needs to see a nuclear weapon explode to predict the extensive harm it would cause. “If those who speculate about the most extreme risks are right — and we are uncertain if they will be — then the stakes and costs for inaction on frontier AI at this current moment are extremely high,” the report states.
Given this fear of the unknown, the co-authors say the government should implement a two-pronged strategy around AI transparency, focused on the concept of “trust but verify.” As part of this, AI developers and their employees should have a legal way to report any new developments that might pose a safety risk without threat of legal action.
It's worth noting that this is an interim version of the report; the final version won't be published until June. The report does not endorse any specific legislation, but the safety concerns it highlights have been well received by experts.
For instance, George Mason University AI researcher Dean Ball, who notably criticized SB 1047 and welcomed its veto, posted on X that the report is a "promising step" for the industry. Meanwhile, California State Senator Scott Wiener, who introduced SB 1047, said the report continues the "urgent conversations around AI governance" that were originally raised in his vetoed legislation.
Photo: Steve Jurvetson/Flickr