Breaking news in #AI regulation yesterday: California governor Gavin Newsom vetoed California's controversial AI bill, SB 1047. SB 1047 would have mandated safety testing and kill switches for AI models that meet certain criteria. In addition, the bill would have given the Attorney General the power to pursue legal action against developers who are not compliant.

SB 1047 faced strong resistance from many #AI and #bigtech companies, including OpenAI, Google, and Meta, as well as from high-profile #AI technologists like Yann LeCun. The fear had always been that the bill could hinder #innovation and drive leading innovators out of California. Meta, the backer of the popular #AI model #LLaMA, was vocal in its opposition, arguing that the bill would hold #opensource developers responsible for policing open source models beyond what is reasonable. Others, like Elon Musk and Anthropic, supported the bill.

SB 1047 passed both the California state Senate and the state Assembly. Newsom had indicated reservations about the bill, but before yesterday it was anyone's guess whether he would sign or veto it.

In my conversations with regulators and policy makers, the opposition to this bill is viewed as a move to evade responsibility as #AI takes on more critical tasks and missions. California is where cutting-edge #AI innovation happens. It would be a shame to overregulate #AI, but this is also exactly the right place to consider #safety #trust and #security guardrails for AI.

We can all agree that #AI needs to be regulated. But many of us differ on the specifics and nuances of how #guardrails should be deployed. SB 1047 will not be the last bill proposed to regulate #AI development. I'll be writing more on this topic -- #AI #regulations -- going forward.

Ken Huang, CISSP, Dawn Song, Barmak Meftah, Howie Xu, Nir Polak, Katherine Kuehn, Fei-Fei Li, Yichen Jin, Hawk Kim, Ankur Shah, Joe Sullivan, Nick Reva
Yes, I would expect a revised 1047 bill soon....
Well said, Chenxi. This topic should solicit feedback from a broader sample / crowd - something in the realm of a government body. It's easier to place guardrails and content filters than to open it up to the unknown.
I agree with the sentiment here. While innovation in #AI should not be stifled, it’s essential to strike the right balance with regulation to ensure safety and trust. Finding the nuances in how we approach these guardrails will be key moving forward.
Well said, Chenxi.
AI regulation is not a bad thing in principle. Left to its own devices, AI can be destructive for our society. As with all regulation, it's the details that make the difference. Chenxi Wang, Ph.D. Are you aware of any thorough bipartisan analysis we can read and reflect on?