Artificial intelligence (AI) models are progressing rapidly, with developers continuously improving their ability to understand complex queries and provide insightful responses. OpenAI, the creator of ChatGPT, recently announced its new "Strawberry" model as part of the OpenAI o1 series. These models are designed to spend more time thinking before they respond, much as a person would: refining their reasoning process, trying different strategies, and learning from their mistakes.
While AI is not taking over the world, concerns remain about controlling rogue models and building safety measures into the development process. California lawmakers have passed several AI-related bills, including Assembly Bill 1836, which protects performers' rights and likenesses by prohibiting unauthorized AI-generated replicas of deceased personalities. Among these bills, Senate Bill (SB)-1047, known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," has proven the most controversial.
If signed into law, SB-1047 would primarily affect major AI developers such as OpenAI, Google, and Microsoft, which build models with significant computational requirements and high training costs. Developers would need to train and fine-tune their models to incorporate the safety features outlined in the bill, including shutdown capabilities, written safety protocols, third-party audits, and compliance statements. However, the bill has drawn criticism from developers of all sizes, who argue that it stifles innovation.
Dina Blikshteyn, a partner at the law firm Haynes Boone, explained that the bill aims to prevent AI model disasters by mandating shutdown capabilities. She also noted that while the United States lacks a federal framework for regulating AI models, states such as California and Colorado are enacting their own regulations. Blikshteyn emphasized the need for federal legislation that sets baseline requirements for powerful AI models, which would benefit both consumers and developers and provide a single standard for all states.
SB-1047 was submitted to Governor Gavin Newsom on September 9 and awaits his decision. Newsom has stressed the importance of rational regulation that supports risk-taking without recklessness, while also voicing concerns about competitiveness. Because California is a global leader in tech innovation, its AI-related legal decisions carry significant worldwide implications.