By Jody Godoy

(Reuters) - California Governor Gavin Newsom signed into state law on Monday a requirement that ChatGPT developer OpenAI and other big players disclose how they plan to mitigate potential catastrophic risks from their cutting-edge AI models.

California is home to top AI companies including OpenAI, Alphabet's Google, Meta Platforms, Nvidia and Anthropic, and with this law it seeks to lead on regulation of an industry critical to its economy, Newsom said.

"California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a press release.

Newsom's office said the law, known as SB 53, fills a gap left by the U.S. Congress, which so far has not passed broad AI legislation, and provides a model for the U.S. to follow. If federal standards are put in place, Newsom said, the state legislature should "ensure alignment with those standards – all while maintaining the high bar established by SB 53."

Last year, Newsom vetoed California's first attempt at AI legislation, which had faced fierce industry pushback. That bill would have required companies that spent more than $100 million on their AI models to hire third-party auditors annually to review risk assessments, and would have allowed the state to levy penalties in the hundreds of millions of dollars.

The new law requires companies with more than $500 million in revenue to assess the risk that their cutting-edge technology could break free of human control or aid the development of bioweapons, and to disclose those assessments to the public. It allows for fines of up to $1 million per violation.

Jack Clark, co-founder of AI company Anthropic, called the law "a strong framework that balances public safety with continued innovation."

The industry still hopes for a federal framework that would replace the California law, as well as others like it enacted recently in Colorado and New York. Earlier this year, a bid by some Republicans in the U.S. Congress to block states from regulating AI was voted down in the Senate 99-1.

"The biggest danger of SB 53 is that it sets a precedent for states, rather than the federal government, to take the lead in governing the national AI market – creating a patchwork of 50 compliance regimes that startups don't have the resources to navigate," said Collin McCune, head of government affairs at Silicon Valley venture capital firm Andreessen Horowitz.

U.S. Representative Jay Obernolte, a California Republican, is working on AI legislation that could preempt some state laws, his office said, although it declined to comment further on pending legislation. Some Democrats are also discussing how to enact a federal standard.

"It's not whether we're gonna regulate AI, it's do you want 17 states doing it, or do you want Congress to do it?" U.S. Representative Ted Lieu, a Democrat from Los Angeles, said at a recent hearing on AI legislation in the U.S. House of Representatives.

(Reporting by Jody Godoy in New York; Editing by Chris Sanders and Edmund Klamann)