New York has taken a significant step in regulating artificial intelligence by advancing the AI Safety Bill, officially known as the RAISE Act. According to a report by TechCrunch, the bill is designed to prevent powerful AI models, such as those from OpenAI, Google, and Anthropic, from contributing to large-scale disasters, defined as incidents that injure or kill more than 100 people or cause more than $1 billion in damage.
The RAISE Act is a major win for AI safety advocates, including well-known scientists like Geoffrey Hinton and Yoshua Bengio. If the bill becomes law, it will introduce the first legally required transparency standards for frontier AI labs in the U.S.
The RAISE Act differs from other proposed AI laws because it focuses only on the largest AI companies. It targets models that were trained using more than $100 million in computing resources and are accessible to users in New York, which makes clear that small startups and academic researchers will not be affected.
New York State Senator Andrew Gounardes, who co-sponsored the bill, explained that the legislation is meant to protect people without slowing down innovation. “The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” said Gounardes. He added, “The people that know [AI] the best say that these risks are incredibly likely […] That’s alarming.”
Unlike California’s SB 1047, which was vetoed, the RAISE Act avoids some of the more controversial rules. For example, it does not require developers to add a “kill switch” to their models. It also does not hold companies that post-train frontier AI models legally responsible for any critical harm those models might cause.
Nathan Calvin, general counsel at Encode, who helped write both bills, explained that the RAISE Act was designed to fix the mistakes of earlier AI legislation. By focusing on transparency and safety instead of overly strict control, the bill aims to balance innovation with public protection.
Even though the New York bill is designed to apply only to major players, the tech industry has strongly pushed back. Companies like OpenAI, Google, and Meta did not comment when asked by TechCrunch, but others have voiced clear opinions.
Andreessen Horowitz partner Anjney Midha criticized the bill on social media, calling it a “stupid, stupid state-level AI bill” that he believes will hurt the U.S. in its race against global competitors. Tech investors and startup groups like Y Combinator have also opposed similar regulations, saying they may drive innovation out of the country.
Jack Clark, co-founder of Anthropic, has not taken an official position on the RAISE Act but raised concerns about its broad reach. He warned that the bill might create challenges even for smaller companies. However, Senator Gounardes responded by saying that Clark’s concerns “miss the mark,” since the bill is designed not to affect small developers.
Another concern is that AI companies might choose not to release their top models in New York. This was a fear seen with California’s SB 1047 and has already played out in Europe, where strict tech rules have caused some companies to pull back.
Still, New York Assemblymember Alex Bores, another co-sponsor of the bill, does not believe that will happen. He told TechCrunch, “The regulatory burden of the RAISE Act is relatively light,” and pointed out that New York is the third-largest economy in the U.S. He added, “I don’t want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reason for [AI companies] to not make their models available in New York.”
If Governor Kathy Hochul signs the RAISE Act into law, it will mark a major shift in tech policy on AI in the United States. It will demonstrate that a state government is capable of taking meaningful action to ensure transparency and accountability from the world’s leading AI companies.