New York Passes Landmark RAISE Act: What the New AI Safety Rules Mean for Tech Giants
Governor Kathy Hochul has signed the Responsible AI Safety and Education (RAISE) Act into law, establishing a first-of-its-kind regulatory framework in New York that mandates strict safety protocols and incident reporting for developers of advanced "frontier" AI models.
A New Era of Accountability for Frontier AI
In a move that signals a hardening stance against the unchecked growth of "black box" algorithms, New York Governor Kathy Hochul officially signed the Responsible AI Safety and Education (RAISE) Act into law this week. The legislation marks one of the most significant state-level interventions in the artificial intelligence sector to date, specifically targeting the developers of "frontier models"—the most powerful and resource-intensive AI systems currently in existence.
The RAISE Act isn't just a symbolic gesture; it is a direct response to growing concerns over the potential for catastrophic risks associated with advanced AI. By creating a mandatory framework for safety and transparency, New York is positioning itself alongside California as a primary regulator of the domestic tech industry. According to official statements from the Governor’s office, the law seeks to balance the undeniable economic potential of AI with the "moral and civic necessity" of public safety.
Defining the Scope: Who Falls Under the RAISE Act?
Unlike earlier drafts of the bill that relied on complex "compute-cost" thresholds, the final version of the RAISE Act uses a more straightforward financial trigger. The law applies to AI developers with more than $500 million in annual revenue who operate or deploy frontier models within the state of New York. This ensures that the regulatory burden falls squarely on industry titans like OpenAI, Google, and Meta, rather than stifling the innovation of smaller startups or academic researchers.
At the heart of the law is the requirement for "Large Developers" to maintain a written, publicly accessible safety and security protocol. These protocols must detail exactly how the company identifies and mitigates risks that could lead to "critical harm." New York lawmakers define critical harm with stark specificity: any event that kills or seriously injures 100 or more people, or that causes at least $1 billion in monetary or property damage.
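For readers who think in code, the statute's two numeric triggers can be summarized in a short sketch. The Python below is purely illustrative: the `Developer` dataclass, the function names, and the boolean tests are shorthand invented for this article, not official compliance tooling, and the statute's actual legal tests carry qualifications this simplification omits.

```python
from dataclasses import dataclass

# Thresholds as described in the article (illustrative encoding only;
# the statute's real legal tests are more nuanced than these checks).
REVENUE_THRESHOLD_USD = 500_000_000        # "Large Developer" revenue trigger
CRITICAL_HARM_CASUALTIES = 100             # deaths or serious injuries
CRITICAL_HARM_DAMAGE_USD = 1_000_000_000   # monetary or property damage


@dataclass
class Developer:
    annual_revenue_usd: float
    deploys_frontier_models_in_ny: bool


def is_covered_developer(dev: Developer) -> bool:
    """Rough applicability test: over $500M in annual revenue AND
    operating or deploying frontier models in New York."""
    return (dev.annual_revenue_usd > REVENUE_THRESHOLD_USD
            and dev.deploys_frontier_models_in_ny)


def meets_critical_harm_definition(casualties: int, damage_usd: float) -> bool:
    """An event counts as 'critical harm' if either threshold is met:
    100+ deaths or serious injuries, or $1B+ in damage."""
    return (casualties >= CRITICAL_HARM_CASUALTIES
            or damage_usd >= CRITICAL_HARM_DAMAGE_USD)
```

Note that the two conditions in `is_covered_developer` are conjunctive: a high-revenue company with no frontier models in New York, or a small lab deploying one, would fall outside this simplified test, which mirrors the law's stated aim of sparing startups and academic researchers.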
The 72-Hour Rule: A Stricter Standard Than California
One of the most talked-about provisions in the RAISE Act is its aggressive incident reporting timeline. While California's similar SB 53 allows a 15-day window to report safety failures, New York is demanding accountability in near real time. Developers must notify the state within 72 hours of determining that a "safety incident" has occurred—defined as any event that significantly increases the risk of critical harm.
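To make the gap between the two regimes concrete, here is a minimal sketch that computes a filing deadline from the moment a developer determines an incident has occurred. The constants and the `reporting_deadline` helper are hypothetical conveniences for this comparison, not tooling from either state.

```python
from datetime import datetime, timedelta, timezone

# Reporting windows as described in the article (illustrative constants).
NY_RAISE_WINDOW = timedelta(hours=72)   # New York RAISE Act
CA_SB53_WINDOW = timedelta(days=15)     # California SB 53


def reporting_deadline(determined_at: datetime, window: timedelta) -> datetime:
    """Latest time a safety-incident report can be filed, with the clock
    starting when the developer determines an incident has occurred."""
    return determined_at + window


determined = datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(determined, NY_RAISE_WINDOW))  # 2027-03-04 09:00 UTC
print(reporting_deadline(determined, CA_SB53_WINDOW))   # 2027-03-16 09:00 UTC
```

For an incident determined on the morning of March 1, a New York report would be due three days later, while the equivalent California filing could wait until mid-month—a twelvefold difference in response time.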
To oversee these new requirements, the Act establishes a dedicated oversight office within the New York Department of Financial Services (DFS). This office will be responsible for:
- Reviewing annual safety audits submitted by developers.
- Issuing formal regulations and guidance for emerging AI technologies.
- Publishing an annual public report on the state of AI safety risks.
- Collecting fees from covered developers to fund the oversight operations.
The Looming Conflict Between State and Federal Power
While New York celebrates this legislative win, the RAISE Act enters a complicated legal landscape. The bill was signed just days after a new federal Executive Order aimed at limiting state-level AI regulation in favor of a "minimally burdensome" national standard. This sets the stage for a potential high-stakes legal battle between Albany and Washington, D.C. over who has the ultimate authority to police the digital frontier.
Industry experts at DLA Piper note that while the law doesn't take full effect until January 1, 2027, its "shadow effect" is already being felt: companies are moving to standardize their safety protocols now rather than navigate a patchwork of 50 different state laws. For developers, the message is clear: the days of moving fast and breaking things without oversight are officially coming to an end in the Empire State.
As the tech world looks toward 2026, the implementation of the RAISE Act will serve as a crucial test case. If New York can successfully enforce these rules without driving innovation to more lenient jurisdictions, it may well provide the blueprint for a future national AI safety framework.

