The New York RAISE Act (Frontier AI transparency & incident reporting)
One Rule, Explained
Most AI safety laws stay deliberately vague. New York’s RAISE Act, signed in March, is unusually specific about who it covers, what they have to do, and when.
The law targets developers of “frontier models,” defined as systems trained on more than 10²⁶ FLOPs of compute, as long as the company also clears $500 million in annual revenue. That double threshold is intentional: it’s designed to catch the handful of labs actually operating at the frontier while leaving smaller developers alone.
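To make that coverage test concrete, here is a minimal sketch in Python of the two-part threshold as described above. Everything in it is illustrative rather than statutory language; the only inputs taken from the article are the 10²⁶ FLOP compute figure and the $500 million revenue figure.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field and function names are invented for this
# article; the thresholds are the ones described above, not quoted statute.
FLOP_THRESHOLD = 1e26          # training compute, in FLOPs
REVENUE_THRESHOLD = 500e6      # annual revenue, in USD

@dataclass
class Developer:
    training_flops: float      # compute used to train the largest model
    annual_revenue_usd: float

def is_covered(dev: Developer) -> bool:
    """Both conditions must hold -- the "double threshold" described above."""
    return (dev.training_flops > FLOP_THRESHOLD
            and dev.annual_revenue_usd > REVENUE_THRESHOLD)

# A lab just over the compute line but under the revenue line is not covered.
print(is_covered(Developer(training_flops=2e26, annual_revenue_usd=100e6)))  # False
print(is_covered(Developer(training_flops=2e26, annual_revenue_usd=900e6)))  # True
```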
Compliance breaks down into three concrete obligations.
Before shipping a new model or a significant update, developers must publish a transparency report describing what risks were identified and how they were handled.
They also need to maintain a public “Frontier AI Framework”: a document explaining how the company approaches catastrophic risks posed by its own models.
And if a serious safety incident occurs, developers have 72 hours to report it to the state.
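A rough sketch of those three obligations as a compliance checklist, again with purely hypothetical names; the only figure drawn from the law as described above is the 72-hour reporting window.

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: the 72-hour window comes from the article; everything
# else (names, the deadline calculation) is a hypothetical simplification.
INCIDENT_REPORT_WINDOW = timedelta(hours=72)

def incident_report_deadline(discovered_at: datetime) -> datetime:
    """Latest time a qualifying incident could be reported to the state."""
    return discovered_at + INCIDENT_REPORT_WINDOW

def ready_to_ship(transparency_report_published: bool,
                  framework_is_public: bool) -> bool:
    """Pre-deployment checks drawn from the first two obligations above."""
    return transparency_report_published and framework_is_public

discovered = datetime(2027, 3, 1, 9, 30, tzinfo=timezone.utc)
print(incident_report_deadline(discovered))   # 2027-03-04 09:30:00+00:00
print(ready_to_ship(True, False))             # False: framework not yet public
```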
Oversight sits within the New York Department of Financial Services. The state attorney general enforces the law, with civil penalties of up to $1 million for a first violation and $3 million for repeat offenses.
It takes effect January 1, 2027. The federal government is pushing to preempt state-level AI laws, so the law’s fate is genuinely uncertain. But as a template for what mandatory safety disclosure could look like in the US, the RAISE Act is the most concrete example any American jurisdiction has produced so far.
