01 / Who's covered
Companies deploying AI in New York.
Like the Colorado and Texas AI laws, the RAISE Act applies based on where the AI system's outputs reach, not where the company is headquartered. If your AI tool affects New York consumers or employees, you should assume scope and verify with counsel.
The 2026 amendments removed an earlier draft's broad-brush approach and concentrated the law on two pillars: transparency and incident reporting. Risk-management requirements that earlier drafts proposed (impact assessments, written programs) were dropped or softened, partly to align with the federal preemption posture and partly to reduce overlap with the Colorado AI Act.
02 / Two operational duties
Disclose. Report.
1. Transparency.
When AI is used to make a decision that affects a consumer (employment, credit, housing, education, healthcare access, or other significant categories), the deployer must disclose the AI use in plain language. The disclosure must:
- Identify that AI was used.
- Describe the role AI played in the decision.
- Provide a path to human review where appropriate.
- Be accessible (plain language, available at or before the decision point).
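As a sketch of what this can look like operationally, a per-decision disclosure record might capture the elements above. Everything here is illustrative, assuming a simple internal schema; none of the field names are statutory language.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative only: field names are assumptions, not statutory language.
@dataclass
class DisclosureRecord:
    decision_category: str   # e.g. "employment", "credit", "housing"
    ai_tool: str             # vendor and version of the AI system used
    ai_role: str             # plain-language description of the AI's role
    human_review_path: str   # how the consumer can request human review
    disclosed_at: datetime   # must be at or before the decision point
    consumer_ref: str        # internal reference for per-decision evidence

record = DisclosureRecord(
    decision_category="employment",
    ai_tool="ResumeScreen v2.3",
    ai_role="Ranked applications; a recruiter made the final call.",
    human_review_path="Reply to the decision notice to request human review.",
    disclosed_at=datetime.now(timezone.utc),
    consumer_ref="applicant-8841",
)
```

Keeping one record per decision, rather than one per policy, is what makes the "per-decision evidence" in section 04 possible later.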
2. Incident reporting.
If an AI system you deploy causes material consumer harm or algorithmic discrimination in New York, you must notify the New York Attorney General within a defined window. The notification should include:
- The AI system involved (vendor, version, scope of deployment).
- The nature of the harm (consumer-facing, financial, discrimination).
- The estimated number of affected consumers.
- The remediation steps you've taken or planned.
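As a rough sketch, assuming the four bullet points above as the minimum content, an internal helper could assemble the notification payload like this. The field names, structure, and example values are assumptions; the statute and any AG guidance control the actual format.

```python
from datetime import datetime, timezone

# Field names and structure are assumptions; the statute and any AG guidance
# control the actual required content and format.
def build_ag_notification(ai_system: dict, harm: dict,
                          estimated_affected: int, remediation: list) -> dict:
    return {
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "ai_system": ai_system,                        # vendor, version, scope
        "harm": harm,                                  # nature of the harm
        "estimated_affected_consumers": estimated_affected,
        "remediation": remediation,                    # taken or planned
    }

notification = build_ag_notification(
    ai_system={"vendor": "Acme AI", "version": "4.1",
               "scope": "NY consumer credit decisioning"},
    harm={"type": "algorithmic_discrimination",
          "description": "Elevated denial rates for a protected class"},
    estimated_affected=1240,
    remediation=["Model rolled back", "Affected applications re-adjudicated"],
)
```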
The amendments aligned the reporting window with similar windows in other state AI laws so multi-state companies can run one incident-response playbook.
03 / What counts as a reportable incident
Material consumer harm or algorithmic discrimination.
The bar is "material consumer harm" or "algorithmic discrimination." Material harm typically means financial loss, denial of service or benefits, exposure of sensitive data, or physical safety implications. Algorithmic discrimination follows the same conceptual line as the Colorado AI Act: differential treatment or impact based on a protected class, caused or substantially contributed to by an AI system.
Three patterns trigger reporting most often:
- An AI hiring or scoring tool produces statistically significant disparate-impact outcomes that the company discovers in audit (one common audit test is sketched after this list).
- A generative AI tool produces consumer-facing communications containing inaccurate or harmful information that materially affects a consumer's actions.
- An AI fraud-detection or credit-scoring tool denies service to consumers in ways the company later determines were not justified.
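For the first pattern, one widely used audit test is the EEOC's four-fifths rule: flag possible adverse impact when a protected group's selection rate falls below 80% of the highest group's rate. A minimal sketch with illustrative numbers; a real audit would pair this with a significance test before concluding disparate impact.

```python
# Numbers are illustrative. A real audit would pair this ratio with a
# significance test (e.g., a two-proportion z-test) before concluding
# statistically significant disparate impact.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Protected group's selection rate over the highest group's rate."""
    return protected_rate / reference_rate

protected = selection_rate(selected=30, applicants=200)    # 0.15
reference = selection_rate(selected=50, applicants=200)    # 0.25
ratio = adverse_impact_ratio(protected, reference)         # 0.60

if ratio < 0.8:  # the four-fifths rule of thumb
    print(f"Adverse-impact flag: ratio {ratio:.2f} is below 0.80")
```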
04 / Evidence the NY AG will ask for
Inventory. Disclosure record. Incident response.
- The AI inventory. Per-tool, per-user, per-surface (browser, desktop, CLI). Time-stamped. Updated continuously.
- The consumer-disclosure record. What the disclosure says, when it fires, the workflow that triggers it. Per-decision evidence is best.
- The incident-response file. Detection, containment, root-cause analysis, the AG notification, the remediation plan, the post-incident review.
- The signed event log. SHA-256 signed CSV exports showing which AI was used, when, and the resulting consumer outcome at a category level.
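On the last item: "SHA-256 signed" in practice means publishing a digest over the exact bytes of the export, so any later edit is detectable. A minimal sketch of that idea using an HMAC-SHA256; the key and file names are hypothetical, and this is not Northbeams' actual export scheme.

```python
import hashlib
import hmac
from pathlib import Path

# Hypothetical key and file names; key management (storage, rotation) is
# out of scope for this sketch.
SIGNING_KEY = b"replace-with-a-managed-secret"

def sign_export(csv_path: Path) -> str:
    """HMAC-SHA256 over the exact bytes of the CSV export."""
    return hmac.new(SIGNING_KEY, csv_path.read_bytes(), hashlib.sha256).hexdigest()

def verify_export(csv_path: Path, expected_digest: str) -> bool:
    """True only if the file's bytes still match the published digest."""
    return hmac.compare_digest(sign_export(csv_path), expected_digest)

sample = Path("ai_event_log_sample.csv")
sample.write_text("timestamp,user,tool,category\n2026-03-02T14:05Z,u123,ChatGPT,drafting\n")
digest = sign_export(sample)
assert verify_export(sample, digest)   # any later edit to the file breaks this
```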
05 / How Northbeams maps to this
Inventory + signed log + incident triage.
AI inventory
Continuous discovery across browser, desktop, and CLI.
The list of AI tools your team uses, dated and categorized. The starting point for any disclosure or incident workflow.
Disclosure trigger data
Per-decision tool attribution.
Which AI tool your team used during a given workflow, by user, by category. The data your disclosure procedure needs.
Incident triage input
Tamper-evident timeline.
SHA-256 signed CSV exports. When the New York AG asks "what did your team know and when," the timeline writes itself.
Quarterly board review
Auto-generated executive risk-audit PDF.
Tool sprawl, incident count, policy-change history. The same document you'd hand the board, suitable for the AG.
If you're a multi-state company already covering the Colorado AI Act, the New York RAISE Act usually fits inside the same evidence pack. See the audit-ready evidence pack →
06 / FAQ
Common questions about the New York RAISE Act.
- What is the New York RAISE Act?
- The New York Responsible AI Safety and Education Act (RAISE) was signed by Governor Kathy Hochul in December 2025. The 2026 amendments narrowed the focus to transparency and incident reporting for AI systems used in New York. Its enforcement posture is more flexible than Colorado's or Texas's.
- Who is covered by the New York RAISE Act?
- Companies deploying AI in New York. Like other state AI laws, scope follows where the AI system's outputs reach, not where the company is headquartered. If your AI tool affects New York consumers or employees, expect coverage.
- What does the RAISE Act actually require?
- Two operational duties stand out. Transparency: disclose to consumers when AI is used to make a decision affecting them and provide enough information for them to understand the role of AI. Incident reporting: notify the New York Attorney General within a defined window of any AI incident that causes material consumer harm or algorithmic discrimination.
- How does the RAISE Act compare to the Colorado AI Act?
- Narrower scope. The RAISE Act focuses on transparency and incident reporting; Colorado adds risk management programs and annual impact assessments. Most multi-state companies build the Colorado-grade program once and find that it satisfies New York's narrower obligations along the way.
- Are there penalties under the RAISE Act?
- Yes. The New York Attorney General has enforcement authority with civil penalties per violation. Some categories of violations also unlock private rights of action under New York's general consumer protection statutes when paired with a RAISE Act breach.