There is no broad federal AI law in the United States. Rules live at the state level (Colorado, Texas, California, New York, Illinois), in the EU AI Act if your AI touches EU users, and in the voluntary standards Fortune 500 procurement teams now require (ISO/IEC 42001 and NIST AI RMF). This page is the field guide. Pick the entry point that fits.
If you want a fast answer, start with one of these. Each card opens the right field-guide page.
Every law and standard covered on this site, in one table. Click a row to open the field guide.
| Law / Framework | Effective | Who's covered | Key requirement | Enforcer |
|---|---|---|---|---|
| Colorado AI Act (SB 24-205) | 2026-06-30 (delayed from 2026-02-01) | Deployers and developers of high-risk AI affecting Colorado consumers | Risk management program, annual impact assessments, consumer notice and explanation, AG notification within 90 days of discovering algorithmic discrimination | Colorado AG |
| Texas TRAIGA | 2026-01-01 | Companies developing, distributing, or deploying AI in Texas | Prohibitions on discriminatory and manipulative AI uses | Texas AG |
| California (SB 53, AB 2013, SB 942, AB 489, SB 243) | 2026 staggered | Frontier developers, generative AI providers, healthcare AI vendors, companion chatbot operators | Risk frameworks, training-data disclosure, watermarks, healthcare disclosure, chatbot safety | California AG, CPPA |
| New York RAISE Act | 2025-12 / amended 2026-03 | Large developers of frontier AI models available in New York | Safety protocols, transparency, and incident reporting | New York AG |
| Illinois AI in interviews | In effect | Employers using AI to analyze recorded job-interview video | Consent before use, retention limits, data destruction on request | Illinois Dept of Labor |
| EU AI Act | 2025-02-02 (AI literacy, prohibitions) / 2026-08-02 (most obligations) | Providers and deployers whose AI affects EU users | Risk-tier obligations, Article 4 AI literacy, documentation, incident reporting | National competent authorities, EU AI Office |
| US federal (Trump EO + AI LEAD Act) | 2025-12 / pending | Federal agencies primarily; the AI LEAD Act would create a product-liability framework for AI developers | EO task force aimed at preempting state AI laws; pending federal product-liability rules | US AG, federal courts |
| ISO/IEC 42001 | Voluntary | Any organization that develops or uses AI | Plan-Do-Check-Act AI management system, certifiable | Accredited certification bodies |
| NIST AI RMF | Voluntary | Any US organization that develops or uses AI | Govern, Map, Measure, Manage | Self-attest; recognized as safe harbor by Colorado and others |
| ISO 27001 to 42001 stack | Voluntary | Existing ISO 27001-certified organizations adding AI governance | Reuse 27001 management system to certify 42001 about 40% faster | Accredited certification bodies |
| ISO 42001 vs NIST AI RMF | Comparison | Anyone choosing between the two | Side-by-side and "which to pick" | n/a |
Each page is a self-contained brief: TL;DR at the top, structured body, FAQ. Cite us, link us, share us.
US state laws
International law
Standards and frameworks
Most of these laws assume you already know which AI tools your team uses, what data they process, and who reviewed each consequential decision. Most companies don't know.
That gap is what Northbeams catches. One platform, three install surfaces (browser, desktop, CLI). The classifier runs on-device so original prompt content never leaves the user's machine. You get the inventory. The audit-ready evidence pack writes itself.
Free to discover. Pay to control. The fastest path from "we don't know what AI we use" to a defensible answer for any of these laws.