If you use AI to make hiring, lending, insurance, housing, healthcare, education, or government-services decisions about Colorado consumers, the Colorado AI Act covers you. It took effect February 1, 2026. Here's what you must do, what evidence the AG will ask for, and where Northbeams fits.
01 / Who this applies to
The Colorado AI Act distinguishes developers from deployers. Most small and mid-sized companies are deployers: you build no AI of your own; you buy AI tools and feed them data to make decisions.
You're a deployer if you use a high-risk AI system to make, or to be a substantial factor in making, a consequential decision about a Colorado consumer. The statute defines a consequential decision as one with a material legal or similarly significant effect on a consumer's access to, or the cost or terms of: education enrollment or opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, healthcare services, housing, insurance, or a legal service.
If your team uses ChatGPT to draft a marketing email, you're not covered. If your team uses a vendor product that scores resumes, ranks loan applicants, sets premiums, allocates housing, or triages clinical care, you almost certainly are.
You're a developer if you build or intentionally and substantially modify a high-risk AI system. Re-prompting an off-the-shelf model with a system prompt is generally not "substantial modification." Fine-tuning a model on your proprietary data probably is. Read the law or talk to counsel for the line that fits your stack.
02 / Key dates and timeline
Signed
May 17, 2024 by Gov. Jared Polis (SB 24-205)
Effective
February 1, 2026
First full reporting cycle
Annual impact assessments due within one year of deployment
Algorithmic-discrimination report
Within 90 days of discovery, to the Colorado AG
The Colorado AI Act was the first comprehensive US state consumer-protection AI law enacted. Texas TRAIGA took effect one month earlier, on January 1, 2026, but Colorado's scope is broader and most legal analysts treat it as the bellwether.
The federal executive order signed in December 2025 directed the US Attorney General to challenge state AI laws on commerce-clause and federal-preemption grounds. An EO can't overturn state law on its own. Until courts or Congress act, the Colorado AI Act remains enforceable and you must comply.
03 / What "high-risk AI" means here
"High-risk artificial intelligence system" under the Colorado AI Act is any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. The eight domains in section 01 above (employment, lending, healthcare, etc.) are the named scope.
Several common AI uses are excluded from "high-risk" by the statute, including anti-fraud technology that does not use facial recognition, anti-malware and anti-virus tools, calculators, cybersecurity tools, databases and data storage, firewalls, networking, spam and robocall filtering, spell-checkers, spreadsheets, and web caching and hosting, provided the technology does not make, and is not a substantial factor in making, a consequential decision.
The statutory carve-outs matter. A vendor that helps you process resumes is in scope. A vendor that runs your spam filter is not. The line is whether the AI influences the consequential decision itself, not whether the AI is "smart" by some other measure.
The law also reaches generative AI. If a generative AI tool is used to make or substantially influence a consequential decision, the deployer is on the hook, not just the model provider.
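The scoping test in this section reduces to two questions: does the tool's output substantially influence a decision, and is that decision in one of the named domains? A minimal sketch (illustrative names, not legal advice, and not part of the statute itself):

```python
from dataclasses import dataclass

# Domain labels follow the statute's consequential-decision areas.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

@dataclass
class AIUse:
    domain: str               # where the tool's output is used
    substantial_factor: bool  # does the output materially influence the decision?

def is_high_risk(use: AIUse) -> bool:
    """High-risk only if the system is a substantial factor in a
    consequential decision within a named domain."""
    return use.substantial_factor and use.domain in CONSEQUENTIAL_DOMAINS

# A resume scorer influences an employment decision: in scope.
print(is_high_risk(AIUse("employment", True)))       # True
# A spam filter never touches a consequential decision: out of scope.
print(is_high_risk(AIUse("it_operations", False)))   # False
```

Note that both conditions must hold: a spreadsheet used to tabulate loan applications is in a named domain but is not a substantial factor, so it stays out of scope.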
04 / What you must do
Deployer obligations break into four buckets.
Adopt a written risk management policy and program for each high-risk AI system you deploy. The program must be iterative, must be reviewed annually, and must specify the principles, processes, and personnel that govern AI use. The Colorado AG accepts NIST AI RMF, ISO/IEC 42001, or another nationally or internationally recognized risk-management framework as the basis for the program. Aligning to one of these frameworks gives you a rebuttable presumption that you used reasonable care.
For each high-risk AI deployment, run an impact assessment at least once a year and within 90 days of any intentional and substantial modification. The assessment must cover the system's purpose and use, known limitations, the categories of data it processes, the metrics used to evaluate performance, post-deployment monitoring procedures, and the categories of consumers affected. Keep impact assessments for three years after the last deployment.
Before or at the time of a consequential decision, give the consumer a plain-language statement that high-risk AI was used, describe the AI in general terms, and explain the decision's purpose and effect. If the decision is adverse to the consumer, give them an explanation of the principal reasons, the data used, and the right to correct the data. Allow the consumer to appeal the decision through human review when feasible.
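The adverse-decision notice above has fixed required elements: the AI disclosure, the principal reasons, the data used, the correction right, and the appeal path. One way to keep notices consistent is to assemble them from a template; the wording below is illustrative, not statutory text:

```python
def adverse_decision_notice(decision: str, reasons: list[str],
                            data_categories: list[str]) -> str:
    """Assemble a plain-language adverse-decision notice containing the
    elements a deployer must provide. Illustrative wording only."""
    reason_lines = "\n".join(f"  - {r}" for r in reasons)
    data_lines = ", ".join(data_categories)
    return (
        f"An AI system was a substantial factor in this decision: {decision}.\n"
        f"Principal reasons:\n{reason_lines}\n"
        f"Personal data used: {data_lines}.\n"
        "You may correct inaccurate personal data we used, and you may "
        "appeal this decision for review by a human."
    )

notice = adverse_decision_notice(
    "loan application declined",
    ["debt-to-income ratio above threshold", "short credit history"],
    ["credit report", "stated income"],
)
print(notice)
```

Templating the notice also gives you a record of exactly what each consumer was told, which feeds the audit trail discussed in section 05.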
If you discover that a high-risk AI system you deploy has caused algorithmic discrimination, report it to the Colorado Attorney General within 90 days. "Algorithmic discrimination" means unlawful differential treatment or impact based on a protected class. The duty applies regardless of the deployer's size.
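The 90-day clock runs from discovery, not from the date the discrimination occurred, so the trigger worth logging is the discovery date itself. A one-line sketch:

```python
from datetime import date, timedelta

def ag_report_deadline(discovered: date) -> date:
    """Algorithmic-discrimination reports are due to the Colorado AG
    within 90 days of the deployer discovering the discrimination."""
    return discovered + timedelta(days=90)

print(ag_report_deadline(date(2026, 3, 2)))  # 2026-05-31
```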
Developers have parallel obligations: provide documentation to deployers about intended uses, training-data provenance, performance metrics, and known harms; report substantial modifications; report algorithmic discrimination; publish a public summary of the high-risk AI systems they make available.
05 / Evidence the AG will ask for
The Colorado Attorney General's office has signaled what evidence it expects when investigating a complaint or a reported incident. Five artifacts come up first: the written risk management policy, the impact assessments, the consumer notices, the consequential-decision audit log, and any algorithmic-discrimination reports.
The most common gap is documentation showing the AI was deployed before the policy was written. Backdating a policy is worse than not having one. Build the policy first, then turn the AI on.
06 / How Northbeams maps to this
The Colorado AI Act assumes you know which AI tools your team uses, what data they process, and who reviewed each consequential decision. Most companies don't. Northbeams answers those three questions across browser, desktop, and CLI, then produces the audit-ready evidence pack the AG asks for.
Risk management program
Per-tool policy state with attribution and timestamps. Aligns to NIST AI RMF and ISO/IEC 42001 control language.
Impact assessment input
Discovery refreshed continuously across browser, desktop, and CLI. Categories include credentials, PII, source code, customer data, and contracts.
Consumer-decision audit log
SHA-256 signed CSV exports. Tamper-evident retention. Pre-mapped to SOC 2 CC7.2 and ISO 27001 A.12.4.
Algorithmic-discrimination triage
Tool sprawl trend, incident count by severity, and policy-change history. Board-ready PDF straight out of the dashboard.
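The tamper-evidence idea behind a SHA-256 signed export (this is the generic hashing technique, not Northbeams' actual implementation) is that the digest is recorded at export time, and any later edit to the CSV changes it:

```python
import hashlib

def export_digest(csv_bytes: bytes) -> str:
    """SHA-256 digest recorded alongside an audit-log export. Recomputing
    it at audit time and comparing detects any post-export modification."""
    return hashlib.sha256(csv_bytes).hexdigest()

# Hypothetical audit-log export.
original = b"timestamp,tool,decision,reviewer\n2026-02-03,resume-scorer,advance,jlee\n"
digest = export_digest(original)

# Verification at audit time: recompute and compare.
assert export_digest(original) == digest

# A single-field edit produces a completely different digest.
tampered = original.replace(b"advance", b"reject")
assert export_digest(tampered) != digest
```

The digest only proves the file is unchanged since export; pairing it with retention controls (so the recorded digests themselves can't be silently replaced) is what makes the log tamper-evident end to end.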
If you're a deployer who needs a defensible answer for the Colorado AG, Sentinel is the tier you'd buy. See the audit-ready evidence pack →
07 / FAQ
What does Northbeams cost? Free to discover. Pay to control. Sentinel ships the audit-ready evidence pack with one-click export.