Compliance brief / US state law

Colorado AI Act. The deployer's field guide.

If you use AI to make hiring, lending, insurance, housing, healthcare, education, or government-services decisions about Colorado consumers, the Colorado AI Act covers you. It took effect February 1, 2026. Here's what you must do, what evidence the AG will ask for, and where Northbeams fits.

On this page

  01 Who this applies to
  02 Key dates and timeline
  03 What "high-risk AI" means here
  04 What you must do
  05 Evidence the AG will ask for
  06 How Northbeams maps to this
  07 FAQ

01 / Who this applies to

You're a deployer if your AI touches a consequential decision.

The Colorado AI Act distinguishes developers from deployers. Most small and mid-sized companies are deployers: you don't build AI of your own, you buy AI tools and feed them your data to make decisions.

You're a deployer if you use a high-risk AI system to make, or to be a substantial factor in making, a consequential decision about a Colorado consumer. The statute defines a consequential decision as one with a material legal or similarly significant effect on a consumer's access to:

  1. Education enrollment or opportunity
  2. Employment or an employment opportunity
  3. Essential government services
  4. Financial or lending services
  5. Healthcare services
  6. Housing
  7. Insurance
  8. Legal services

If your team uses ChatGPT to draft a marketing email, you're not covered. If your team uses a vendor product that scores resumes, ranks loan applicants, sets premiums, allocates housing, or triages clinical care, you almost certainly are.

You're a developer if you build or intentionally and substantially modify a high-risk AI system. Re-prompting an off-the-shelf model with a system prompt is generally not "substantial modification." Fine-tuning a model on your proprietary data probably is. Read the law or talk to counsel for the line that fits your stack.

02 / Key dates and timeline

May 2024 to February 2026.

Signed

May 17, 2024 by Gov. Jared Polis (SB 24-205)

Effective

February 1, 2026

First full reporting cycle

Annual impact assessments due within one year of deployment

Algorithmic-discrimination report

Within 90 days of discovery, to the Colorado AG

The Colorado AI Act, signed in May 2024, was the first comprehensive US state consumer-protection AI law to be enacted. Texas's TRAIGA took effect one month earlier, on January 1, 2026, but Colorado's scope is broader and most legal analysts treat it as the bellwether.

The federal executive order signed in December 2025 directed the US Attorney General to challenge state AI laws on commerce-clause and federal-preemption grounds. An EO can't overturn state law on its own. Until courts or Congress act, the Colorado AI Act remains enforceable and you must comply.

03 / What "high-risk AI" means here

Decision-influencing AI in eight named domains.

"High-risk artificial intelligence system" under the Colorado AI Act is any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. The eight domains in section 01 above (employment, lending, healthcare, etc.) are the named scope.

Several common AI uses are excluded from "high-risk" by the statute, including:

  1. Anti-fraud technology that does not use facial recognition
  2. Anti-malware and anti-virus tools
  3. Calculators and spreadsheets
  4. Cybersecurity tools and firewalls
  5. Databases and data storage
  6. Networking, web caching, and web hosting
  7. Spam- and robocall-filtering
  8. Spell-checking

The statutory carve-outs matter. A vendor that helps you process resumes is in scope. A vendor that runs your spam filter is not. The line is whether the AI influences the consequential decision itself, not whether the AI is "smart" by some other measure.

The law also covers generative AI. If a generative AI tool is used to make or is a substantial factor in a consequential decision, the deployer is on the hook, not just the model provider.

04 / What you must do

Run a risk program. Document. Notify. Report.

Deployer obligations break into four buckets.

1. Build a risk management program.

Adopt a written risk management policy and program for each high-risk AI system you deploy. The program must be iterative, must be reviewed annually, and must specify the principles, processes, and personnel that govern AI use. The Colorado AG accepts NIST AI RMF, ISO/IEC 42001, or another nationally or internationally recognized risk-management framework as the basis for the program. Aligning to one of these frameworks gives you a rebuttable presumption that you used reasonable care.

2. Run an annual impact assessment.

For each high-risk AI deployment, run an impact assessment at least once a year and within 90 days of any intentional and substantial modification. The assessment must cover the system's purpose and use, known limitations, the categories of data it processes, the metrics used to evaluate performance, post-deployment monitoring procedures, and the categories of consumers affected. Keep impact assessments for three years after the last deployment.
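The cadence above reduces to simple date arithmetic. A minimal sketch of a deadline tracker, assuming a 365-day year for the annual cycle; the function and field names are illustrative, not from the statute:

```python
from datetime import date, timedelta
from typing import Optional


def assessment_deadlines(deployed: date, last_assessed: date,
                         modified: Optional[date] = None,
                         retired: Optional[date] = None) -> dict:
    """Compute key Colorado AI Act dates for one high-risk deployment.

    Cadence per the brief: annual assessment, a fresh assessment within
    90 days of an intentional and substantial modification, and
    three-year retention after the last deployment.
    """
    deadlines = {"next_annual_assessment": last_assessed + timedelta(days=365)}
    if modified is not None:
        # A substantial modification starts its own 90-day clock.
        deadlines["post_modification_assessment"] = modified + timedelta(days=90)
    if retired is not None:
        # Keep impact assessments for three years after the last deployment.
        deadlines["retain_assessments_until"] = retired + timedelta(days=3 * 365)
    return deadlines


d = assessment_deadlines(date(2026, 2, 1), date(2026, 2, 1),
                         modified=date(2026, 6, 1))
print(d["post_modification_assessment"])  # 2026-08-30
```

The point of keeping this in code (or a tracked calendar) rather than in someone's head: the 90-day modification clock runs independently of the annual clock, and missing either one is a documentation gap the AG can see.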

3. Notify and explain to consumers.

Before or at the time of a consequential decision, give the consumer a plain-language statement that high-risk AI was used, describe the AI in general terms, and explain the decision's purpose and effect. If the decision is adverse to the consumer, give them an explanation of the principal reasons, the data used, and the right to correct the data. Allow the consumer to appeal the decision through human review when feasible.

4. Report algorithmic discrimination.

If you discover that a high-risk AI system you deploy has caused algorithmic discrimination, report it to the Colorado Attorney General within 90 days. "Algorithmic discrimination" means unlawful differential treatment or impact based on a protected class. The duty applies regardless of the deployer's size.

Developers have parallel obligations: provide documentation to deployers about intended uses, training-data provenance, performance metrics, and known harms; report substantial modifications; report algorithmic discrimination; publish a public summary of the high-risk AI systems they make available.

05 / Evidence the AG will ask for

Five artifacts make the difference.

The Colorado Attorney General's office has signaled what evidence it expects when investigating a complaint or a reported incident. Five artifacts come up first.

  1. The risk management policy. A signed, dated, and version-controlled document that names the framework you follow (NIST AI RMF or ISO/IEC 42001), the principles you apply, the personnel responsible, and the review cadence.
  2. The impact assessment for each high-risk AI deployment. Annual at minimum. Updated within 90 days of substantial modification. Three-year retention.
  3. The consumer notice and explanation procedure. A copy of the notice consumers receive, the workflow that triggers it, and the audit log of when it fired.
  4. The audit log of high-risk AI use. Per-decision records of which AI system was used, what data it processed, what category the decision belonged to, who reviewed adverse outcomes, and when. Tamper-evident retention is the bar.
  5. Algorithmic-discrimination reporting record. Any report you filed with the AG within 90 days of discovery, plus the internal investigation documentation that led to it.
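"Tamper-evident" in artifact 4 usually means each log entry commits to the one before it, so editing history breaks the chain. A minimal hash-chain sketch; the record fields are illustrative, and nothing here is mandated by the statute:

```python
import hashlib
import json


def append_entry(log: list, record: dict) -> dict:
    """Append a decision record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    entry = {
        "record": record,
        "prev_hash": prev_hash,
        # The hash commits to the record AND the previous hash, so
        # altering any earlier entry invalidates every later hash.
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    }
    log.append(entry)
    return entry


def verify_chain(log: list) -> bool:
    """Recompute every hash; any tampering shows up as a mismatch."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True


log = []
append_entry(log, {"system": "resume-scorer", "category": "employment",
                   "reviewer": "j.doe", "outcome": "adverse"})
assert verify_chain(log)
log[0]["record"]["outcome"] = "approved"  # simulate tampering
assert not verify_chain(log)
```

Production systems add signing keys and write-once storage on top of this, but the chain is the property an investigator can independently re-verify.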

Documentation that the AI was deployed before the policy was written is the most common gap. Backdating a policy is worse than not having one. Build the policy first, then turn the AI on.

06 / How Northbeams maps to this

Inventory, classification, evidence pack.

The Colorado AI Act assumes you know which AI tools your team uses, what data they process, and who reviewed each consequential decision. Most companies don't. Northbeams answers those three questions across browser, desktop, and CLI, then produces the audit-ready evidence pack the AG asks for.

Risk management program

One dashboard for sanctioned, sandboxed, and blocked AI use.

Per-tool policy state with attribution and timestamps. Aligns to NIST AI RMF and ISO/IEC 42001 control language.

Impact assessment input

Inventory by user, time, surface, and category.

Discovery refreshed continuously across browser, desktop, and CLI. Categories include credentials, PII, source code, customer data, and contracts.

Consumer-decision audit log

Immutable signed event log.

SHA-256 signed CSV exports. Tamper-evident retention. Pre-mapped to SOC 2 CC7.2 and ISO 27001 A.12.4.
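A recipient can independently check an exported CSV against its published digest in a few lines. A generic sketch; the file names are illustrative and the actual export format may differ:

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Stream the file in chunks so large exports never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare against the digest shipped alongside the export, e.g.:
# expected = open("audit_log.csv.sha256").read().split()[0]
# assert sha256_of_file("audit_log.csv") == expected
```

Any single-byte change to the CSV produces a different digest, which is what makes the export usable as evidence rather than just a spreadsheet.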

Algorithmic-discrimination triage

Quarterly executive risk report.

Tool sprawl trend, incident count by severity, and policy-change history. Board-ready PDF straight out of the dashboard.

If you're a deployer who needs a defensible answer for the Colorado AG, Sentinel is the tier you'd buy. See the audit-ready evidence pack →

07 / FAQ

Common questions about the Colorado AI Act.

Does the Colorado AI Act apply to my company if we don't use AI to make decisions?
The Act covers deployers and developers of high-risk AI used in consequential decisions about Colorado consumers. If your AI tools influence hiring, lending, housing, insurance, education, or healthcare decisions, you're a deployer. If your team uses ChatGPT for marketing copy, you're not.
What is a "consequential decision"?
A decision that has a material legal or similarly significant effect on a consumer's access to education, employment, essential government services, financial or lending services, healthcare services, housing, insurance, or legal services.
Do small companies have to comply?
Yes if you're a deployer of high-risk AI. Small-deployer relief is narrower than under some state privacy laws. Companies with under 50 full-time employees that don't use their own data to train the AI get reduced documentation obligations, but the algorithmic-discrimination duty applies regardless of headcount.
What's the safe harbor?
Following the NIST AI Risk Management Framework or another nationally or internationally recognized risk framework gives you a rebuttable presumption that you've used reasonable care under the Act. ISO/IEC 42001 also qualifies.
Who enforces it?
The Colorado Attorney General has exclusive enforcement authority. There is no private right of action. Violations are unfair trade practices under the Colorado Consumer Protection Act, with civil penalties per violation.
Will the federal executive order preempt the Colorado AI Act?
Not on its own. An executive order can't overturn state law. The Trump administration's December 2025 EO created an AG task force to challenge state AI laws on commerce-clause and federal-preemption grounds, but until courts or Congress act, the Colorado AI Act remains enforceable and you must comply.
What evidence does the Colorado AG actually look at?
Your risk management policy and program, your annual impact assessments for each high-risk AI deployment, your consumer notice and explanation procedures, and your record of any algorithmic-discrimination report you filed with the AG within 90 days of discovery. Documentation that the AI was deployed before the policy was written is the most common gap.

Defensible answer for the Colorado AG. By Friday.

Free to discover. Pay to control. Sentinel ships the audit-ready evidence pack with one-click export.