Compliance brief / US state law

Texas TRAIGA. Prohibition first. Sandbox second.

The Texas Responsible AI Governance Act took effect January 1, 2026. Where Colorado regulates how you build and document AI, Texas tells you which AI uses are simply illegal in the state. Here's what TRAIGA prohibits, who it covers, and the evidence the Texas AG will ask for.

On this page

  01 Who this applies to
  02 Key dates and timeline
  03 What TRAIGA prohibits
  04 What you must do
  05 Evidence the AG will ask for
  06 How Northbeams maps to this
  07 FAQ

01 / Who this applies to

Develop, distribute, or deploy AI in Texas.

TRAIGA covers three roles. You're in scope as a developer if you build or substantially modify AI systems made available in Texas. You're a distributor if you make those systems available through a marketplace, a procurement contract, or a reseller channel. You're a deployer if you operate AI systems whose outputs reach Texans.

Headquarters location is not the deciding factor. A California SaaS vendor whose product is used by a Texas school district is in scope. A New York staffing firm whose AI screening tool ranks Texas applicants is in scope. Operating in or directing AI outputs at Texas is the trigger.

Texas state agencies are also covered, with additional restrictions on government use of AI for social scoring, biometric identification in public spaces, and unconstitutional surveillance.

Carve-outs exist for national security and certain federally contracted systems, where federal preemption applies. The TRAIGA carve-out list is narrower than industry hoped for during the 2025 legislative session.

02 / Key dates and timeline

From signing to first enforcement.

Signed: June 2025 by Gov. Greg Abbott (HB 149)

Effective: January 1, 2026

Enforcement begins: Notice and cure period applies in many cases before suit

AI Council seated: Standing body with rulemaking and advisory authority

TRAIGA became the first major US state AI law to take effect, one month before the Colorado AI Act. Most national-scope companies are working through both at the same time, plus the EU AI Act, plus state privacy law overlays.

The federal executive order signed in December 2025 directs the US Attorney General to challenge state AI laws on commerce-clause and federal-preemption grounds. Litigation is foreseeable. Until courts or Congress act, TRAIGA remains enforceable and you must comply.

03 / What TRAIGA prohibits

A list of categorical bans, not a risk framework.

TRAIGA names prohibited AI uses. Each prohibition has its own scope and definitions. The five most operationally relevant:

1. Behavioral manipulation toward harm.

You cannot deploy AI systems intended to manipulate human behavior in a way that materially encourages self-harm, criminal activity, or violence. Intent matters; an AI tool that incidentally surfaces dangerous content is treated differently than one designed to push it.

2. Unlawful discrimination.

You cannot use AI to discriminate on the basis of constitutionally protected characteristics. The cleanest analog is Title VII discrimination for employment plus the Texas Constitution's protected classes. Disparate-impact analysis is part of the test.

3. Government social scoring.

State agencies and contractors cannot operate AI systems that score Texans on the basis of their personal characteristics or social behavior to determine access to benefits, services, or rights. This mirrors a similar EU AI Act prohibition.

4. Non-consensual intimate imagery and CSAM.

AI systems that generate or distribute non-consensual intimate imagery, synthetic CSAM, or substantially similar content are prohibited outright. There is no sandbox carve-out for these uses.

5. Unconstitutional government surveillance.

Government use of AI for biometric identification in public spaces, mass surveillance without warrant, and similar Fourth-Amendment-implicating activities is constrained. Private-sector surveillance products sold to government have their own evidence obligations.

The list is not exhaustive. TRAIGA leaves room for the Texas AI Council and the AG to flag additional categories during rulemaking. Read the statute or talk to counsel for the lines that fit your stack.

04 / What you must do

Inventory. Disclose. Audit. Sandbox if useful.

TRAIGA is prohibition-first, not risk-management-first. Your obligations split into four buckets.

1. Inventory the AI in scope.

You cannot prove a TRAIGA prohibition was not violated by an AI system you didn't know existed. Every Texas-touching company should maintain a current inventory of AI tools across employee browsers, desktop apps, and CLI agents. The inventory should include category (productivity, generative, decision-support, surveillance), data classes processed, and policy state.
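A minimal sketch of what one inventory record might contain. The field names here are illustrative, not mandated by the statute or by any particular tool:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIToolRecord:
    """One row in a Texas-scope AI inventory (illustrative schema)."""
    tool: str               # vendor/product name
    surface: str            # "browser", "desktop", or "cli"
    category: str           # productivity / generative / decision-support / surveillance
    data_classes: list[str] # data classes the tool processes
    policy_state: str       # sanctioned / sandboxed / blocked
    last_seen: str          # ISO-8601 timestamp of most recent observed use

# Hypothetical record for a browser-based generative tool
record = AIToolRecord(
    tool="ExampleGPT",
    surface="browser",
    category="generative",
    data_classes=["employee_pii"],
    policy_state="sandboxed",
    last_seen=datetime.now(timezone.utc).isoformat(),
)
print(asdict(record))
```

The point of the timestamp is that the inventory is a living record, continuously updated as tools appear, rather than a quarterly snapshot.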

2. Disclose AI use to Texans, where applicable.

Where the law requires consumer or employee notice (notably for hiring, government services, and certain healthcare uses), give the notice in plain language at or before the point of decision. Allow a path to human review.

3. Run a discrimination audit.

For each AI system that influences employment, lending, housing, or other consequential outcomes touching Texas, document your disparate-impact analysis. A signed and dated discrimination audit is the most-asked-for evidence in early enforcement matters.
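TRAIGA does not prescribe an audit methodology, but a common starting point for disparate-impact testing is the EEOC four-fifths rule: a protected group's selection rate should be at least 80% of the most-favored group's rate. A minimal sketch, with hypothetical numbers:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group.
    Flags any group whose ratio falls below the four-fifths threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {
        g: {"rate": r, "ratio": r / best, "flag": r / best < threshold}
        for g, r in rates.items()
    }

# Hypothetical screening outcomes: (selected, total applicants)
results = adverse_impact_ratios({
    "group_a": (50, 100),  # 50% selection rate
    "group_b": (30, 100),  # 30% rate -> ratio 0.6, below 0.8, flagged
})
```

A flagged ratio is a signal to investigate, not a legal conclusion; a defensible audit also documents the methodology, the reviewer, and the date, since that is what gets asked for in enforcement.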

4. Use the sandbox if it fits.

If you're testing a novel AI capability that might brush against a prohibition while you study the data, the sandbox program is worth evaluating. Sandbox participation is opt-in, comes with reporting obligations, and gives you reduced regulatory exposure for a defined period. The sandbox does not authorize prohibited uses; it lowers risk on borderline ones.

05 / Evidence the AG will ask for

Inventory, audit, decision log.

The Texas AG's office has signaled what evidence comes up first in TRAIGA inquiries.

  1. The current AI inventory. Per-user, per-tool, per-surface (browser, desktop, CLI). Time-stamped. Updated continuously, not quarterly.
  2. The disparate-impact analysis. Signed, dated, methodology disclosed, results documented. Annual minimum.
  3. The consumer notice and human-review record. What the notice says, when it fired, who exercised human review.
  4. Tamper-evident decision log. Per-decision records of AI use with categorization. SHA-256 signed CSV exports are the bar; an unsigned spreadsheet is not.
  5. Sandbox reports if you participate. Whatever the program-specific reporting mandates require.
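As an illustration of the tamper-evidence idea (not Northbeams' actual export format), hashing the exact bytes of a CSV export with SHA-256 lets a reviewer detect any later modification:

```python
import csv
import hashlib
import io

def export_hashed_csv(rows, fieldnames):
    """Serialize decision-log rows to CSV and compute a SHA-256 digest
    over the exact bytes, so any later edit to the file is detectable."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    data = buf.getvalue().encode("utf-8")
    return data, hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """Recompute the digest and compare against the recorded value."""
    return hashlib.sha256(data).hexdigest() == expected_digest

# Hypothetical per-decision records
rows = [{"ts": "2026-01-15T09:30:00Z", "tool": "ExampleGPT", "category": "generative"}]
data, digest = export_hashed_csv(rows, ["ts", "tool", "category"])
```

Strictly speaking, a bare hash proves integrity, not origin; in practice the digest is signed with a key or stored out of band so the exporter cannot silently regenerate both file and digest.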

06 / How Northbeams maps to this

Inventory, classification, signed evidence.

TRAIGA assumes you know which AI tools your team uses and what they do. Most companies don't. Northbeams answers that across browser, desktop, and CLI, then produces the audit-ready evidence pack.

Inventory

Continuous discovery across browser, desktop, and CLI.

Every AI tool your team uses appears in the dashboard, dated and categorized. No quarterly survey, no Notion doc that goes stale.

Per-tool policy

Sanctioned, sandboxed, or blocked.

State changes are timestamped and signed. The AG's "what controls do you operate?" question has an export attached.

Decision log

Immutable signed event log.

SHA-256 signed CSV exports. Tamper-evident retention. Pre-mapped to SOC 2 CC7.2 and ISO 27001 A.12.4.

Discrimination audit input

Per-user activity for fairness analysis.

The per-user, per-tool, per-category data your discrimination auditor needs for disparate-impact testing. Without prompt content; an on-device classifier preserves privacy.

If you're a Texas-touching company that needs a defensible answer for the AG, Sentinel is the tier you'd buy. See the audit-ready evidence pack →

07 / FAQ

Common questions about Texas TRAIGA.

Does Texas TRAIGA apply to my company if we're not based in Texas?
TRAIGA reaches companies that develop, distribute, or deploy AI in Texas, plus companies whose AI systems produce outputs targeted at Texas residents. Where the company is headquartered is not the deciding factor. If you have customers, employees, or operations in Texas, assume coverage and verify with counsel.
What does TRAIGA actually prohibit?
TRAIGA prohibits AI uses that intentionally manipulate behavior to encourage self-harm or criminal activity, that discriminate on the basis of constitutionally protected characteristics, that perform government social scoring, that produce non-consensual intimate imagery or CSAM, and that conduct unconstitutional government surveillance. Each prohibition has its own scope and definitions.
Is there a regulatory sandbox?
Yes. TRAIGA creates a sandbox program for testing AI systems with reduced regulatory exposure for a defined period. Participation is opt-in, includes reporting obligations, and is administered by the state. The sandbox is not a license to violate the prohibitions.
Who enforces TRAIGA?
The Texas Attorney General has primary enforcement authority. Civil penalties apply per violation. There is no private right of action. Companies receive notice and a cure period before enforcement in many cases. Read the statute or talk to counsel for the specific procedural rules.
How does TRAIGA compare to the Colorado AI Act?
TRAIGA is prohibition-driven: it lists categories of AI use that are illegal in Texas. Colorado is risk-management-driven: it requires impact assessments and a written program for high-risk AI in named domains. Most multi-state companies need to comply with both at once.
What is the Texas AI Council?
TRAIGA establishes a state-level AI Council to advise on AI policy, recommend updates to the law, and coordinate with the AG on enforcement priorities. Industry has standing to engage with the Council during rulemaking.

Defensible answer for the Texas AG. By Friday.

Free to discover. Pay to control. Sentinel ships the audit-ready evidence pack with one-click export.