California has the largest stack of state AI laws in the US. SB 53 governs frontier developers. AB 2013 governs generative AI providers. SB 942 governs AI-generated content. AB 489 governs healthcare AI. SB 243 governs companion chatbots. Here's what each one requires and which ones apply to you.
01 / Who's covered
California's stack is vertical, not horizontal. Each law targets a specific AI activity, so which ones apply to you depends entirely on what you do:
- You train frontier foundation models → SB 53
- You provide a generative AI system or service → AB 2013 (and SB 942 at scale)
- You publish or redistribute AI-generated content → SB 942
- Your AI interacts with patients or health decisions → AB 489
- You operate a companion-style chatbot → SB 243
If none of those describe you, California's AI stack is mostly background context. Your bigger California exposure is probably CCPA + the Colorado AI Act + the EU AI Act, depending on where your customers and employees are.
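The vertical carve-up can be sketched as a small activity-to-statute lookup. This is illustrative only, not legal advice: the statute names are real, but the activity labels and the mapping logic are simplifications of this article's summary.

```python
# Illustrative mapping of AI activities to the California statutes covered
# in this article. Activity keys are invented labels, not legal categories.
LAW_BY_ACTIVITY = {
    "trains_frontier_models": ["SB 53"],
    "provides_generative_ai": ["AB 2013", "SB 942"],
    "publishes_ai_content": ["SB 942"],
    "healthcare_ai": ["AB 489"],
    "companion_chatbot": ["SB 243"],
}

def applicable_laws(activities: set) -> list:
    """Return the deduplicated, sorted list of statutes implicated by a set of activities."""
    laws = set()
    for activity in activities:
        laws.update(LAW_BY_ACTIVITY.get(activity, []))
    return sorted(laws)

print(applicable_laws({"provides_generative_ai", "companion_chatbot"}))
# → ['AB 2013', 'SB 243', 'SB 942']
```

A real coverage analysis also depends on thresholds (compute spend for SB 53, monthly California users for SB 942) that a lookup table can't capture.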
02 / SB 53: California Transparency in Frontier AI Act
SB 53 is California's frontier AI law. It targets developers training the largest foundation models, defined by compute and spend thresholds. Most companies are not in scope.
If you are in scope, SB 53 requires you to:
- Publish a frontier AI framework describing how you assess and mitigate catastrophic risks
- Publish transparency reports when deploying new frontier models
- Report critical safety incidents to California's Office of Emergency Services
- Maintain whistleblower protections for employees who raise safety concerns
SB 53's enforcement leans on disclosure and documentation rather than approval. The California AG can pursue civil penalties for material misstatements or omissions in the published risk framework.
03 / AB 2013: Training-data transparency
AB 2013 requires generative AI service providers to publish, on their public websites, a summary of the data used to train their generative AI systems. The summary must describe data sources at a category level (for example, "publicly available web data," "licensed text from publishers," "user-generated content under identified terms"), the rough timeframes of collection, whether copyrighted material was included, and the data-cleaning practices applied.
The disclosure does not require revealing exact datasets or proprietary methods. It does require a summary detailed enough that a reader can understand at a category level what the model was trained on.
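A category-level summary of the kind described above might look like the following. The field names and values here are hypothetical; AB 2013 prescribes topics to cover, not a schema.

```python
import json

# Hypothetical shape for an AB 2013-style training-data summary.
# Every field name and value below is illustrative, not mandated.
training_data_summary = {
    "data_sources": [
        {"category": "publicly available web data", "collection_window": "2019-2024"},
        {"category": "licensed text from publishers", "collection_window": "2021-2024"},
    ],
    "includes_copyrighted_material": True,
    "cleaning_practices": [
        "deduplication",
        "removal of known malware and abuse material",
        "PII scrubbing on user-generated sources",
    ],
}

print(json.dumps(training_data_summary, indent=2))
```

Note what is absent: no exact dataset names, no proprietary pipeline details — just enough for a reader (or a downstream risk assessor) to understand the categories involved.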
Most SMBs are downstream consumers of generative AI, not providers. AB 2013 affects you indirectly when your vendor publishes its summary; that's relevant input for your own risk assessments.
04 / SB 942: California AI Transparency Act
SB 942 requires generative AI providers serving more than 1 million California users monthly to:
- Offer a free, publicly accessible AI detection tool
- Give users the option of a manifest (clearly visible) disclosure on AI-generated content
- Embed a latent (machine-readable) disclosure identifying the content as AI-generated
Smaller services carry a lighter notification obligation: prompting users to disclose when they share AI-generated content. The watermark machinery falls primarily on the largest providers.
If you republish AI-generated content (advertising agencies, content marketplaces, news organizations using AI assistance), the obligations cascade. Your contracts with AI vendors should require them to deliver content with the SB 942 disclosures intact.
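To make the cascade concrete, here is a toy sketch of a latent disclosure record travelling with a piece of AI-generated content. Production systems use provenance standards such as C2PA manifests embedded in file metadata; this sidecar dict only illustrates the information SB 942 expects to survive republication, and all names in it are assumptions.

```python
import hashlib

# Toy "latent disclosure" record. Real latent disclosures are embedded in
# content metadata via provenance standards (e.g. C2PA), not a sidecar dict.
def make_latent_disclosure(content: bytes, provider: str, system: str) -> dict:
    return {
        "ai_generated": True,
        "provider": provider,
        "system": system,
        # Hash ties the record to one specific asset.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def disclosure_intact(content: bytes, record: dict) -> bool:
    """A republisher's check that the disclosure still matches the content."""
    return (
        record.get("ai_generated") is True
        and record.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )

asset = b"...generated image bytes..."
record = make_latent_disclosure(asset, provider="ExampleAI", system="imagegen-2")
print(disclosure_intact(asset, record))  # True
```

The useful property for contracts: a check like `disclosure_intact` gives republishers a mechanical way to verify that a vendor delivered content with its disclosures attached, rather than relying on attestation alone.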
05 / AB 489: Healthcare AI disclosures
AB 489 covers AI used in healthcare. The core obligations:
- AI systems interacting with patients must not use titles, letters, or terms (such as "doctor" or "nurse") that imply care or advice from a licensed human professional
- California's healthcare licensing boards can pursue violations against the entity deploying the AI
Healthcare is the domain with the highest overlap to other AI laws (HIPAA at the federal level, the Colorado AI Act for "consequential decisions" in healthcare, and the EU AI Act for healthcare AI in EU markets). Healthcare deployers running US-state-level audits often need an evidence pack that satisfies all four at once.
06 / SB 243: Companion chatbot safety
SB 243 covers AI companion chatbots: products marketed or used as friends, partners, therapists, or persistent emotional companions. Operators of these products must:
- Clearly disclose that the chatbot is AI wherever a reasonable user could be misled
- Maintain protocols for responding to suicidal ideation and self-harm, including referrals to crisis services
- Give known minors periodic reminders that they are talking to an AI and should take breaks
SB 243 sits next to a growing set of state laws focused on AI-companion safety (New York's AI-companion safeguards among them). If your product has any companion-style affordance and any California users, take this one seriously even if "companion chatbot" was not your design intent.
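One SB 243-style safeguard, periodic AI-status reminders for known minors, can be sketched as a simple interval check. The three-hour cadence here is an assumption for illustration; verify the statute's current text for the actual triggers and intervals before building to it.

```python
from datetime import datetime, timedelta
from typing import Optional

# Assumed cadence for the example — confirm against the statute's text.
REMINDER_INTERVAL = timedelta(hours=3)

def needs_ai_reminder(is_minor: bool, last_reminder: Optional[datetime], now: datetime) -> bool:
    """Decide whether to re-disclose that the companion is software."""
    if not is_minor:
        return False
    if last_reminder is None:
        return True  # disclose at session start
    return now - last_reminder >= REMINDER_INTERVAL

now = datetime(2025, 6, 1, 12, 0)
print(needs_ai_reminder(True, now - timedelta(hours=4), now))     # True
print(needs_ai_reminder(True, now - timedelta(minutes=30), now))  # False
```

The self-harm protocols SB 243 requires are the harder engineering problem; a timer is the easy part, and it only matters if the disclosure actually interrupts the conversation UI.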
07 / How Northbeams maps to this
California's AI stack assumes you know which AI tools are in use, what categories of data they touch, and how you're enforcing per-tool policy. Most companies don't. Northbeams answers those three questions across browser, desktop, and CLI, then produces the audit-ready evidence pack the AG and CPPA expect.
AB 2013 vendor risk
Northbeams discovers and labels each generative AI tool your team touches. Cross-reference with the vendor's published training-data summary for your risk file.
SB 942 republish risk
If your team uses AI to draft, edit, or generate content, Northbeams logs which tools were involved and when, by user.
AB 489 healthcare evidence
Block AI tools that touch PHI from non-sanctioned destinations. Sandbox the ones that are clinically useful. Allow the ones explicitly cleared. The signed log is the evidence file.
SB 53 frontier safe-harbor
Northbeams tracks AI agents on developer laptops (Claude Code, Aider) and desktop Mac and PC apps. The audit log shows which agents touched proprietary data during model training.
If you're a California-touching company that needs a defensible answer for the AG or CPPA, Sentinel is the tier you'd buy. See the audit-ready evidence pack →
08 / FAQ
Free to discover. Pay to control. Sentinel ships the audit-ready evidence pack with one-click export.