Compliance brief / international law

EU AI Act. Risk tiers, Article 4, and 7% of revenue.

The EU AI Act (Regulation (EU) 2024/1689) is the most aggressive AI law globally. It uses risk tiers, applies extraterritorially the way the GDPR does, and carries penalties up to 7% of global annual turnover. Most obligations, including the high-risk rules under Annex III, apply from August 2, 2026; Article 6(1) obligations for AI embedded in regulated products follow on August 2, 2027. Here's what non-EU companies actually need to do.

On this page

  01 Who's covered
  02 The four risk tiers
  03 Article 4 AI literacy
  04 High-risk system obligations
  05 Key dates and timeline
  06 How Northbeams maps to this
  07 FAQ

01 / Who's covered

Provider, deployer, importer, distributor, or product manufacturer.

The EU AI Act assigns roles. Your obligations depend on which role you play with respect to a given AI system.

The Act is extraterritorial. A non-EU SaaS company that places its AI product on the EU market, including by selling to an EU enterprise, is a provider for that system. A US staffing firm whose AI screening tool ranks EU candidates is a deployer whose output is used in the EU. A Berlin company that first places a US-built AI tool on the EU market is an importer. Headquarters location is not the deciding factor.

02 / The four risk tiers

Unacceptable. High. Limited. Minimal.

The Act sorts AI systems into four tiers based on the risks they pose. Each tier has its own obligations.

Unacceptable risk: banned.

Prohibited under Article 5. The list includes social scoring by public authorities, real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions), exploitation of vulnerabilities, manipulative subliminal techniques, predictive policing based solely on profiling, untargeted scraping of facial images for biometric databases, and emotion recognition in workplace and education contexts (with narrow medical and safety exceptions). These prohibitions took effect on February 2, 2025.

High risk: heaviest obligations.

Two routes lead to high-risk classification. Annex III lists named high-risk uses: biometric identification, critical infrastructure, education, employment, essential services (including credit), law enforcement, migration and border control, administration of justice, and democratic processes. Article 6(1) treats AI systems that are safety components of regulated products (machinery, medical devices, toys, vehicles) as high-risk. High-risk obligations under Annex III apply from August 2, 2026; Article 6(1) obligations follow on August 2, 2027.

Limited risk: transparency duties.

Limited-risk AI systems (chatbots, emotion recognition, biometric categorization, deepfakes, AI-generated content) carry transparency obligations: users must know they are interacting with AI, AI-generated content must be labeled, deepfakes of real people must be disclosed.

Minimal risk: no specific obligation.

Most AI systems fall here: spam filters, recommendation engines, AI-assisted productivity tools without consequential decisions. The Act encourages voluntary codes of conduct but imposes no specific obligation. Article 4 AI literacy still applies.
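The tier walkthrough above can be sketched as a simple triage helper. This is illustrative only: the flags and their ordering are simplified assumptions for the sketch, not the Act's legal tests under Articles 5 and 6 or Annex III.

```python
# Illustrative triage of an AI use case into the Act's four tiers.
# The boolean flags are simplified stand-ins for legal analysis.
from dataclasses import dataclass

@dataclass
class UseCase:
    prohibited_practice: bool    # e.g. social scoring (Article 5)
    annex_iii_area: bool         # e.g. employment screening (Annex III)
    interacts_with_people: bool  # e.g. chatbot, generated content

def risk_tier(uc: UseCase) -> str:
    """Return the first (most severe) tier that matches, top-down."""
    if uc.prohibited_practice:
        return "unacceptable"
    if uc.annex_iii_area:
        return "high"
    if uc.interacts_with_people:
        return "limited"
    return "minimal"

print(risk_tier(UseCase(False, True, True)))  # employment screening -> high
```

The ordering matters: a chatbot used for employment screening is high-risk, not merely limited-risk, because the most severe matching tier wins.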

03 / Article 4 AI literacy

The article almost every company underestimates.

Article 4 is short, plainly written, and broadly applicable. It says providers and deployers of AI systems must take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI on their behalf.

Three things make Article 4 the obligation companies most often miss: it reads as a soft duty but is enforceable, it applies regardless of risk tier, and it has been in force since February 2, 2025.

Concretely, an Article 4 evidence file usually contains: a written AI literacy program, a record of which employees completed which training, the dated AI policy, the inventory of AI tools the program covers, and the named owner of the program. The Northbeams audit log shows which AI tools were actually used by which employees during the period in question, which is the inventory side of the evidence.
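The five artifacts above lend themselves to a simple completeness check. A minimal sketch follows; the dictionary keys and file layout are hypothetical assumptions for illustration, not a Northbeams or regulator format.

```python
# Sketch of an Article 4 evidence-file completeness check.
# Artifact names mirror the list in the text; the layout is assumed.
REQUIRED_ARTIFACTS = {
    "literacy_program": "written AI literacy program",
    "training_records": "record of which employees completed which training",
    "ai_policy": "dated AI policy",
    "tool_inventory": "inventory of AI tools the program covers",
    "program_owner": "named owner of the program",
}

def missing_artifacts(evidence_file: dict) -> list[str]:
    """Return human-readable names of artifacts not yet collected."""
    return [desc for key, desc in REQUIRED_ARTIFACTS.items()
            if not evidence_file.get(key)]

# Partially assembled evidence file: three artifacts still missing.
gaps = missing_artifacts({"literacy_program": "v2.pdf",
                          "ai_policy": "2025-01-10"})
print(gaps)
```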

04 / High-risk system obligations

Risk management, data quality, transparency, oversight, accuracy.

High-risk AI systems carry the heaviest obligations under the Act. The list of duties is long; the operational shape is roughly: a risk management system run across the lifecycle (Article 9), data governance and quality criteria (Article 10), technical documentation (Article 11), automatic record-keeping (Article 12), transparency and instructions for use (Article 13), human oversight by design (Article 14), and accuracy, robustness, and cybersecurity (Article 15).

Deployers of high-risk AI also have specific duties (Article 26): assign human oversight, ensure input data is relevant, monitor operation, keep logs, inform affected persons in some cases, and conduct fundamental rights impact assessments where the deployer is a public body or provides public services.

05 / Key dates and timeline

Phased applicability through August 2026.

Adopted

June 13, 2024 (Regulation 2024/1689)

Entered into force

August 1, 2024

Prohibited practices + Article 4

February 2, 2025

GPAI + governance

August 2, 2025

Full applicability + Annex III high-risk obligations

August 2, 2026

Article 6(1) high-risk (regulated products)

August 2, 2027

The phased timeline matters because each phase added a new bucket of obligations. The order also reveals the European Commission's priorities: first ban the worst practices and require AI literacy, then govern general-purpose AI models, then enforce the heavy high-risk system stack, then close the loop on regulated products.

06 / How Northbeams maps to this

Article 4 inventory. Article 12 logs. Article 14 oversight.

Three EU AI Act articles drive most of the operational evidence load. Northbeams produces the data the auditor needs for each.

Article 4 AI literacy

Audit-ready inventory of AI tools in use.

The literacy program needs a defensible list of which tools the program covers. Northbeams produces it across browser, desktop, and CLI. By user, by date.

Article 12 record-keeping

Immutable signed event log.

SHA-256 signed CSV exports. Tamper-evident retention. Pre-mapped to SOC 2 CC7.2 and ISO 27001 A.12.4.

Article 13 transparency

Per-tool category and policy state.

What each AI tool does, what data category it touches, and which policy applies. The information you'd cite in instructions for use.

Article 14 human oversight

Per-tool policy: sanctioned, sandboxed, or blocked.

Human-set state changes are timestamped and signed. The override path is in the dashboard, not buried in a vendor backend.
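A timestamped, signed state change can be sketched as follows. HMAC-SHA256 here stands in for whatever signing scheme the product actually uses; the record fields and key handling are assumptions for illustration.

```python
# Sketch: tamper-evident record of a human policy change
# (sanctioned / sandboxed / blocked). HMAC-SHA256 is an assumed scheme.
import hashlib, hmac, json
from datetime import datetime, timezone

def sign_state_change(key: bytes, tool: str, new_state: str, actor: str) -> dict:
    """Build a state-change record and attach an HMAC over its fields."""
    record = {
        "tool": tool,
        "state": new_state,  # "sanctioned" | "sandboxed" | "blocked"
        "actor": actor,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_state_change(key: bytes, record: dict) -> bool:
    """Recompute the HMAC over the record minus its signature."""
    sig = record.pop("sig")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    record["sig"] = sig  # restore so the record is unchanged
    return hmac.compare_digest(sig, expected)
```

Editing any field after the fact, say flipping "sandboxed" to "sanctioned", invalidates the signature, which is the property an auditor checks.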

For the article-by-article checklist, the EU AI Act readiness PDF walks through what evidence applies to non-EU SMBs and where Northbeams fits. Free.

07 / FAQ

Common questions about the EU AI Act.

Does the EU AI Act apply to my company if we're not based in the EU?
Yes if your AI system is placed on the market or put into service in the EU, or if its output is used in the EU. The Act follows the same extraterritorial pattern as the GDPR. Headquarters location does not determine scope; whether your AI affects EU users does.
What is Article 4 AI literacy?
Article 4 requires providers and deployers of AI systems to ensure their staff and other relevant persons have a sufficient level of AI literacy, taking into account technical knowledge, the context of AI use, and the persons affected. Article 4 took effect on February 2, 2025. It applies to almost every organization using AI in the EU, not just providers of high-risk systems.
What counts as "high-risk" AI?
Annex III lists the named high-risk uses: biometric identification, critical infrastructure, education, employment, essential private and public services (including credit), law enforcement, migration and border control, administration of justice, and democratic processes. AI systems that are safety components of regulated products (machinery, medical devices, toys) are also treated as high-risk.
What are the penalties under the EU AI Act?
Up to €35 million or 7% of global annual turnover for prohibited AI practices. Up to €15 million or 3% for most other infringements (high-risk system obligations, governance, transparency). Up to €7.5 million or 1% for supplying incorrect information to authorities. In each tier the higher of the two applies, except for SMEs and startups, where the lower of the two applies.
When does the EU AI Act actually become enforceable?
Phased. February 2, 2025: prohibited AI practices and Article 4 AI literacy. August 2, 2025: governance and general-purpose AI model obligations. August 2, 2026: general applicability, including high-risk AI system requirements under Annex III. August 2, 2027: high-risk AI under Article 6(1) (safety components of regulated products).
Is there a Northbeams resource for EU AI Act readiness?
Yes. The Northbeams EU AI Act readiness PDF is a 7-page printable checklist that maps the relevant articles to the evidence Northbeams produces. Free; emailed to your inbox.

Article 4 evidence in your inbox. By Friday.

Free 7-page PDF. Article-by-article checklist mapped to the evidence Northbeams produces.