Coverage Scorecard

How comprehensive is the AI tool catalogue?

Q2 2026 audit · Last refreshed May 6, 2026

92%
of the top-100 enterprise browser AI tools are catalogued today. The Northbeams platform also catches AI on the desktop, in the terminal, and now in the MCP path between coding agents and their tools - one product, four surfaces.
366 tools catalogued · 802 URL match patterns · 14 categories · 10 MCP servers
Browser AI: 92% of the top-100 browser tools by URL signature.
Desktop AI: 4 native apps watched on Mac and Windows - Claude Desktop, ChatGPT Desktop, Cursor, Granola.
CLI AI: 2 coding agents watched in the terminal - Claude Code, Aider.
MCP traffic: 10 catalogued MCP servers wrapped through 3 clients - Claude Desktop, Cursor, Claude Code.
Browser detection by URL pattern. Desktop and CLI detection by process name. Outbound-connection detection live for 110+ AI service hosts. MCP detection by binary fingerprint, with recommended per-tool policies. Catalogue updated continuously. Coverage scorecard published quarterly.

Northbeams detects employee AI tool use across four surfaces. The browser extension matches URL signatures and flags sensitive prompts. The Northbeams app for Mac and Windows watches outbound connections and process names on the laptop, catching desktop apps and CLI tools that never touch the browser. The MCP Gateway sits in the path between coding agents (Claude Desktop, Cursor, Claude Code) and the MCP servers they call, classifying every tool argument on-device. Whether a tool your team uses shows up in the dashboard depends on whether it is catalogued for that surface. We benchmark coverage quarterly against an external top-100 list compiled from industry surveys, analyst reports, and SaaS-spend benchmarks. This page is what we found.

How we scored it

The top-100 was built by merging named tools from six tiers of sources, weighted toward quantitative adoption data over directory scrapes. Where sources disagreed, professional / enterprise-segmented data won over consumer.

The full methodology, ranked top-100 list, and per-tool source attribution are published in the Q2 2026 audit report on GitHub.

Coverage by category

Each row shows the share of the audit's top-N entries that the catalogue detects on the tool's primary URL. Detection quality issues (matchers that fire but on the wrong path or scope) are excluded from the “covered” count.

Category                 Top-N   Covered
Chat assistants            25    25 / 25
Coding assistants          25    23 / 25
Image generation           25    25 / 25
Video generation           15    15 / 15
Voice / audio AI           15    15 / 15
Agents / workflow          20    18 / 20
Productivity / writing     25    24 / 25
Sales / CRM AI             15    14 / 15
Customer-support AI        12    11 / 12
Legal AI                   10    10 / 10
Healthcare AI              10     8 / 10
Security AI                 8     4 / 8
Data / analytics AI        10    10 / 10
Research / search AI        8     8 / 8
Design AI                   8     6 / 8
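
Each row's coverage share is a plain ratio of covered entries to the category's top-N. A short sketch, using three rows from the table above:

```python
# Rows from the coverage table: (category, top_n, covered).
ROWS = [
    ("Chat assistants", 25, 25),
    ("Coding assistants", 25, 23),
    ("Security AI", 8, 4),
]

def coverage_pct(covered: int, top_n: int) -> float:
    """Share of a category's top-N list that the catalogue detects."""
    return round(100 * covered / top_n, 1)

for name, top_n, covered in ROWS:
    print(f"{name}: {covered} / {top_n} = {coverage_pct(covered, top_n)}%")
```

Coding assistants at 23 / 25 works out to 92.0%, matching the headline browser figure; Security AI at 4 / 8 is the weakest category at 50.0%.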

MCP server coverage

The MCP Gateway recognises any MCP server configured in Claude Desktop, Cursor, or Claude Code by fingerprinting the binary and the package reference. Ten well-known servers ship with recommended per-tool policies on day one. Anything else your team has wired up shows up by binary name, and your admins set the policy.

MCP server                       Risk         Recommended posture
Filesystem (Anthropic)           High         Block writes by default. Read tools allowed.
GitHub (Anthropic)               High         Block delete and merge. Warn on file changes and PRs.
Slack (Anthropic)                High         Warn on posts and replies. Allow lookups.
Postgres (Anthropic)             High         Warn on every query. Allow schema introspection.
Puppeteer (Anthropic)            High         Block evaluate. Warn on navigate and screenshot.
Google Drive (Anthropic)         High         Warn on search and read. Allow listings.
Stripe (official)                High         Block refunds and subscription cancels. Warn on payment links.
Brave Search (Anthropic)         Sanctioned   Allow all. Read-only public search.
Memory (Anthropic)               Sanctioned   Allow all. Local-only key-value store.
Sequential Thinking (Anthropic)  Sanctioned   Allow all. Pure compute, no IO.

Recommended postures are starting points, not prescriptions. Admins can override any rule per workspace from the dashboard. The catalogue refreshes quarterly along with the rest of the tool catalogue.
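
A per-tool policy table like the one above reduces to a simple lookup at call time. The sketch below encodes a few rows as a hypothetical policy map; the server keys, tool names, and default rules are illustrative assumptions, not the Gateway's actual policy format.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"
    BLOCK = "block"

# Hypothetical encoding of a few rows of the posture table; the
# Gateway's real policy schema and tool names are not published here.
POLICIES = {
    "filesystem": {"write_file": Action.BLOCK, "read_file": Action.ALLOW},
    "github": {"delete_repository": Action.BLOCK,
               "merge_pull_request": Action.BLOCK,
               "create_or_update_file": Action.WARN,
               "create_pull_request": Action.WARN},
    "brave-search": {},  # sanctioned: allow everything
}

def evaluate(server: str, tool: str) -> Action:
    """Recommended action for one MCP tool call.

    Unrecognised servers default to WARN so admins notice them and set
    a policy; tools without an explicit rule on a known server are
    allowed in this sketch.
    """
    if server not in POLICIES:
        return Action.WARN
    return POLICIES[server].get(tool, Action.ALLOW)
```

Defaulting unknown servers to WARN rather than ALLOW mirrors the page's stance that unfamiliar servers should surface to admins instead of passing silently.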

What we don't cover yet

Transparency matters for catalogue claims, so here's what's still missing as of this audit:

How we maintain coverage

The catalogue is refreshed against the same six source families on a rolling cadence: annual reports (Menlo, Stack Overflow, Zylo, McKinsey), per-category analyst reports (Gartner, Forrester) on publication, and weekly checks against GitHub Trending and AI-funding announcements for emerging tools. New entries pass a relevance gate (≥2-source corroboration within 60 days, or ≥1 analyst-tier source) before they're added.
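
The relevance gate above can be sketched as a predicate over a candidate tool's sightings. The source labels and data shape are assumptions for illustration; only the two gating rules come from the text.

```python
from datetime import date, timedelta

# Assumed labels for analyst-tier sources; the audit's real source
# taxonomy may differ.
ANALYST_TIER = {"gartner", "forrester"}

def passes_relevance_gate(sightings: list[tuple[str, date]]) -> bool:
    """A candidate enters the catalogue if at least one analyst-tier
    source names it, or two distinct sources corroborate it within
    60 days of each other."""
    if any(src in ANALYST_TIER for src, _ in sightings):
        return True
    ordered = sorted(sightings, key=lambda s: s[1])
    for i, (src_a, day_a) in enumerate(ordered):
        for src_b, day_b in ordered[i + 1:]:
            if src_a != src_b and day_b - day_a <= timedelta(days=60):
                return True
    return False
```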

Source health is monitored: if any source produces zero entries for 14 days, or if the pipeline runs for 7 days without net new entries, an alert fires. The full refresh methodology is published here.
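
The two alert rules reduce to date arithmetic. A minimal sketch, assuming each source's last productive date and the pipeline's last net-new date are tracked (the function and field names here are hypothetical):

```python
from datetime import date, timedelta

def source_health_alerts(today: date,
                         last_entry: dict[str, date],
                         last_net_new: date) -> list[str]:
    """Alert rules from the text: a source producing zero entries for
    14 days, or the pipeline going 7 days without net new entries."""
    alerts = [f"source stale: {src}"
              for src, seen in last_entry.items()
              if today - seen >= timedelta(days=14)]
    if today - last_net_new >= timedelta(days=7):
        alerts.append("pipeline stale: no net new entries")
    return alerts
```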

Service-level commitments

For procurement teams. If your shadow-AI risk review needs the underlying data, the full ranked top-100 list, per-category audit, and gap analysis are public on GitHub. The live catalogue endpoint is at monitor.northbeams.com/api/catalogue/v1.
