Coverage Scorecard
How comprehensive is the AI tool catalogue?
- Desktop AI: 4 native apps watched on Mac and Windows - Claude Desktop, ChatGPT Desktop, Cursor, Granola.
- CLI AI: 2 coding agents watched in the terminal - Claude Code, Aider.
- MCP traffic: 10 catalogued MCP servers proxied through 3 clients - Claude Desktop, Cursor, Claude Code.
Northbeams detects employee AI tool use across four surfaces: browser, desktop apps, CLI tools, and MCP traffic. The browser extension matches URL signatures and flags sensitive prompts. The Northbeams app for Mac and Windows watches outbound connections and process names on the laptop, catching desktop apps and CLI tools that never touch the browser. The MCP Gateway sits in the path between coding agents (Claude Desktop, Cursor, Claude Code) and the MCP servers they call, classifying every tool argument on-device. Whether your team's AI shows up in the dashboard depends on whether each surface is in the catalogue. We benchmark coverage quarterly against an external top-100 list compiled from industry surveys, analyst reports, and SaaS-spend benchmarks. This page summarises what we found.
How we scored it
The top-100 was built by merging named tools from six tiers of sources, weighted toward quantitative adoption data over directory scrapes. Where sources disagreed, professional and enterprise-segmented data won over consumer data.
- Menlo Ventures: 2025 State of Generative AI in the Enterprise
- Stack Overflow Developer Survey 2025: AI
- Zylo 2026 SaaS Management Index
- McKinsey QuantumBlack: State of AI 2025
- Menlo Ventures: 2025 State of AI in Healthcare
- Vals Legal AI Report (VLAIR)
- Gartner Magic Quadrant for Conversational AI Platforms
- Becker's Hospital Review: ambient scribe shares
The full methodology, ranked top-100 list, and per-tool source attribution are published in the Q2 2026 audit report on GitHub.
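The merge step can be pictured as a weighted rank aggregation. The tier names, weights, and reciprocal-rank scoring below are assumptions chosen for illustration, not the audit's actual methodology; see the published report for the real rules.

```python
from collections import defaultdict

# Assumed weights: quantitative adoption surveys outrank directory scrapes.
TIER_WEIGHT = {"adoption_survey": 3.0, "analyst_report": 2.0, "directory": 1.0}

def merge_rankings(sources):
    """sources: list of (tier, [tool names in ranked order]).
    Each appearance contributes weight / rank; higher total wins."""
    scores = defaultdict(float)
    for tier, tools in sources:
        weight = TIER_WEIGHT[tier]
        for rank, tool in enumerate(tools, start=1):
            scores[tool] += weight / rank
    return sorted(scores, key=scores.get, reverse=True)
```

Under this scheme a tool ranked first in an adoption survey outscores one ranked first only in a directory, which matches the stated preference for quantitative adoption data.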
Coverage by category
Each row shows the share of the audit's top-N entries that the catalogue detects on the tool's primary URL. Detection quality issues (matchers that fire but on the wrong path or scope) are excluded from the “covered” count.
| Category | Top-N | Covered |
|---|---|---|
| Chat assistants | 25 | 25 / 25 |
| Coding assistants | 25 | 23 / 25 |
| Image generation | 25 | 25 / 25 |
| Video generation | 15 | 15 / 15 |
| Voice / audio AI | 15 | 15 / 15 |
| Agents / workflow | 20 | 18 / 20 |
| Productivity / writing | 25 | 24 / 25 |
| Sales / CRM AI | 15 | 14 / 15 |
| Customer-support AI | 12 | 11 / 12 |
| Legal AI | 10 | 10 / 10 |
| Healthcare AI | 10 | 8 / 10 |
| Security AI | 8 | 4 / 8 |
| Data / analytics AI | 10 | 10 / 10 |
| Research / search AI | 8 | 8 / 8 |
| Design AI | 8 | 6 / 8 |
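The aggregate figure implied by the table can be checked in a few lines. The `ROWS` data is copied from the rows above; `coverage` and `below_floor` are illustrative helpers, not Northbeams tooling, and 0.8 is the per-category floor from the service-level commitments.

```python
# (top_n, covered) pairs copied from the coverage table.
ROWS = {
    "Chat assistants": (25, 25), "Coding assistants": (25, 23),
    "Image generation": (25, 25), "Video generation": (15, 15),
    "Voice / audio AI": (15, 15), "Agents / workflow": (20, 18),
    "Productivity / writing": (25, 24), "Sales / CRM AI": (15, 14),
    "Customer-support AI": (12, 11), "Legal AI": (10, 10),
    "Healthcare AI": (10, 8), "Security AI": (8, 4),
    "Data / analytics AI": (10, 10), "Research / search AI": (8, 8),
    "Design AI": (8, 6),
}

def coverage(rows):
    """Share of all top-N entries that the catalogue covers."""
    total = sum(n for n, _ in rows.values())
    covered = sum(c for _, c in rows.values())
    return covered / total

# Categories currently under an 80% per-category line.
below_floor = [name for name, (n, c) in ROWS.items() if c / n < 0.8]
```

Running this gives an aggregate around 93.5%, with Security AI and Design AI the only categories under the 80% line, consistent with the gaps listed later on this page.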
MCP server coverage
The MCP Gateway recognises any MCP server configured in Claude Desktop, Cursor, or Claude Code by fingerprinting the binary and the package reference. Ten well-known servers ship with recommended per-tool policies on day one. Anything else your team has wired up shows up by binary name, and your admins set the policy.
| MCP server | Risk | Recommended posture |
|---|---|---|
| Filesystem (Anthropic) | High | Block writes by default. Read tools allowed. |
| GitHub (Anthropic) | High | Block delete and merge. Warn on file changes and PRs. |
| Slack (Anthropic) | High | Warn on posts and replies. Allow lookups. |
| Postgres (Anthropic) | High | Warn on every query. Allow schema introspection. |
| Puppeteer (Anthropic) | High | Block evaluate. Warn on navigate and screenshot. |
| Google Drive (Anthropic) | High | Warn on search and read. Allow listings. |
| Stripe (official) | High | Block refunds and subscription cancels. Warn on payment links. |
| Brave Search (Anthropic) | Sanctioned | Allow all. Read-only public search. |
| Memory (Anthropic) | Sanctioned | Allow all. Local-only key-value store. |
| Sequential Thinking (Anthropic) | Sanctioned | Allow all. Pure compute, no IO. |
Recommended postures are starting points, not prescriptions. Admins can override any rule per workspace from the dashboard. These recommended policies refresh quarterly along with the rest of the tool catalogue.
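A per-workspace override on top of a recommended posture might look like the sketch below. The action names, the `allow`/`warn`/`block` values, and the `decide` helper are hypothetical; the real policy schema isn't shown on this page.

```python
# Hypothetical recommended defaults for the GitHub server's tools,
# mirroring the "block delete and merge, warn on PRs" posture above.
DEFAULTS = {
    ("github", "merge_pull_request"): "block",
    ("github", "create_pull_request"): "warn",
    ("github", "get_file_contents"): "allow",
}

def decide(server, tool, overrides=None):
    """Workspace overrides win over recommended defaults; tools with no
    entry fall back to 'warn' so nothing passes silently."""
    key = (server, tool)
    if overrides and key in overrides:
        return overrides[key]
    return DEFAULTS.get(key, "warn")
```

The warn-by-default fallback is the conservative choice for the long tail of custom servers, where the gateway sees a binary name but has no curated entry.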
What we don't cover yet
Transparency matters for catalogue claims, so here's what's still missing as of this audit:
- Security AI long tail. Splunk AI Assistant, Fortinet AI features, Vectra AI, Sophos Intelix.
- Customer-self-hosted LLM gateways. LiteLLM, Kong AI Gateway. These run at customer-defined hostnames; we plan to detect them via outbound-traffic heuristics in the Northbeams desktop app rather than catalogue entries.
- In-app Microsoft 365 / Google Workspace Copilot panels. The dedicated Copilot hosts (m365.cloud.microsoft, gemini.google.com) are catalogued, but AI panels embedded inside Outlook / Word / Teams / Docs / Gmail share URLs with non-AI usage. Detection requires DOM-scoped browser-extension signals, which is a separate engineering track.
- Healthcare scribe long tail. Atropos Health and a few smaller vendors below the top-10 by market share.
- Design AI long tail. Galileo AI.
- The MCP long tail. Beyond the 10 catalogued servers, custom and internal MCP servers show up by binary name without a recommended policy. Admins set the policy. We add a curated entry when the server crosses adoption thresholds.
How we maintain coverage
The catalogue is refreshed against the same six source families on a rolling cadence: annual reports (Menlo, Stack Overflow, Zylo, McKinsey), per-category analyst reports (Gartner, Forrester) on publication, and weekly checks against GitHub Trending and AI-funding announcements for emerging tools. New entries pass a relevance gate (≥2-source corroboration within 60 days, or ≥1 analyst-tier source) before they're added.
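The relevance gate can be sketched directly from its stated rule: at least two corroborating sources within 60 days, or one analyst-tier source. The `mentions` shape and the `ANALYST_TIER` set below are assumptions for illustration only.

```python
from datetime import date, timedelta

# Assumed analyst-tier sources, per the per-category analyst reports named above.
ANALYST_TIER = {"Gartner", "Forrester"}

def passes_relevance_gate(mentions, today):
    """mentions: list of (source_name, date_seen) pairs for one tool.
    Passes with >=1 analyst-tier mention, or >=2 distinct sources
    seen within the last 60 days."""
    if any(src in ANALYST_TIER for src, _ in mentions):
        return True
    recent = {src for src, seen in mentions if today - seen <= timedelta(days=60)}
    return len(recent) >= 2
```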
Source health is monitored: if any source produces zero entries for 14 days, or if the pipeline runs for 7 days without net new entries, an alert fires. The full refresh methodology is published here.
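The two source-health thresholds translate into a simple check. Only the 14-day and 7-day thresholds come from the text; the input shapes and alert strings below are illustrative.

```python
from datetime import date, timedelta

def health_alerts(per_source_last_entry, last_net_new_entry, today):
    """per_source_last_entry: {source_name: date that source last produced
    an entry}. Flags a source after 14 days of zero entries, and the
    pipeline after 7 days without net new entries overall."""
    alerts = []
    for source, last in per_source_last_entry.items():
        if today - last > timedelta(days=14):
            alerts.append(f"source stale: {source}")
    if today - last_net_new_entry > timedelta(days=7):
        alerts.append("pipeline stale")
    return alerts
```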
Service-level commitments
- ≥90% coverage of the external top-100, with new tools catalogued within 30 days of public availability. Currently at 92%.
- ≥80% per-category coverage for chat, coding, image, video, voice, agents, productivity, sales, legal, healthcare, security, data, research, and design.
- <5% matcher quality issues. Entries flagged stale, too narrow, or pointing at marketing pages instead of real product surfaces.
- Quarterly scorecard publication. This page, refreshed each quarter against the latest source data.