In 2026, "AI-native" went from marketing copy to a measurable shipping requirement: does your product expose its data + actions to AI assistants via MCP, or does it require humans to context-switch to a web dashboard?
Arenza shipped its MCP server in the same sprint as the dashboard — not as a follow-up. This article explains why and how.
Why MCP-first (not MCP-later)
The agency principal — our primary buyer — opens Claude Desktop or Cursor 30+ times a day. They open Arenza's dashboard 1-2 times a day. If the only way to act on Arenza data is the dashboard, we lose 28+ daily interaction touchpoints to "I'll deal with that later when I'm at the dashboard". MCP collapses those 28+ moments into "Claude, what do my client brands look like?"
Building MCP later (post-dashboard) is the wrong order: by the time we'd have shipped MCP, the agency would have already either (a) given up because the dashboard was friction, or (b) built their own API integration. Both are losses.
What the Arenza MCP server exposes
Read tools (always available)
- `list_brands()` — caller's portfolio (id, slug, name, domain, region)
- `get_brand_overview({slug})` — AVS 7d avg, mention count, region
- `list_prompts({brand})` — tracked buyer questions for a brand
- `list_opportunities({brand, kind?})` — discussions + articles to action
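As a rough sketch, the read surface maps onto shapes like the following. Field names (`avs7dAvg`, `mentionCount`, and the sample values) are inferred from the descriptions above, not the actual schema:

```typescript
// Assumed response shapes for the read tools; field names are illustrative guesses.
type Brand = { id: string; slug: string; name: string; domain: string; region: string };
type BrandOverview = { slug: string; avs7dAvg: number; mentionCount: number; region: string };

// Illustrative list_brands() result for a two-brand agency portfolio.
const portfolio: Brand[] = [
  { id: "b_1", slug: "acme-co", name: "Acme Co", domain: "acme.example", region: "US" },
  { id: "b_2", slug: "globex", name: "Globex", domain: "globex.example", region: "EU" },
];

// Illustrative get_brand_overview({ slug: "acme-co" }) result.
const overview: BrandOverview = { slug: "acme-co", avs7dAvg: 0.42, mentionCount: 31, region: "US" };

console.log(portfolio.length, overview.slug);
```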
Write tools (Stage 4 P0)
- `add_competitor({brand, name, domain?})` — track new competitor
- `dismiss_competitor({brand, competitor_id})` — hide from comparison
- `mark_opportunity_done({brand, kind, id})` — clear from worklist
- `suggest_competitors({brand})` — LLM suggests 5-8 candidates
- `suggest_prompts({brand, count?, persona?})` — LLM suggests buyer prompts
- `generate_geo_article({topic, audience, ...})` — draft a GEO article
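On the wire, each of these tools is invoked through MCP's JSON-RPC framing as a `tools/call` request. A minimal sketch of that envelope, using `add_competitor` from the list above (the argument values are made up):

```typescript
// Build a JSON-RPC 2.0 "tools/call" envelope, the framing MCP clients
// use to invoke a named tool with arguments.
type ToolCall = {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

function toolCall(id: number, name: string, args: Record<string, unknown>): ToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// e.g. start tracking a new competitor for a brand (values illustrative)
const req = toolCall(1, "add_competitor", {
  brand: "acme-co",
  name: "Globex",
  domain: "globex.example",
});
console.log(JSON.stringify(req));
```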
Architecture overview
The MCP server is a separate package (@arenza/mcp-server) deployed to Cloud Run at mcp.arenza.ai. It does NOT have direct DB access — every tool proxies to the existing backend REST routes. Why:
- Single source of truth for auth + tenant isolation. Backend already enforces `requireBrandAccess` and tenant boundaries. MCP just authenticates the caller (via x-api-key today, OAuth-DCR in P2) and forwards the request with X-Arenza-User-Id; backend treats it identically to a portal request.
- Single cost-budget enforcer for LLM calls. Tools like `suggest_prompts` invoke the LLM. Routing through backend means one LRU cache, one cost telemetry surface, no duplicate LLM client implementations.
- Audit log unification. Every action — whether from MCP, portal, or REST API — lands in the same backend access log. Compliance + debugging stay simple.
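The proxy pattern above can be sketched as a pure route-mapping step: the tool handler never touches the DB, it just rewrites the tool call into a backend REST request carrying the forwarded user id. The base URL, route paths, and route table here are illustrative assumptions; only the X-Arenza-User-Id header comes from the text:

```typescript
// Sketch: map an MCP tool call onto a backend REST request.
type ProxyRequest = { url: string; method: string; headers: Record<string, string> };

// Hypothetical route table; the real mapping lives in the MCP server package.
const routes: Record<string, { method: string; path: (a: Record<string, string>) => string }> = {
  list_brands: { method: "GET", path: () => "/brands" },
  add_competitor: { method: "POST", path: (a) => `/brands/${a.brand}/competitors` },
};

function toBackendRequest(
  tool: string,
  args: Record<string, string>,
  userId: string,
  base = "https://api.example.com", // assumed backend host
): ProxyRequest {
  const route = routes[tool];
  if (!route) throw new Error(`unknown tool: ${tool}`);
  return {
    url: base + route.path(args),
    method: route.method,
    // Backend applies requireBrandAccess using the forwarded user id,
    // exactly as it would for a portal request.
    headers: { "X-Arenza-User-Id": userId, "Content-Type": "application/json" },
  };
}

console.log(JSON.stringify(toBackendRequest("add_competitor", { brand: "acme-co" }, "u_42")));
```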
Transport: HTTP-over-JSON-RPC, not stdio
MCP supports two transports: stdio (Claude Desktop spawns a local process) and HTTP. Arenza ships HTTP-only. Why:
- Zero install for the agency. Stdio requires shipping a binary or npm package the user installs locally. HTTP just needs the URL. Pasting `https://mcp.arenza.ai/rpc` into Claude's config is 5 seconds; installing a package is 5 minutes + breaks when their npm cache is weird.
- Centralized observability. Every tool call hits our infrastructure → we see latency, error rates, who calls what, when. Stdio puts everything on the user's machine; we'd be debugging blind.
- Future write-tools need rate limits. Stdio runs at user CPU speed; HTTP lets us cap per-user QPS at the edge.
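That last point, an edge-enforced per-user QPS cap, is the kind of thing a simple token bucket keyed by user id gives you. The numbers here (burst of 5, refill of 1 token/sec) are illustrative, not Arenza's real limits:

```typescript
// Per-user token bucket: allows short bursts, then caps sustained QPS.
class TokenBucket {
  private tokens: number;
  private last: number;
  constructor(private capacity: number, private refillPerSec: number, now = 0) {
    this.tokens = capacity;
    this.last = now;
  }
  // now is a timestamp in seconds; returns whether this request is allowed.
  allow(now: number): boolean {
    this.tokens = Math.min(this.capacity, this.tokens + (now - this.last) * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const buckets = new Map<string, TokenBucket>();

function allowRequest(userId: string, now: number): boolean {
  let bucket = buckets.get(userId);
  if (!bucket) {
    bucket = new TokenBucket(5, 1, now); // illustrative: burst 5, refill 1/sec
    buckets.set(userId, bucket);
  }
  return bucket.allow(now);
}
```

With stdio this check would have to live in the client-side process, where the user controls the clock and the code; over HTTP it sits in front of every caller.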
Auth: API key today, OAuth + DCR next
Currently the x-api-key header maps to a Clerk userId via a single env-injected mapping (a dev convenience). P2 wires up Clerk OAuth + Dynamic Client Registration so any MCP client can do the standard OAuth dance and get a per-user token, which eliminates the API-key-management UX entirely.
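The current dev-convenience scheme is small enough to sketch in a few lines. The env var name and `key:userId` format below are assumptions for illustration; the article only says a single env-injected mapping exists:

```typescript
// Sketch of the interim auth: one env-injected api-key → Clerk userId mapping.
// Assumed format: ARENZA_MCP_KEYMAP="<api-key>:<clerk-user-id>" (hypothetical name).
function resolveUser(
  apiKey: string | undefined,
  env: Record<string, string | undefined>,
): string | null {
  const mapping = env.ARENZA_MCP_KEYMAP;
  if (!apiKey || !mapping) return null;
  const [key, userId] = mapping.split(":");
  // Reject anything but the one configured key.
  return apiKey === key ? userId ?? null : null;
}
```

OAuth + DCR replaces this lookup with a token introspection step, so the per-user identity comes from the OAuth flow instead of a shared secret.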
Setup walkthrough
For Claude Desktop / Cursor / Claude Code setup steps, see /guides/use-claude-with-arenza-mcp-server. One config block, restart, you're done.
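For orientation, a remote-MCP config block typically looks something like this sketch (shown for Claude Desktop's `claude_desktop_config.json` via the `mcp-remote` bridge; exact flags and the key placeholder are illustrative, and the linked guide is authoritative):

```json
{
  "mcpServers": {
    "arenza": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.arenza.ai/rpc",
        "--header",
        "x-api-key: YOUR_ARENZA_API_KEY"
      ]
    }
  }
}
```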
For other SaaS builders thinking about MCP
Build the MCP server in the same sprint as the dashboard, not as a Q3 add-on. Your buyer is already living in Claude / Cursor; the question is whether they can reach your product from there, or whether they never leave the LLM tab and never come back to your dashboard at all. MCP is the bridge.