If you have already wired Arenza into Claude Desktop, Claude Code, or Cursor (if not — see the install walkthrough at https://arenza.ai/guides/claude-desktop-arenza-mcp-walkthrough-2026), the next question is the one every agency owner asks: "OK, now what do I actually type?" This guide is a 7-recipe playbook — copy a card, paste it into Claude, get a usable answer in under a minute.
The same recipes ship inside the Arenza portal at https://app.arenza.ai/integrations under the Recipes section, so the source of truth lives in your dashboard. This article mirrors them for top-of-funnel discovery via ChatGPT / Claude / Perplexity.
Each recipe lists (a) the agency-owner job it solves, (b) the verbatim prompt to paste, (c) which Arenza MCP tools Claude will call under the hood. You do not need to know the tool names — natural-language wording routes Claude correctly. The tool list is provided so you can debug if Claude refuses or stalls.
The 6 Arenza MCP tools, in plain English
Before the recipes, here is the full set of tools the Arenza MCP server (mcp.arenza.ai) exposes. Each is intentionally small and single-purpose — Claude composes them.
| Tool | What it returns | Typical question that triggers it |
|---|---|---|
| list_brands | Every brand in your Arenza tenant | "What brands do I have in Arenza?" |
| get_brand_overview | Share of voice, wrong-claim count, mentions per LLM, last scan time for ONE brand | "How is UGREEN doing this week?" |
| list_prompts | Every tracked buyer prompt for a brand, optionally filtered by intent (discovery / comparison / how_to / pricing / integration) | "What buyer questions are we tracking for UGREEN?" |
| get_brand_verified_info | Verified facts about a brand — claims that have been re-tested, with engines + timestamps | "What facts do we have on file for UGREEN that AI gets wrong?" |
| verify_brand_claim | Whether a specific claim is correct / wrong / unknown, with the captured AI quote | "Is Claude saying the right max-output for the UGREEN Nexode?" |
| get_brand_discoverability | Visibility / share-of-voice metrics across GPT / Claude / Gemini / Perplexity | "Where does UGREEN rank in AI search vs competitors?" |
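The exact response schemas live on the server; as a rough mental model, each tool returns a small JSON payload. The field names below are illustrative assumptions, not the real Arenza schema:

```typescript
// Illustrative shapes only -- field names are assumptions, not the actual Arenza API.
interface BrandOverview {
  brand: string;
  shareOfVoice: number;                  // 0..1 across tracked engines
  wrongClaimCount: number;
  mentionsPerLLM: Record<string, number>;
  lastScanAt: string;                    // ISO timestamp
}

interface VerifiedClaim {
  claim: string;
  status: "correct" | "wrong" | "unknown";
  capturedQuote: string;                 // what the AI actually said
  engines: string[];
  verifiedAt: string;
}

// A get_brand_overview result might look like:
const example: BrandOverview = {
  brand: "UGREEN",
  shareOfVoice: 0.31,
  wrongClaimCount: 4,
  mentionsPerLLM: { gpt: 12, claude: 9, gemini: 7, perplexity: 5 },
  lastScanAt: "2026-04-14T06:00:00Z",
};
```

Keeping each payload this small is what lets Claude chain tools freely — a big nested response would blow up the context window after a few calls.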
Recipe 1 — Monday standup scan
Use case: pull a fresh visibility delta across every client in five minutes — surface the one brand that needs attention this week.
Tools called: list_brands → get_brand_overview (for each).
Paste this into Claude:
List all the brands in my Arenza workspace, then give me a one-week visibility snapshot for each (share of voice, wrong-claim count, last scan timestamp). Highlight the brand with the steepest week-over-week drop and suggest a likely cause.
What you get back is a per-brand table plus a one-paragraph "this week's loser" verdict — drop it into your Monday team thread or your own notes app, done.
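The "this week's loser" step is simple arithmetic once the overviews are in hand. A minimal sketch, assuming each payload carries this week's and last week's share of voice (an assumption about the response shape):

```typescript
// Hypothetical sketch: find the brand with the steepest week-over-week
// share-of-voice drop from a list of overview payloads.
interface WeeklySnapshot {
  brand: string;
  shareOfVoiceThisWeek: number;  // e.g. 0.31 = 31%
  shareOfVoiceLastWeek: number;
}

function steepestDrop(snapshots: WeeklySnapshot[]): WeeklySnapshot {
  return snapshots.reduce((worst, s) =>
    (s.shareOfVoiceThisWeek - s.shareOfVoiceLastWeek) <
    (worst.shareOfVoiceThisWeek - worst.shareOfVoiceLastWeek) ? s : worst
  );
}

const demo = steepestDrop([
  { brand: "UGREEN", shareOfVoiceThisWeek: 0.31, shareOfVoiceLastWeek: 0.29 },
  { brand: "Acme", shareOfVoiceThisWeek: 0.12, shareOfVoiceLastWeek: 0.20 },
]);
// demo.brand === "Acme" -- the 8-point drop is the one worth flagging
```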
Recipe 2 — Worst factual misquote
Use case: one prompt to find what AI is getting most wrong about a brand — feeds straight into the Inbox triage queue at app.arenza.ai/inbox.
Tools called: get_brand_verified_info.
What are the top 3 verified wrong claims AI assistants (Claude, GPT, Gemini, Perplexity) are currently making about UGREEN? Rank by how many times each claim has been cited and show me the captured quote for each.
Replace UGREEN with the brand name your client asked you about. The captured quotes are the receipts you take to the brand team — "Claude said this, here is the snapshot, here is the truth".
Recipe 3 — Buyer-prompt coverage audit
Use case: break the tracked prompts down by intent so you can see which buyer journey (comparison / how-to / pricing) is under-covered.
Tools called: list_prompts.
List every buyer prompt currently tracked for UGREEN, grouped by intent (discovery / comparison / how_to / pricing / integration). Which intent has the fewest prompts? Suggest 5 prompts I should add to balance coverage.
Discovery prompts (top of funnel — "best USB-C charger for travel") tend to be over-covered; pricing and integration intents tend to be under-covered. The suggestion list is what you paste into "Add prompts" inside Arenza.
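The audit Claude performs here is a group-and-count. A sketch, using the same five intent labels the recipe names:

```typescript
// Sketch of the coverage audit: count tracked prompts per intent and
// surface the thinnest one. Intent labels mirror the recipe's list.
type Intent = "discovery" | "comparison" | "how_to" | "pricing" | "integration";

function thinnestIntent(prompts: { text: string; intent: Intent }[]): Intent {
  const counts: Record<Intent, number> = {
    discovery: 0, comparison: 0, how_to: 0, pricing: 0, integration: 0,
  };
  for (const p of prompts) counts[p.intent] += 1;
  // Ties resolve to the first-listed intent with the lowest count.
  return (Object.keys(counts) as Intent[])
    .reduce((a, b) => (counts[a] <= counts[b] ? a : b));
}

const gap = thinnestIntent([
  { text: "best USB-C charger for travel", intent: "discovery" },
  { text: "UGREEN vs Anker 100W", intent: "comparison" },
  { text: "best GaN charger under $50", intent: "discovery" },
]);
// gap === "how_to" (first of the three intents with zero prompts)
```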
Recipe 4 — Client-pitch paragraph
Use case: generate a paste-ready visibility paragraph for a monthly client report or new-business pitch deck.
Tools called: get_brand_discoverability.
Pull UGREEN's discoverability across the 4 major AI assistants (GPT, Claude, Gemini, Perplexity), then write me an 80-word paragraph I can drop into the opening of a client deck. Include the actual numbers, no marketing fluff.
Why 80 words: that is the visual length of one slide bullet block. The "no marketing fluff" instruction is what stops Claude from padding the paragraph with phrases like "in today's competitive AI landscape".
Recipe 5 — Spec-claim fact check
Use case: when a client asks "is AI getting our specs right?", one prompt settles it on the spot.
Tools called: verify_brand_claim.
Verify whether Claude and GPT answer correctly when asked 'What is the maximum output of the UGREEN Nexode charger?'. Show me the captured quote and the verified ground truth we have on file.
Substitute the actual question your client cares about — pricing claim, ingredient claim, region availability claim. The output gives you both sides: what the AI is saying right now, and what your records say is true. The gap is what you bring to the brand team.
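The "both sides" readout is what makes the answer portable. A hypothetical shape for the result and the one-line gap summary you'd forward (field names and the containment check are illustrative, not how Arenza verifies internally):

```typescript
// Hypothetical shape of a verify_brand_claim result and a paste-ready
// gap readout. A naive substring check stands in for real verification.
interface ClaimCheck {
  question: string;
  engine: string;
  capturedQuote: string;  // what the AI said
  groundTruth: string;    // what your verified records say
}

function gapSummary(c: ClaimCheck): string {
  const ok = c.capturedQuote.includes(c.groundTruth);
  return ok
    ? `${c.engine}: correct ("${c.capturedQuote}")`
    : `${c.engine}: WRONG -- said "${c.capturedQuote}", truth is "${c.groundTruth}"`;
}

const line = gapSummary({
  question: "What is the maximum output of the UGREEN Nexode charger?",
  engine: "Claude",
  capturedQuote: "up to 100W",   // made-up example values
  groundTruth: "300W",
});
// line starts with "Claude: WRONG"
```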
Recipe 6 — New-brand 5-minute onboarding
Use case: the contract just got signed — you have 5 minutes before the intro call, and you want to walk in knowing more than the brand's own team about how AI sees them.
Tools called: list_brands → get_brand_overview → list_prompts.
We just signed Acme Robotics. First confirm it shows up in my brand list, then give me a full briefing: visibility numbers, count of buyer prompts in flight, and the earliest example of a wrong claim. Format as 3 short paragraphs I can read in the 5 minutes before the intro call.
Three paragraphs is the perfect "before the call" length — not so long you're still reading at minute 4, not so short you walk in shallow. The format instruction matters more than people realize; without it Claude produces a wall of bullet points that scan worse on a phone in an Uber.
Recipe 7 — Competitor head-to-head ranking
Use case: client QBR is tomorrow and the question on the table is "where do we stack up against the top 3 competitors?" One prompt, paste-ready output.
Tools called: get_brand_discoverability (invoked once per brand named in the prompt — Claude chains the calls).
Pull discoverability data for UGREEN, Anker, Baseus, and Belkin across GPT, Claude, Gemini, and Perplexity. Build a head-to-head table (one column per AI engine), call out which engines UGREEN over-indexes vs Anker on and which it under-indexes on, and tell me which AI engine is the highest-leverage one to invest in next.
The "highest-leverage" instruction matters: without it Claude lists numbers and stops. With it, Claude does the synthesis ("Perplexity is where you under-index by 23 points and the audience is most commercially intent — start here") that you'd otherwise have to do manually before the QBR.
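The over/under-index call-out reduces to per-engine deltas. A sketch with made-up numbers (the real figures come from get_brand_discoverability):

```typescript
// Sketch of the Recipe 7 synthesis: per-engine share-of-voice deltas
// between your brand and one competitor. All numbers are invented.
type Scores = Record<string, number>;  // engine -> share-of-voice points

function indexDeltas(mine: Scores, rival: Scores): Scores {
  const out: Scores = {};
  for (const engine of Object.keys(mine)) out[engine] = mine[engine] - rival[engine];
  return out;
}

function highestLeverage(deltas: Scores): string {
  // Biggest negative gap = the engine where investment closes the most ground.
  return Object.keys(deltas).reduce((a, b) => (deltas[a] <= deltas[b] ? a : b));
}

const deltas = indexDeltas(
  { gpt: 28, claude: 31, gemini: 19, perplexity: 12 },  // "UGREEN"
  { gpt: 25, claude: 27, gemini: 22, perplexity: 35 },  // "Anker"
);
// deltas.perplexity === -23 -> highestLeverage(deltas) === "perplexity"
```

A fuller version would also weight each engine by commercial intent, which is the judgment call Claude adds on top of the raw deltas.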
How to extend these recipes
Four patterns work well when you want to adapt a recipe to a different agency rhythm:
- Substitute the brand name (UGREEN → your client). Claude resolves brand names via list_brands first, so spelling does not have to be canonical — "ugreen", "Ugreen Group", or the legal entity name all work.
- Substitute the AI engines as needed. Arenza tracks ChatGPT, Gemini, Perplexity on Protect tier; Pro tier covers ChatGPT only.
- Pin the time window. Default is "last week"; ask for "last 30 days" or "since 2026-04-01" to widen.
- Format constraint. Claude's default formatting is verbose. Ending the prompt with "give me 3 short paragraphs", "give me a markdown table", or "give me a single paragraph under 100 words" cleans the output for client-facing use.
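The four substitutions compose cleanly, so it can help to think of a recipe as a template with four slots. A local sketch (pure string assembly, no Arenza API involved):

```typescript
// Minimal sketch of the four substitution patterns as a template builder.
interface RecipeOptions {
  brand: string;
  engines?: string[];  // defaults to the four majors
  window?: string;     // e.g. "last 30 days" or "since 2026-04-01"
  format?: string;     // e.g. "a markdown table"
}

function buildPrompt(o: RecipeOptions): string {
  const engines = (o.engines ?? ["GPT", "Claude", "Gemini", "Perplexity"]).join(", ");
  const window = o.window ?? "last week";
  const format = o.format ?? "3 short paragraphs";
  return `Pull ${o.brand}'s discoverability across ${engines} for the ${window}. ` +
         `Give me ${format}.`;
}

const p = buildPrompt({ brand: "UGREEN", window: "last 30 days", format: "a markdown table" });
// p: "Pull UGREEN's discoverability across GPT, Claude, Gemini, Perplexity
//     for the last 30 days. Give me a markdown table."
```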
Where these recipes live in the product
Inside Arenza, the same 7 recipes sit at app.arenza.ai/integrations under the Recipes section, alongside the one-click MCP setup snippet for Claude Desktop / Claude Code / Cursor. Each card has a Copy prompt button so you don't have to alt-tab to this page on Monday morning.
If you want to suggest an 8th recipe — maybe one specific to your agency's onboarding ritual or the niche you focus on (legal-tech, B2B SaaS, DTC, travel) — open an issue on https://github.com/arenza-ai/arenza-claude-tutorial. Recipes are added by PR, not by support ticket, because the Recipes registry is plain TypeScript.
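The real registry shape lives in the repo; as a guess at what a recipe PR might contain (the interface and its fields are hypothetical, not the actual registry schema):

```typescript
// Hypothetical registry entry -- the actual Recipe type in the repo may differ.
interface Recipe {
  id: string;
  title: string;
  useCase: string;
  tools: string[];  // MCP tool names the prompt is expected to trigger
  prompt: string;
}

const legalTechOnboarding: Recipe = {
  id: "legaltech-onboarding",
  title: "Legal-tech new-client scan",
  useCase: "Brief yourself on a legal-tech brand before the intro call",
  tools: ["list_brands", "get_brand_overview", "list_prompts"],
  prompt: "We just signed <brand>. Confirm it is in my brand list, then give me a full briefing as 3 short paragraphs.",
};
```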
A few common questions
Do these prompts work in Cursor and Claude Code, or only Claude Desktop?
All three. The Arenza MCP server speaks the standard Model Context Protocol; any MCP-aware client connects. The recipe text is identical — what differs is only how each client renders the result (chat bubble vs. composer panel).
Why prompts instead of a CLI or no-code workflow?
Because the agency owner is already in Claude. Forcing them to switch to a CLI or set up an n8n workflow for a 30-second question is friction. For repeatable scheduled jobs (weekly digest emails, anomaly alerts) we recommend the n8n template at https://arenza.ai/guides/n8n-geo-automation-weekly-digest — but for ad-hoc "just tell me which client is bleeding this week" questions, MCP-in-Claude is the right surface.
How do I get a token?
Go to app.arenza.ai/integrations, click "+ New token", give it a name like "Claude Desktop · my Mac", pick the Read scope, and copy the plaintext token. It is shown once; if you lose it, revoke and recreate.
Is there a usage limit?
Free tier: 100 MCP calls per hour. Pro ($299/mo): 1,000/hour. The Monday standup recipe with 10 brands burns ~11 calls (1 list + 10 overview), so even on Free you can run it ~9 times an hour.
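The budget math generalizes to any brand count: one list_brands call plus one get_brand_overview per brand, divided into your hourly limit.

```typescript
// Back-of-envelope check of the standup math against a per-hour call budget.
function callsPerRun(brandCount: number): number {
  return 1 + brandCount;  // 1 list_brands + 1 overview per brand
}

function runsPerHour(brandCount: number, hourlyLimit: number): number {
  return Math.floor(hourlyLimit / callsPerRun(brandCount));
}

// 10 brands on the Free tier (100 calls/hour):
// callsPerRun(10) === 11, runsPerHour(10, 100) === 9
```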
Related guides
- Install walkthrough: https://arenza.ai/guides/claude-desktop-arenza-mcp-walkthrough-2026
- Cursor quickstart: https://arenza.ai/guides/cursor-mcp-arenza-quickstart
- n8n weekly automation: https://arenza.ai/guides/n8n-geo-automation-weekly-digest
- MCP-native architecture explainer: https://arenza.ai/guides/mcp-native-ai-visibility-architecture