
What is MCP Analytics? A Practical Guide for SaaS Product Teams



MCP analytics is the practice of exposing analytics data, metrics, and dashboards to AI agents through the Model Context Protocol, so users can query that data conversationally from any MCP-compatible AI assistant. For SaaS product teams, the highest-value version isn't internal: it's exposing your product's analytics to your customers' AI agents, so the data you collect on their behalf becomes addressable from their own AI workflows.

TL;DR

  • MCP (Model Context Protocol) is an open standard from Anthropic, now broadly adopted across AI agents, for letting AI assistants call tools and access data sources.
  • MCP analytics = exposing analytics data (metrics, dashboards, queries) to AI agents via MCP servers, so the data can be queried in natural language from any MCP-compatible client.
  • Two flavours: internal (your team's AI assistant queries your warehouse) and customer-facing (your SaaS customers' AI agents query the data your product collects about their business).
  • For SaaS product teams, customer-facing is the strategic one. It makes your product part of your customers' AI workflows, not a destination they have to context-switch into.
  • Building blocks: a governed semantic / metric layer, an MCP server, multi-tenant isolation, and authentication that respects your product's existing permission model.

What is MCP, in one paragraph

The Model Context Protocol is an open standard, originated by Anthropic and adopted across the AI ecosystem, for letting AI agents call external tools and read external data. An MCP server exposes a set of capabilities (read this database, run this query, fetch this file, write this record) over a well-defined interface. Any MCP-compatible client (Claude, Cursor, an internal agent your team built, a customer-built agent) can connect to that server and use those capabilities to answer questions or take actions. Think of it as USB for AI agents: a single connector that lets one device (the agent) talk to many peripherals (your tools and data) without custom plumbing for each pairing.
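To make the "well-defined interface" concrete, here is a minimal sketch of what tool discovery looks like from the client's side. This is illustrative Python, not the official MCP SDK; the response shape follows the spec's tool-definition format, but the tool name and schema fields are assumptions for an analytics server.

```python
# Illustrative sketch (not the official MCP SDK): the shape of a
# tools/list response an analytics MCP server might return.
analytics_tools = {
    "tools": [
        {
            "name": "run_query",
            "description": "Run a governed metric query and return rows.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "metrics": {"type": "array", "items": {"type": "string"}},
                    "dimensions": {"type": "array", "items": {"type": "string"}},
                    "time_range": {"type": "string"},
                },
                "required": ["metrics"],
            },
        }
    ]
}

def discover_tool_names(listing: dict) -> list[str]:
    """What an MCP client does first: enumerate what it is allowed to call."""
    return [tool["name"] for tool in listing["tools"]]

print(discover_tool_names(analytics_tools))  # ['run_query']
```

The point of the standard is that any compatible agent can run this discovery step against any server, with no custom plumbing per pairing.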

What "MCP analytics" actually means

Two distinct things often get bundled under the same label, and they are worth separating up front.

Internal MCP analytics

Your team uses an AI assistant (Claude Desktop, Cursor, an internal agent) and you want it to be able to query your data. You stand up an MCP server in front of your warehouse, your dbt project, your BI tool, or your semantic layer. The assistant can now answer questions like "what was MRR last month broken down by region" without anyone manually exporting a CSV.

This is real and useful. Several vendors (Cube, Hex, Metabase, others) ship MCP servers aimed squarely at this internal use case. If you're a data team trying to make your existing analytics stack queryable by AI, this is the obvious starting point.

Customer-facing MCP analytics

This is the version that matters most for SaaS product teams. You expose the data your product collects about your customers' businesses via an MCP server, scoped to each customer's tenant. Your customer's AI agent (whatever they're using, wherever they're using it) can now query their own data inside your product as easily as it queries their email, their CRM, or their warehouse.

Why it matters: AI agents are increasingly the interface buyers use to orchestrate their work across SaaS tools. If your product's data isn't reachable from those agents, it gets routed around. If it is, your product becomes part of the customer's AI workflow rather than a destination they have to switch into.

The customer-facing version is also harder. Multi-tenant data isolation, per-customer authentication, governed metric definitions so the AI doesn't hallucinate aggregates, rate limits and cost control. None of these are insurmountable, but they're the difference between "we shipped an internal AI experiment in a sprint" and "we shipped a production capability our customers depend on."

Why this matters now

A few signals point at the same conclusion.

MCP adoption is broad and accelerating. A year after launch, MCP servers exist for most major SaaS tools (Slack, Linear, GitHub, GitLab, Google Drive, Notion, dozens more) and most major AI agents support it. The standard has crossed the threshold from "interesting" to "default expectation".

Embedded analytics is moving from "dashboards in our app" to "data your customers' systems can read". The shift parallels what happened to APIs in the 2010s: from "we have a UI" to "we have a UI and an API". Now: from "we have a UI" to "we have a UI and an MCP server". Customers will increasingly expect both.

AI-native buyers ask AI-native questions. A customer using Claude or another assistant to manage their workflow doesn't want to log into eight portals to gather context. They want their agent to gather context for them. If your product's analytics aren't agent-addressable, your customers' AI workflows route around you.

What you need to build MCP analytics into your SaaS product

The components are well understood. The integration work is real but bounded.

1. A governed metric / semantic layer

Without this, AI agents will hallucinate aggregates. Ask an LLM "what was our revenue last quarter" against raw tables and it will sometimes write SQL that sums the wrong column, ignores returns, or double-counts duplicates. A semantic layer (Cube, dbt Semantic Layer, LookML, or equivalent) lets you define metrics once, with the joins and filters that make them correct, and have the AI query those metrics rather than improvise SQL.

This is the single biggest determinant of MCP analytics quality. If your metrics aren't defined, the AI will define them for you, badly. Get this right first.

2. An MCP server in front of the semantic layer

The server exposes the semantic layer's metrics and dimensions as MCP tools. AI agents discover what they can query (list_metrics, list_dimensions, run_query) and the server enforces "you can only query things defined in the semantic layer". Cube ships one, Embeddable's semantic layer is Cube-based so the same building blocks apply, and there are open-source MCP-server implementations for most other semantic-layer tools.
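The server side of those tools can be sketched as a dispatcher. Again this is illustrative Python rather than a real MCP server implementation; the handler names mirror the tools mentioned above, and the enforcement point is the check against the semantic layer's definitions.

```python
# Sketch of MCP tool-call dispatch (illustrative, not the official SDK).
# The server's job: answer discovery calls and refuse undefined metrics.
def handle_tool_call(name: str, arguments: dict, metrics: dict) -> dict:
    if name == "list_metrics":
        return {"metrics": sorted(metrics)}
    if name == "run_query":
        requested = arguments.get("metrics", [])
        unknown = [m for m in requested if m not in metrics]
        if unknown:
            # Enforcement point: agents may only query defined metrics.
            return {"error": f"Undefined metrics: {unknown}"}
        return {"status": "ok", "metrics": requested}
    return {"error": f"Unknown tool: {name}"}

defined = {"mrr": {}, "churn_rate": {}}
print(handle_tool_call("list_metrics", {}, defined))
print(handle_tool_call("run_query", {"metrics": ["arr"]}, defined))
```

The error path is as important as the happy path: a clear "undefined metric" response steers the agent back to `list_metrics` instead of letting it guess.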

3. Multi-tenant data isolation

Critical for customer-facing MCP analytics. Each MCP connection must be scoped to one customer's data. Usually implemented as row-level security in the underlying database plus per-tenant authentication on the MCP server. If your existing embedded-analytics setup already handles tenant isolation correctly (most do), you can extend the same model to MCP without rebuilding it.
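As a toy illustration of the scoping idea (real implementations push this into the database as row-level security policies rather than rewriting SQL strings; the table and column names here are assumptions):

```python
# Sketch: every query passing through the MCP server is forced through
# a tenant predicate, so one connection can never read another
# customer's rows. Illustrative only; prefer database-enforced RLS.
def scope_to_tenant(sql: str, tenant_id: str) -> str:
    clause = f"tenant_id = '{tenant_id}'"
    if " where " in sql.lower():
        return f"{sql} AND {clause}"
    return f"{sql} WHERE {clause}"

print(scope_to_tenant("SELECT SUM(amount) FROM subscriptions", "acme"))
```

The key property, however you implement it: the tenant filter is applied by the server, unconditionally, not by anything the agent sends.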

4. Authentication that respects your existing permission model

If your product has roles (admin, member, viewer) and your dashboards respect them, your MCP server should too. The AI agent should see the data the user it's acting on behalf of can see, not the union of all users in the account. Typically achieved by passing a per-user JWT or API token to the MCP server and using it to scope queries.
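A sketch of that scoping step, assuming the token has already been verified (the claim names, roles, and dataset names are illustrative, not any particular product's model):

```python
# Sketch: derive the agent's data scope from the verified claims of the
# per-user token passed to the MCP server. The agent sees what the user
# it acts for can see, never the union of all users in the account.
ROLE_VISIBILITY = {
    "admin": {"revenue", "usage", "billing"},
    "member": {"usage"},
    "viewer": {"usage"},
}

def visible_datasets(claims: dict) -> set[str]:
    # Default to the least-privileged role if the claim is missing.
    return ROLE_VISIBILITY.get(claims.get("role", "viewer"), set())

print(visible_datasets({"sub": "user_42", "tenant": "acme", "role": "member"}))
```

Defaulting to the least-privileged role on a missing or unrecognised claim is the safe failure mode here.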

5. Rate limiting and cost control

LLMs can loop. A misconfigured agent can hammer your MCP server with hundreds of queries in seconds. Rate limits per connection, query timeouts, and a simple "AI-agent-friendly" query optimizer in front of your warehouse will save you embarrassment and bills.
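The per-connection limit is typically a token bucket. A minimal sketch (capacity and refill rate are illustrative; production systems would also enforce query timeouts and per-tenant quotas):

```python
# Sketch: a per-connection token bucket. A looping agent exhausts the
# bucket and gets throttled instead of hammering the warehouse.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top up proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(7)]
print(results)  # first 5 allowed, the rest throttled
```

Return a clear "rate limited, retry after N seconds" error from the MCP server so well-behaved agents back off instead of retrying immediately.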

Common pitfalls

A few things go wrong predictably.

Skipping the semantic layer. Tempting because "let the AI write SQL" feels powerful in a demo. Painful in production because metrics drift, joins are wrong, and customers see different answers to the same question. Teams invariably end up retrofitting a semantic layer later. Better to start with one.

Building one MCP server per database, not per product capability. Customers don't want fifteen MCP servers for the fifteen apps in their stack; they want one per app, with that app's capabilities expressed clearly. Your MCP server should expose your product's metrics and dashboards, not "everything in the underlying warehouse". Curation is a feature, not a limitation.

Underestimating the access-control work. Multi-tenant security gets harder when an AI agent is the caller. Agents may make queries the original user never explicitly asked for. Logging which user-on-behalf-of-which-tenant initiated which agent-driven query is necessary for audit, debugging, and your customers' compliance teams.

Not measuring AI-agent usage separately from human usage. Once MCP traffic ramps, you want to see it in your analytics. Tag queries by origin (UI / API / MCP) so you can answer "how much of our query volume is AI-driven now" and price accordingly.
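The tagging itself is trivial once every query carries an origin field. A sketch (field names and origins are illustrative):

```python
# Sketch: tag each query with its origin so AI-driven volume is
# measurable separately from human and API usage.
from collections import Counter

query_log = [
    {"origin": "ui", "metric": "mrr"},
    {"origin": "mcp", "metric": "mrr"},
    {"origin": "mcp", "metric": "churn_rate"},
    {"origin": "api", "metric": "usage"},
]

volume_by_origin = Counter(q["origin"] for q in query_log)
ai_share = volume_by_origin["mcp"] / len(query_log)
print(volume_by_origin, f"AI share: {ai_share:.0%}")
```

Wiring this in on day one costs almost nothing; reconstructing it from logs after MCP traffic has ramped is much harder.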

Where this is heading

Two near-term trends to plan for.

MCP becomes table-stakes for B2B SaaS in 2026-2027. The same way "do you have a public API" became a procurement checkbox item between 2012 and 2018, "do you have an MCP server" is becoming one between 2025 and 2027. The earliest movers won't win because they had MCP first; they'll win because they had time to get the governance, security, and discoverability right.

Embedded-analytics vendors converge on MCP-first architecture. Embedded analytics tools that already have a strong semantic-layer story (Cube and Cube-based tools like Embeddable, Looker, ThoughtSpot, Hex) have a structural head start because the semantic layer is the natural place to expose MCP from. Tools without one will either build it or buy it. The "drop in our SDK and you have charts" generation of embedded analytics is being supplemented by a "drop in our MCP server and your customers' AI agents see your data" generation.

Vendor options

If you're evaluating tools that help you ship MCP analytics, a few categories matter.

  • Semantic-layer-first analytics platforms with active MCP support. Cube is the clearest example: open-source semantic layer with an official Cube MCP server, multi-tenant support, large community. Embeddable is built on Cube.js and is developer-first and code-owned, with AI features including an AI Model Builder, an AI skill for dashboards-as-code via Claude Code, and an AI analytics chat for end users. Looker brings LookML and a deep semantic layer, paired with Gemini-powered conversational interfaces.
  • Embedded BI tools adding MCP integration. Metabase Embedded (open-source heritage, broad ecosystem reach), ThoughtSpot Embedded (search-first AI BI, Spotter agentic positioning), Sisense (Compose SDK and AI Assistant). Maturity of explicit MCP support varies by vendor; check current docs and changelogs rather than relying on marketing pages.
  • Pure MCP-server frameworks. Useful if you want to build the MCP layer yourself in front of an existing warehouse and semantic stack. Several open-source implementations exist; the Anthropic MCP spec page lists current servers.

A practical heuristic: if your product already has a semantic layer (or could get one cheaply), the MCP add is incremental. If it doesn't, the semantic-layer build is the larger of the two projects.

For an honest picture of how these vendors compare on more than just MCP, see embeddable.com/customer-stories for customer outcomes, and individual vendor docs for current capability detail.

FAQ

Is MCP analytics the same as agentic analytics?

Related but not identical. Agentic analytics is the broader idea that AI agents (not just chatbots) increasingly read, write, and reason about analytics data on behalf of users. MCP analytics is one specific way to enable that: the protocol the agents use to call your analytics. You can have agentic analytics without MCP (proprietary integrations, custom function-calling) but MCP is the open standard everyone is converging on. See our agentic analytics primer for the broader pattern.

Do I need MCP if I already have an AI chat experience in my product?

Maybe not for the same use case. An in-product AI chat is one app, one server. MCP makes your data addressable from other apps' agents. If you only ever expect customers to use AI inside your product, in-product chat is enough. If you expect them to use AI elsewhere and want your product included, MCP is the answer.

How is MCP different from a REST API?

A REST API is a contract for application code to call you. MCP is a contract for AI agents to call you. The shape is different: MCP exposes discoverable tools with semantic descriptions so an LLM can decide which to use; REST exposes endpoints designed for developers to read documentation and code against. Most products will end up with both, serving different consumers.

Is MCP analytics a security risk?

No more than any other API surface, when done correctly. The same multi-tenant isolation, authentication, rate limiting, and audit logging that protect your product's other interfaces should apply to the MCP one. The novel risk is AI agents making more queries than humans would, sometimes recursively; rate limits and query timeouts mitigate this.

How do I price MCP-driven usage?

Three common models: pass-through (customer's AI tokens, your normal subscription pricing), AI-query meter (charge per MCP call beyond a threshold), or value-based (charge for AI-augmented tiers of your product). The right answer depends on how AI-driven usage correlates with the value your product provides. Expect this to evolve over the next 12-24 months as patterns settle.

Does Embeddable support MCP analytics today?

Embeddable is built on Cube.js, which has active MCP integration in its ecosystem, so the semantic-layer substrate is already in place. Embeddable's AI features include an AI Model Builder (generates Cube.js data models from SQL), an AI skill for dashboards-as-code via Claude Code, and an AI analytics chat for end users.


This piece is part of Embeddable's series on emerging AI/agentic analytics categories. See also: What is agentic analytics? and AI BI tools for SaaS products: product analytics or embedded analytics?.

Last updated: 12 May 2026.