Google's gRPC Transport for MCP: What It Signals About the Database API Stack

Google is pushing gRPC as a first-class MCP transport alongside stdio and HTTP/SSE. The interesting story isn’t the protocol buffers — it’s what “pluggable transports” means for the data layer that sits behind every MCP server.

Last week, Google Cloud’s networking team published a post making the case for gRPC as a native transport for the Model Context Protocol. It’s not a surprise announcement — Google engineers had filed the formal proposal back in February, and MCP maintainers had already agreed to make transports pluggable in the SDK. What changed in April is that Google is now contributing and distributing the transport package itself.

If you only read the headline, this looks like a niche performance story: replace JSON-RPC with protocol buffers, swap HTTP for HTTP/2, get bidirectional streaming, move on. That framing undersells it. The interesting thing isn’t gRPC — it’s that MCP is becoming a protocol with a transport layer abstraction, and that has direct implications for anyone building the data plumbing that sits behind MCP servers.

This post is about that second-order effect. If you’re shipping MCP servers in front of a database — or thinking about it — the pluggable transport conversation will hit your roadmap inside the next two quarters.

What Google Actually Proposed

The original MCP spec defined two transports: stdio (great for local CLI agents) and HTTP with Server-Sent Events (great for browsers, awkward for everything else). That covered the early days when most MCP servers were either a Python script run by Claude Desktop or a hosted endpoint behind a web app.

Then enterprises started adopting MCP. AWS, Cloudflare, Microsoft, and Oracle all shipped production servers in Q1. And enterprises tend to have opinions about transport. Specifically, they have opinions about gRPC, because gRPC is what their service mesh, their observability stack, their auth layer, and their internal microservices already speak.

Google’s proposal — submitted in February, formalized in April — adds three things that text-based JSON-RPC over HTTP doesn’t give you:

  1. Protocol buffers instead of JSON. Binary payloads, smaller on the wire, schema-validated at compile time. For agents that round-trip large tool results (think: a query that returns 5,000 rows of customer data), this matters.
  2. HTTP/2 instead of HTTP/1.1. Multiplexed streams over a single persistent connection. The current MCP HTTP transport opens a new request for every tool call, then uses SSE to stream results back. With HTTP/2, you get bidirectional streaming on one connection, and you skip the SSE workaround entirely.
  3. First-class type safety. Tool definitions become .proto files. Your client knows the schema before it makes the call. Drift between the server’s tool schema and the client’s expectations becomes a compile-time error instead of a runtime exception.
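Point 3 is easiest to see as a schema. Here is a hypothetical sketch of what a generated tool definition could look like as a .proto file — the service and message names are invented for illustration, not taken from Google’s actual proposal:

```protobuf
// Hypothetical sketch: one RPC per tool. The client compiles this schema,
// so a drifted field name or type fails the build, not the tool call.
syntax = "proto3";

service McpTools {
  // Streaming response fits large result sets (point 2's HTTP/2 streams).
  rpc QueryCustomers(QueryCustomersRequest) returns (stream CustomerRow);
}

message QueryCustomersRequest {
  int32 limit = 1;
}

message CustomerRow {
  int64 id = 1;
  string name = 2;
}
```

Compare that with today’s JSON Schema tool definitions, which the client can only validate at runtime, after the call has already been made.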

For agents running thousands of tool calls per minute against an internal service mesh, the case writes itself. For a CLI tool that fires a tool call every few seconds, it doesn’t matter at all. Google was honest about this — their post explicitly says that for smaller operations, the difference might be negligible.
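The wire-size argument from point 1 can be sanity-checked with a rough sketch. This is not real protobuf encoding — `struct.pack` stands in for binary framing, and the row shape is invented — but it shows where the saving comes from: JSON repeats every field name and quotes every value in every row, while a binary format does neither.

```python
import json
import struct

# Hypothetical tool result: 5,000 rows of (id, balance, active).
# Illustrative only -- not Faucet's actual row format.
rows = [(i, 1234.56 + i, i % 2 == 0) for i in range(5000)]

# JSON-RPC style payload: field names repeated in every row.
json_payload = json.dumps(
    [{"id": r[0], "balance": r[1], "active": r[2]} for r in rows]
).encode("utf-8")

# Binary framing in the spirit of protobuf, approximated with struct.pack:
# fixed-width fields (int64, double, bool), no repeated keys, no quoting.
# Real protobuf uses tag/varint encoding, but the shape of the saving holds.
binary_payload = b"".join(
    struct.pack("<qd?", r[0], float(r[1]), r[2]) for r in rows
)

print(f"JSON: {len(json_payload)} bytes, binary: {len(binary_payload)} bytes")
```

For a handful of rows the gap is noise — which matches Google’s own caveat about smaller operations — but it scales linearly with result size.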

But the gRPC transport itself isn’t the load-bearing part of the announcement. The load-bearing part is what it implies about MCP as a protocol.

“Pluggable Transports” Is the Real News

When the MCP maintainers agreed to support pluggable transports, MCP stopped being “a protocol” and started being “a protocol family.” That’s a meaningful change.

Concretely: in a year, you’re going to have MCP servers that speak stdio for local development, HTTP/SSE for web clients, and gRPC for production east-west traffic. Your client picks the transport based on where it’s running. Your server has to support all of them — or pick one and accept that some clients can’t reach you.

This pattern isn’t new. SQL has wire protocols (PostgreSQL’s, MySQL’s, the proprietary ones) and most ORMs abstract them. HTTP frameworks support HTTP/1.1, HTTP/2, and HTTP/3 transparently. gRPC itself supports multiple wire formats. The MCP ecosystem is following the same evolution: the abstraction surface widens, the implementation surface fragments, and the people who suffer most are the ones building the integration layer underneath.

That last group is where this gets interesting for anyone working on database-to-MCP plumbing.

The Data Plane Is Already Doing This Translation

Here’s the part that doesn’t get enough attention. Most MCP servers in production today are not standalone services talking to a model. They’re a thin shim in front of a real backend system — a database, an internal API, a SaaS platform. The MCP layer is the wire format the agent sees. What it actually wraps is an existing query engine, REST endpoint, or RPC service.

If you’re running, say, a PostgreSQL MCP server, the actual flow looks like this:

agent  →  MCP transport (stdio/HTTP/gRPC)
       →  MCP server (parse, auth, route)
       →  database driver (pgx, sqlx, etc.)
       →  PostgreSQL wire protocol
       →  database

The MCP server in the middle is doing transport translation, schema translation, and access control. When the protocol it accepts on the front end gains a new transport (gRPC), nothing about the back-end translation changes. The database doesn’t care. The driver doesn’t care. The SQL doesn’t care.

This is the architectural argument for separating your data API layer from your MCP server. If those two are the same binary, every transport change in MCP forces a redeploy of the thing that talks to your database. If they’re separate, the data API is stable and the MCP layer is a thin wrapper that you can swap, upgrade, or run multiple versions of in parallel.
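The separation argued above can be sketched in a few lines — invented names, not Faucet’s internals: one stable query engine, and each transport as a thin adapter on top of it.

```python
class QueryEngine:
    """The stable data plane: tables and policy, no knowledge of transports."""
    def __init__(self, tables):
        self.tables = tables

    def query(self, table, limit=100):
        return self.tables[table][:limit]


class StdioJsonRpcAdapter:
    """Front end for today's stdio transport: parses JSON-RPC, delegates."""
    def __init__(self, engine):
        self.engine = engine

    def handle(self, request):
        params = request["params"]
        rows = self.engine.query(params["table"], params.get("limit", 100))
        return {"jsonrpc": "2.0", "id": request["id"], "result": rows}


class GrpcAdapter:
    """A future gRPC front end: decodes a protobuf message, delegates.
    (Decoding elided -- the point is that only this class is new.)"""
    def __init__(self, engine):
        self.engine = engine

    def handle(self, decoded):
        return self.engine.query(decoded["table"], decoded.get("limit", 100))


engine = QueryEngine({"customers": [{"id": 1}, {"id": 2}, {"id": 3}]})
stdio = StdioJsonRpcAdapter(engine)
print(stdio.handle({"jsonrpc": "2.0", "id": 1,
                    "params": {"table": "customers", "limit": 2}}))
```

Adding a transport means adding one adapter class; the engine — the thing that actually talks to the database — never redeploys.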

This is how Faucet is built, and it’s not because we predicted gRPC. It’s because the same logic applied to REST a year ago, and it’ll apply to whatever protocol shows up next.

What This Looks Like in Practice with Faucet

Faucet’s job is to turn a database into a typed REST and MCP API. The MCP server it ships is a transport over the same internal query engine that powers REST. When a new MCP transport ships, the query engine doesn’t change — only the wrapper does.

A concrete example. Here’s the standard Faucet MCP server today, talking JSON-RPC over stdio:

faucet mcp serve --transport stdio

When the gRPC transport lands in the MCP Go SDK (the official SDK that Faucet uses), adding it is a flag change:

faucet mcp serve --transport grpc --addr :50051

Same database. Same RBAC rules. Same query engine. Same tool definitions auto-generated from your schema. The transport is a swap, not a rewrite, because the data plane sits underneath the protocol layer instead of being tangled into it.

The same separation works the other way. Faucet’s REST API is the same query engine wearing a different hat:

faucet serve --port 8080  # REST + OpenAPI
curl "http://localhost:8080/api/v2/postgres/_table/customers?limit=10"

Both endpoints — /api/v2/... for REST and the MCP tool layer for agents — read from the same internal table representation, enforce the same role-based filters, and respect the same per-table tool configuration. The protocol is a serialization concern. The data and the policy are the load-bearing parts.

This is the boring lesson that the gRPC announcement quietly reinforces: if your MCP server is doing anything more than the bare minimum protocol translation, it’s coupled to a transport that’s about to become one of several. Decouple it now.

What “Pluggable Transports” Means for Auth

There’s a security thread here that’s worth pulling on.

The MCP authorization spec — the OAuth 2.1 flow that Anthropic published last month — was designed against the HTTP transport. It assumes there’s an HTTP server to host the authorization endpoint, an HTTP redirect to receive the callback, and HTTP headers to carry the resulting access token.

That all works fine for HTTP/SSE. It also works for HTTP/2-based gRPC, because the auth happens at the HTTP layer underneath gRPC. But it does not work cleanly for stdio, which is why the spec quietly punts on local-mode auth and leaves it to the client to handle out-of-band.

If you’re building an MCP server that needs to serve agents over multiple transports — a perfectly reasonable thing to do for a hosted service — your auth story now has at least three flavors:

  • stdio: trust the local user, optionally read credentials from a file or env var
  • HTTP/SSE: full OAuth 2.1 flow with browser redirect
  • gRPC: TLS client certificates or mTLS, with OAuth tokens carried in metadata headers

The pragmatic answer is to not let the MCP layer own auth. Push it down to the data API layer, where it’s already a solved problem (RBAC against an identity provider, per-table or per-row policies, audit logging). The MCP server then becomes a transport adapter that forwards an authenticated identity downstream. This is, again, the same answer as before: keep the data plane stable and let the protocol layer be the thing that changes.
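That division of labor can be sketched as follows. All names here are invented: each transport resolves credentials its own way, but every one of them hands the data layer the same normalized identity, and only the data layer knows the policy.

```python
class DataAPI:
    """Owns policy: per-table RBAC checked against a normalized identity."""
    ACL = {"customers": {"analyst", "admin"}}

    def query(self, identity, table):
        if identity["role"] not in self.ACL.get(table, set()):
            raise PermissionError(f"{identity['user']} may not read {table}")
        return [{"id": 1}]  # placeholder rows


def identity_from_stdio(env):
    # stdio: trust the local user; role read from an env var or config file.
    return {"user": env.get("USER", "local"), "role": env.get("ROLE", "analyst")}


def identity_from_http(headers, introspect):
    # HTTP/SSE: full OAuth 2.1; `introspect` (stubbed by the caller) validates
    # the bearer token against the identity provider.
    return introspect(headers["Authorization"].removeprefix("Bearer "))


def identity_from_grpc(metadata, introspect):
    # gRPC: mTLS terminates below this layer; the OAuth token rides in
    # metadata and resolves to the same identity shape as the other two.
    return introspect(metadata["authorization"].removeprefix("Bearer "))


api = DataAPI()
ident = identity_from_stdio({"USER": "dana", "ROLE": "analyst"})
print(api.query(ident, "customers"))
```

The MCP adapters never see the ACL; they only translate transport-specific credentials into one identity shape, which is what keeps the policy stable when a fourth transport shows up.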

What to Watch in Q2

A few things to track as the gRPC transport moves from proposal to shipped code:

  1. The Go and Python MCP SDK 1.3 releases. Both SDKs need to land the pluggable transport interface before any third-party transport (gRPC included) can register itself. The Go SDK is closer; expect this in the next month.

  2. Whether Anthropic, OpenAI, and Microsoft adopt gRPC client support. A transport that only one vendor’s client speaks is a fragmentation event, not a standard. Google has the political weight to push this, but enterprise MCP adoption depends on the major model vendors agreeing.

  3. Whether the MCP gateway pattern picks up gRPC support first. Cloudflare, AWS, and a handful of startups are shipping MCP gateways — proxies that aggregate many backend MCP servers behind a single client-facing endpoint. Gateways will probably support gRPC before individual servers do, because the performance gain compounds when you’re aggregating thousands of tool calls.

  4. What happens to the Streamable HTTP spec. MCP added a “Streamable HTTP” transport last year as a middle ground between SSE and full bidirectional streaming. If gRPC lands cleanly, Streamable HTTP becomes the awkward middle child — useful for browsers, redundant for backends.

The thing not to watch: vendors who claim gRPC support and then ship a wrapper around HTTP/SSE that pretends to be gRPC. There will be at least one of these.

The Pattern Underneath

Every six months for the last two years, the MCP ecosystem has surfaced a “this changes everything” announcement. OAuth support. Tool annotations. Streamable HTTP. The A2A coordination layer. Now gRPC transport. Each one is real, each one solves a real problem, and each one shifts the ground under teams that built tightly coupled MCP integrations.

The defensive move is the same every time. Treat MCP as one protocol your data needs to speak, not as the architectural center of your stack. Keep your access control, your query engine, and your schema definitions in a layer that doesn’t know what protocol the request came in on. Let the protocol layer be a thin, swappable wrapper.

When the next “this changes everything” announcement lands — and it will, probably in July — the teams that did this won’t notice. The teams that didn’t will spend a week refactoring.

Getting Started

Faucet generates a typed REST API and MCP server from any SQL database in one command. RBAC, OpenAPI 3.1, and MCP tools are wired up automatically from your schema, so the protocol layer stays thin and the data plane stays stable.

curl -fsSL https://get.faucet.dev | sh

Point it at a database:

faucet connect postgres "postgres://user:pass@localhost/mydb"
faucet serve

You get REST endpoints at http://localhost:8080/api/v2/postgres/_table/... and an MCP server you can register with any compatible client:

faucet mcp serve --transport stdio

When the gRPC transport ships in the official MCP SDK, switching is a flag. The query engine, RBAC rules, and tool definitions don’t change.

Source is on GitHub at github.com/faucetdb/faucet. Questions, issues, and bug reports welcome.