
Every Database Vendor Now Ships an MCP Server. That's the Problem.

Google Cloud, Snowflake, and Oracle have all launched managed MCP servers for their databases. Anthropic launched Claude Managed Agents. OpenAI is sunsetting its Assistants API in favor of MCP-based tool calling in the Responses API. The protocol won — but each vendor's MCP server only talks to that vendor's database, creating a new layer of lock-in that multi-cloud teams can't afford.

In the past 90 days, something remarkable happened to the Model Context Protocol. It stopped being a protocol and became a battleground.

Google Cloud shipped managed MCP servers for AlloyDB, Spanner, Cloud SQL, Firestore, and Bigtable. Snowflake launched managed MCP servers within their governance boundary. Oracle built a native MCP server into Autonomous AI Database. Anthropic released Claude Managed Agents at $0.08 per session hour. OpenAI announced the Assistants API sunset on August 26, 2026, with MCP as the migration path in its Responses API.

Every major player in the data infrastructure stack has converged on MCP as the protocol for agent-to-database communication. The MCP ecosystem now has over 10,000 active servers, 97 million monthly SDK downloads, and first-class client support in Claude, ChatGPT, Cursor, Gemini, and VS Code. The Agentic AI Foundation under the Linux Foundation governs the spec. The protocol won.

But here’s the thing nobody’s saying out loud: every vendor shipped an MCP server that only talks to their database. And if your infrastructure spans more than one vendor — which, according to Flexera’s 2026 State of the Cloud report, applies to 89% of enterprises — you’re now managing a different MCP server for every database platform in your stack.

We solved the protocol fragmentation problem. We created a vendor fragmentation problem.

What the Managed MCP Landscape Actually Looks Like

Let’s walk through what each vendor shipped and what it means for your agent architecture.

Google Cloud: Five Databases, Five MCP Servers

Google’s managed MCP servers cover their entire database portfolio. AlloyDB for PostgreSQL workloads. Cloud SQL for MySQL, PostgreSQL, and SQL Server. Spanner for globally distributed data. Firestore for document databases. Bigtable for wide-column NoSQL. Google also announced plans for managed MCP support on Looker, Pub/Sub, Memorystore, and Database Migration Service.

The integration is genuinely seamless — within Google Cloud. You configure the MCP server endpoint in your agent’s configuration. No infrastructure to deploy. Enterprise-grade auditing, observability, and governance are built in. An agent can discover schemas, run queries, diagnose slow queries, and perform vector similarity searches.

But these MCP servers are Google Cloud services. They authenticate through Google Cloud IAM. They run within Google’s network. They assume your data is in AlloyDB, or Spanner, or Cloud SQL. If your PostgreSQL is on AWS RDS, or your MySQL is on-prem, or your data warehouse is Snowflake instead of BigQuery — you need a different MCP server.

Snowflake: Governed Access Within the Perimeter

Snowflake’s managed MCP server exposes Cortex Analyst for natural-language-to-SQL over structured data and Cortex Search for semantic search over unstructured content. It runs within Snowflake’s governance boundary with OAuth-based authentication, and it interoperates with Anthropic, CrewAI, Cursor, Salesforce, and IDE plugins.

The key phrase is “within Snowflake’s secure perimeter.” That’s a feature for Snowflake customers. It’s a wall for everyone else. An agent that can query Snowflake through Snowflake’s managed MCP server still can’t query the PostgreSQL database sitting next to it without an entirely separate MCP server from a different vendor with different auth, different config, and different operational characteristics.

Oracle: Built Into Autonomous Database

Oracle’s Autonomous AI Database MCP server shipped in March 2026. Agents can explore schemas, run queries, and iterate on analysis through standard MCP. It’s native to Oracle’s cloud infrastructure.

Same pattern. Excellent for Oracle shops. Irrelevant if your agent needs to query both Oracle and MySQL in the same workflow.

Anthropic: The Agent Runtime That Needs Your Data

Claude Managed Agents, launched April 8, is a different piece of the puzzle. It’s not a database MCP server — it’s a cloud-hosted agent runtime. Anthropic handles sandboxing, orchestration, scaling, and governance. You focus on agent logic. Notion, Rakuten, Asana, Sentry, and Allianz are already building on it.

But a managed agent runtime without managed data access is an engine without fuel. These cloud-hosted agents need to reach your databases. If your data is in Google Cloud, you use Google’s MCP servers. If it’s in Snowflake, you use Snowflake’s. If it’s in both, you configure both. If it’s in neither — if it’s in a self-hosted PostgreSQL, or a MySQL on a private network, or an SQLite file on an edge device — you’re on your own.

The Multi-Database Reality

Here’s the scenario that’s becoming standard across engineering organizations. You have:

  • A PostgreSQL database on AWS RDS for your application data
  • A Snowflake warehouse for analytics
  • A MySQL database that a legacy service depends on
  • An SQLite database embedded in a desktop application
  • A SQL Server instance that finance runs their reporting against

That’s five databases across two clouds, two on-prem environments, and a desktop deployment. Not unusual. In fact, Gartner’s data shows that the average enterprise runs 3.4 different database engines in production.

To give your AI agents MCP access to all five, you need:

  1. PostgreSQL on RDS: No managed MCP server from AWS (yet). You’d use Google’s MCP Toolbox for Databases, an open-source community server, or build your own.
  2. Snowflake: Snowflake’s managed MCP server. Authenticate through Snowflake OAuth. Configure Cortex Analyst.
  3. MySQL on-prem: No managed option. Community server or DIY.
  4. SQLite: No vendor offers a managed MCP server for SQLite. Community server or DIY.
  5. SQL Server on-prem: No managed option outside of Google Cloud SQL’s MCP server, which requires your SQL Server to be in Google Cloud.

That’s five different MCP server configurations. At least three different authentication mechanisms. Two operational models — managed and self-hosted — split across them. Five different sets of documentation, upgrade paths, and failure modes.
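To make that concrete, here is roughly what the client-side wiring ends up looking like. Every server name, command, URL, and auth field below is illustrative — each MCP client and vendor defines its own configuration schema and credential flow — but the shape is the point: five entries, five endpoints, a mix of managed services and self-hosted processes.

```json
{
  "mcpServers": {
    "rds-postgres": { "command": "community-postgres-mcp", "args": ["postgres://app@rds-host:5432/production"] },
    "snowflake": { "url": "https://account.snowflakecomputing.com/mcp", "auth": "snowflake-oauth" },
    "mysql-legacy": { "command": "community-mysql-mcp", "args": ["--host", "10.0.1.50"] },
    "sqlite-local": { "command": "community-sqlite-mcp", "args": ["/opt/app/local.db"] },
    "sqlserver-reporting": { "command": "community-mssql-mcp", "args": ["--host", "sql-host", "--db", "reporting"] }
  }
}
```

Five blocks to keep in sync, each with its own credentials, upgrade cadence, and failure modes.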

Your agent doesn’t care about any of this. It just wants to query data.

The Context Window Cost of Fragmentation

Perplexity’s CTO Denis Yarats flagged this at Ask 2026 in March: in a deployment with three MCP servers, tool schemas and protocol overhead consumed 143,000 of 200,000 available context tokens — 72% of the context window — before any user intent was processed. That left only 57,000 tokens for actual reasoning.

Perplexity moved away from MCP internally because of this. But the problem isn’t MCP. The problem is MCP server proliferation. Each MCP server you add dumps its full schema into the context window. The more vendor-specific MCP servers your agent connects to, the less context it has for actual work.

Five database vendors, five MCP servers, five sets of tool definitions — and your agent is spending most of its tokens describing its tools instead of using them.
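The arithmetic is worth making explicit. A quick sketch — the per-server average and the linear-scaling projection are illustrative assumptions; only the 143,000-of-200,000 totals come from the talk:

```python
# Back-of-the-envelope math for the Perplexity deployment: three MCP servers'
# tool schemas and protocol overhead reportedly consumed 143,000 of 200,000
# context tokens before any user intent was processed.
context_window = 200_000
schema_overhead = 143_000  # reported total for 3 servers
servers = 3

overhead_pct = round(schema_overhead * 100 / context_window)  # 72
remaining = context_window - schema_overhead                  # 57,000

print(f"overhead share: {overhead_pct}%")
print(f"tokens left for reasoning: {remaining:,}")

# Illustrative assumption: if schema overhead scales roughly linearly,
# five vendor servers would need ~238,000 tokens of schema alone --
# more than the entire context window, before the agent does any work.
per_server = schema_overhead / servers
print(f"projected overhead for 5 servers: {per_server * 5:,.0f}")
```

Even if real overhead grows sublinearly, the direction is clear: every additional vendor-specific server shrinks the budget left for reasoning.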

One Binary, Any Database, One MCP Server

This is the problem Faucet was built to solve. Not the protocol problem — MCP solved that. The fragmentation problem.

faucet serve \
  --db "postgres://app:secret@rds-host:5432/production" \
  --db "snowflake://analytics:token@account.snowflakecomputing.com/warehouse" \
  --db "mysql://legacy:pass@10.0.1.50:3306/orders" \
  --db "sqlite:///opt/app/local.db" \
  --db "sqlserver://finance:pass@sql-host:1433/reporting"

Five databases. One Faucet instance. One MCP server. One set of tool definitions in the context window. One authentication layer. One operational model.

When an agent connects to Faucet’s MCP server, it sees every table across every connected database through a single, consistent interface:

faucet mcp --db "postgres://host:5432/app" --db "mysql://host:3306/orders"

The agent discovers schemas, runs queries, and performs CRUD operations across PostgreSQL, MySQL, SQL Server, Oracle, Snowflake, and SQLite — through one MCP server with one set of tool definitions. No context window bloat from duplicate tool schemas. No auth fragmentation.

And because Faucet also generates a full REST API with OpenAPI 3.1 documentation, the same configuration that powers your MCP server also gives you HTTP endpoints for services that don’t speak MCP:

# Same databases, REST API with OpenAPI docs
curl "http://localhost:8080/api/v1/orders?status=shipped&_limit=50"

# Full OpenAPI 3.1 spec, auto-generated
curl http://localhost:8080/api/openapi.json

RBAC That Works Across Vendors

Each vendor’s managed MCP server uses that vendor’s access control model. Google uses IAM. Snowflake uses roles and OAuth. Oracle uses database grants. When your agent needs access to all three, you’re defining permissions in three different systems with three different mental models.

Faucet applies one RBAC layer across all connected databases:

# Define access policy once
faucet config add-role analyst \
  --tables "production.customers:read,orders.*:read,reporting.*:read" \
  --filter "production.customers.region=us-east"

# Apply to any connection
faucet serve --db "postgres://..." --db "mysql://..." --role analyst

The analyst role can read customers (filtered to US East) from PostgreSQL, all tables from the MySQL orders database, and all tables from the SQL Server reporting database. One role definition. Applied consistently. No per-vendor configuration drift.

What About Google’s MCP Toolbox for Databases?

Google also offers MCP Toolbox for Databases (formerly Gen AI Toolbox for Databases), an open-source middleware layer that supports PostgreSQL, MySQL, SQL Server, and Spanner. It’s a legitimate multi-database option.

The difference: Toolbox is middleware that you deploy as a separate service. It requires its own infrastructure, its own scaling, its own monitoring. It’s designed for Google Cloud’s ecosystem — the Java SDK announcement makes it clear this is optimized for Google Cloud agent frameworks.

Faucet is a single binary. No middleware. No JVM. No cloud dependency. Download it, point it at your databases, and you have a REST API and MCP server running locally or on any server. It works on a laptop, in a Docker container, on a Raspberry Pi, or in any cloud — because it’s just a Go binary.

# Install and run. That's it.
curl -fsSL https://get.faucet.dev | sh
faucet serve --db "postgres://localhost:5432/mydb"

The OpenAI Migration Angle

OpenAI’s Assistants API sunsets on August 26, 2026. The migration path is the Responses API with native MCP support. Developers who’ve built agents on the Assistants API are now moving to an architecture that expects MCP servers for external tool integration.

This means a wave of agents that previously used OpenAI’s proprietary tool calling are about to start looking for MCP servers. Stripe, Shopify, and Twilio already have official MCP integrations for the Responses API. But database access? Most teams will need to stand up their own MCP server.
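For teams doing that migration, the database side can collapse to a single MCP entry. A sketch under stated assumptions: the Faucet endpoint URL and path are hypothetical (check Faucet's docs for the real path), and the tool shape follows OpenAI's published remote-MCP tool type for the Responses API:

```python
# Hypothetical sketch of a Responses API request whose only database tool
# is one Faucet MCP endpoint (URL/path assumed, not from Faucet's docs).
request = {
    "model": "gpt-4.1",
    "input": "Total revenue from shipped orders this week, by region.",
    "tools": [{
        "type": "mcp",                # Responses API remote-MCP tool type
        "server_label": "faucet",     # one label, every connected database
        "server_url": "https://faucet.internal.example.com/mcp",
        "require_approval": "never",
    }],
}

# With the official SDK this would be sent as:
#   client.responses.create(**request)
print(len(request["tools"]))  # one MCP server entry instead of five
```

One tool entry means one schema in the context window, regardless of how many databases sit behind it.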

If you’re migrating from Assistants API to Responses API and your agent needs database access, the choice is:

  1. Deploy vendor-specific MCP servers for each database platform you use
  2. Build a custom MCP server from scratch
  3. Run faucet mcp --db "your://connection/string" and be done in 30 seconds

The migration deadline is four months away. Option 3 takes the database access problem off your plate entirely.

The Protocol Won. Don’t Let Vendors Own Your Access Layer.

MCP becoming the universal standard for agent-to-tool communication is genuinely good for the ecosystem. One protocol instead of dozens of proprietary integrations. Interoperability across Claude, ChatGPT, Gemini, Cursor, and every framework that implements the spec. This is how infrastructure should work.

But “universal protocol” doesn’t mean “universal access.” Each vendor’s managed MCP server is a feature of their platform, not a feature of the protocol. Using Snowflake’s MCP server requires being a Snowflake customer. Using Google Cloud’s MCP servers requires running your databases on Google Cloud. Using Oracle’s requires Oracle.

The protocol layer is open. The data access layer is getting walled off.

Faucet keeps both layers open. One binary. Six database engines. Full REST API. Full MCP server. Full RBAC. No vendor dependencies. Run it anywhere.

Getting Started

Install Faucet in one command:

curl -fsSL https://get.faucet.dev | sh

Connect to any PostgreSQL, MySQL, SQL Server, Oracle, SQLite, or Snowflake database and get a production-ready REST API with OpenAPI docs, RBAC, and a built-in MCP server — in under 60 seconds.

Your agents don’t care which cloud your data lives in. Your MCP server shouldn’t either.