On April 15, Microsoft’s Azure SQL Dev Corner blog quietly dropped a post titled “Introducing SQL MCP Server.” Most people read the headline as another vendor MCP server — Oracle had one, Google had one, now Microsoft has one. Move along.
But if you read past the headline, something much more interesting is happening. Microsoft’s SQL MCP Server isn’t a new product. It’s a feature of Data API Builder (DAB) 1.7 — the open-source runtime that Microsoft has been quietly shipping for the last two years to generate REST and GraphQL APIs from SQL databases.
In other words: Microsoft didn’t build a separate MCP server. They took their existing database-to-API generator, added an MCP transport on top, and shipped one binary that speaks REST, GraphQL, and MCP — all reading from the same entity definitions, enforcing the same RBAC rules, emitting the same telemetry, hitting the same cache.
If that architectural pattern sounds familiar, it should. It’s exactly what Faucet has been shipping since v0.1. The fact that Microsoft — with all of Azure’s weight behind it — independently arrived at the same conclusion is the single most validating event for the unified-data-API thesis we’ve seen this year.
Let’s unpack why this matters, what Microsoft got right, where they got stuck, and what it tells us about how the database tooling layer is going to look in 2027.
What Microsoft Actually Shipped
Data API Builder is a runtime that reads a JSON config file describing your database entities and stands up a REST/GraphQL API in front of them. It supports Microsoft SQL Server, PostgreSQL, MySQL, and Azure Cosmos DB. It handles authentication via Microsoft Entra and custom OAuth, enforces row-level RBAC, integrates with Azure Key Vault for secrets, and supports first- and second-level caching via Redis and Azure Managed Redis.
With DAB 1.7, that same runtime now exposes an MCP endpoint. A single dab start command boots a server that simultaneously serves:
- A REST API under /api/{entity}
- A GraphQL endpoint under /graphql
- An MCP server implementing the 2025-11-25 spec
All three surfaces read from the same dab-config.json. All three enforce the same permissions policy. If you mark the Customer entity as read-only for a given role, that rule applies whether the request comes from a React frontend, a federated GraphQL gateway, or a Claude agent invoking a tool call.
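To make that concrete, here is a heavily abridged sketch of what a dab-config.json entity definition looks like — treat the exact keys as approximate and consult the DAB configuration schema for the authoritative shape. The point is that this single definition feeds REST, GraphQL, and MCP alike:

```json
{
  "data-source": {
    "database-type": "mssql",
    "connection-string": "@env('DATABASE_CONNECTION_STRING')"
  },
  "entities": {
    "Customer": {
      "source": "dbo.Customers",
      "permissions": [
        { "role": "analyst", "actions": ["read"] }
      ]
    }
  }
}
```

Grant `analyst` only `read` here, and every surface inherits the restriction — there is no second place for the rule to drift.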
This is a meaningfully different architectural pattern than what most MCP servers look like today. The first wave of MCP servers — the 10,000+ servers now indexed in public directories — were almost all purpose-built wrappers. Each one bolts onto a specific database driver, exposes a handful of tools — query, list_tables, describe_schema — and calls it a day. They have no governance layer, no caching, no REST or GraphQL equivalent. They exist solely to give an LLM a way to poke at a database.
That architecture was fine for prototypes. It falls apart in production the moment you need to reason about “who queried what, when, why, and what did they change.” And it falls apart twice when you realize you now have two separate code paths into your database: one for your application (REST), and one for your agents (MCP), each with its own auth logic, its own audit trail, its own caching rules, and its own drift.
Microsoft’s DAB answer: collapse those paths. One runtime. One config. One RBAC model. Three protocols.
The Convergence Everybody Is Racing Toward
Microsoft isn’t the only vendor arriving here. Google Cloud announced managed MCP servers for its database portfolio last month, fronted by the same proxy that already handled connection pooling and IAM. Oracle’s AI Database Private Agent Factory — announced at Oracle Data Deep Dive NYC on April 15 — positions the Autonomous Database itself as the control plane, with MCP tools generated from the same schema definitions that power their APEX REST endpoints.
And on the open-source side, Bytebase’s dbhub ships as a zero-dependency MCP server that speaks Postgres, MySQL, SQL Server, MariaDB, and SQLite out of one binary. PostgREST maintainers have been debating how to add an MCP transport for six months.
The pattern across all of these is the same: the REST/GraphQL generator and the MCP server are the same runtime, reading the same schema, enforcing the same policy.
There’s a reason every serious player is landing on this design. It’s not because MCP is magic. It’s because the moment you take governance seriously, you discover that “humans calling REST” and “agents calling tools” are the same workload with different syntax. They both need:
- Authentication — a token, a role, an identity
- Authorization — row-level rules, column masks, operation allowlists
- Connection pooling — you can’t spin up a connection per agent invocation any more than you can per HTTP request
- Caching — agent workloads are even cache-friendlier than human ones, because agents retry and loop
- Observability — you need one unified audit trail, not two
- Rate limiting — agents will happily issue 4,000 queries in a minute if you let them
If you build a standalone MCP server, you end up reimplementing every single one of those layers. If you build it into the existing data API runtime, you get them for free. Microsoft’s insight, Oracle’s insight, Google’s insight, Faucet’s insight — same insight. Different logos.
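A minimal sketch, in Go (the language Faucet's own binary is built in, per the comparison below), of what "same workload, different syntax" means architecturally: one authorization check that both an HTTP handler and an MCP tool dispatcher reduce to. All types and names here are hypothetical, invented for illustration — this is not Faucet's or DAB's actual code:

```go
package main

import "fmt"

// Action is a database operation requested through any protocol surface —
// the same struct whether it arrived as a REST request or an MCP tool call.
type Action struct {
	Role   string // caller identity, resolved from a token or API key
	Entity string // table/entity name
	Op     string // "read", "create", "update", "delete"
}

// policy maps role -> entity -> allowed operations. A real runtime would
// load this from config and layer row/column rules on top.
var policy = map[string]map[string][]string{
	"analyst": {"customers": {"read"}},
}

// authorize is the single governance check shared by every surface.
func authorize(a Action) bool {
	for _, op := range policy[a.Role][a.Entity] {
		if op == a.Op {
			return true
		}
	}
	return false
}

func main() {
	// GET /api/customers and an MCP list_customers tool call both
	// reduce to the same Action — and get the same answer.
	fmt.Println(authorize(Action{Role: "analyst", Entity: "customers", Op: "read"}))   // true
	fmt.Println(authorize(Action{Role: "analyst", Entity: "customers", Op: "delete"})) // false
}
```

Build it standalone and you have rate limiting, caching, and audit logging hanging off one code path instead of two.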
Where Microsoft’s Answer Hits Limits
DAB is a legitimately good piece of software, and SQL MCP Server is a smart move. But if you look at what happens the moment a production team tries to use it outside the Azure walls, some obvious gaps open up.
Database coverage stops at four. DAB supports MSSQL, PostgreSQL, MySQL, and Cosmos DB. That covers a huge chunk of Microsoft-shop workloads. It does not cover Oracle, Snowflake, SQL Server on AWS RDS (technically MSSQL but with different auth semantics in practice), BigQuery, Databricks SQL, or any of the smaller engines that real enterprises have in their inventory. Most enterprise orgs we’ve talked to run 4–7 database engines. DAB covers the first two or three and then you’re back to bolting on standalone MCP servers for the rest — which is exactly the fragmentation problem the unified approach was supposed to solve.
The auth model is Entra-shaped. DAB supports “custom OAuth” but the default integrations — Key Vault, Entra, Managed Identity — are Azure-native. If you’re running on-prem, or on AWS, or in a hybrid setup, you end up writing more glue than you’d like. The RBAC rules live in dab-config.json in a format that’s specific to DAB. Migrating to or from DAB is a rewrite.
It’s a framework, not a single binary. DAB runs on .NET 8. To deploy it you’re looking at a container, a CLR runtime, or a managed Azure offering. That’s fine if you’re an Azure shop. It’s friction if you’re shipping a self-hosted appliance, an edge deployment, a customer-installable product, or anything where “one file, copy, run” is the distribution model.
The MCP surface is generic. DAB’s MCP server exposes broad CRUD tools per entity — roughly what you’d expect from auto-generated REST resources. It does not ship the two-tier tool strategy that the MCP community has been converging on (small set of navigation tools always present, per-entity typed tools loaded on demand via enable_table_tools). With 200 tables exposed, the context window tax adds up fast.
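For contrast, the two-tier pattern described above looks roughly like this — tool names are illustrative, not any vendor's actual surface:

```json
{
  "always_present": ["list_tables", "describe_table", "enable_table_tools"],
  "loaded_on_demand_after_enabling_customers": [
    "customers_select",
    "customers_insert",
    "customers_update"
  ]
}
```

The navigation tier stays small and constant; the typed per-table tier only enters the context window for tables the agent has actually asked about.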
None of these are fatal flaws. They’re natural consequences of where DAB came from: an Azure-data-platform tool that grew a broader mandate. But they define the shape of the niche that’s left for focused, portable, multi-DB runtimes.
What This Means for the Rest of the Ecosystem
The first-order read on Microsoft’s launch is “another big vendor shipped MCP.” The second-order read is considerably more interesting: every major data vendor has now independently decided that the data API and the MCP server are one component. That’s the signal. The noise is which one ships first.
For anyone building in this space, three things fall out:
1. The “one MCP server per database” model is a dead end. If your architecture is 17 different servers with 17 different auth configs, 17 different audit trails, and 17 different context tax bills, you’re already losing to anyone running one unified runtime. We covered this theme on April 12 from the vendor-lock-in angle; Microsoft’s launch is the concrete proof.
2. The governance layer is now the product. RBAC, audit, caching, rate-limiting, secrets rotation, connection pooling — these are table stakes. The MCP tools are easy; the governance underneath is what’s hard. If you’re evaluating an MCP approach for production, the question is no longer “does it speak the protocol” (all of them do) but “does it have a real governance layer, or are you going to build one yourself.”
3. Portability is the open question. Microsoft, Oracle, and Google all want you inside their ecosystem. That’s rational — it’s how platforms work. The gap that opens up is for runtimes that do the unified-data-API-plus-MCP pattern without assuming you’ve picked a cloud. That’s the gap Faucet sits in.
How Faucet Compares
It's worth being specific here, since the Microsoft launch makes the shape of the comparison unusually clean:
| Dimension | DAB 1.7 / SQL MCP Server | Faucet |
|---|---|---|
| Databases supported | MSSQL, PostgreSQL, MySQL, Cosmos DB (4) | PostgreSQL, MySQL, SQL Server, Oracle, Snowflake, SQLite (6) |
| REST API | Auto-generated | Auto-generated |
| GraphQL | Yes | Roadmap (post-v1) |
| MCP server | Yes, via DAB 1.7+ | Yes, built-in |
| Runtime | .NET 8 container/service | Single static Go binary (~40 MB) |
| Config | dab-config.json | SQLite + CLI + embedded UI |
| Auth | Entra / custom OAuth | Local users, OAuth, SSO |
| RBAC | Yes (entity-level + row-level policies) | Yes (role + row filters) |
| Caching | Redis, Azure Managed Redis | In-process, Redis (pluggable) |
| Telemetry | OpenTelemetry → App Insights | OpenTelemetry → anywhere; PostHog product telemetry (opt-out) |
| Deployment | Container, App Service, AKS | `curl \| sh`, or copy one binary |
| License | MIT | Apache 2.0 |
Faucet’s value proposition against DAB is not that DAB is bad. DAB is good. It’s that the moment you’re outside the four databases Microsoft ships connectors for, or outside the Azure identity model, or you need “download one binary and run it” distribution, you want something else. That something else is Faucet.
A Concrete Example: The Same Database, Three Ways
Here’s what the unified data-API-plus-MCP pattern looks like with Faucet. Imagine you have a Postgres database with a customers table. Two commands:
faucet connect postgres://user:pass@db.internal:5432/crm
faucet start
Now this is running:
# REST
curl "https://localhost:8080/api/v1/customers?filter=active=true&limit=50"
# MCP (stdio, for Claude Desktop / Cursor)
faucet mcp
# MCP (HTTP, for remote agents)
curl https://localhost:8080/mcp/v1
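That HTTP endpoint speaks the MCP JSON-RPC framing; a tool call against it looks roughly like the following. The tool name and argument shape are illustrative, not Faucet's documented surface:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_customers",
    "arguments": { "filter": "active = true", "limit": 50 }
  }
}
```

Squint and it's the same request as the REST call above — same entity, same filter, different syntax.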
Add an auth policy:
faucet role create analyst --read customers.name,customers.email --deny customers.ssn
That role now applies to all three surfaces. A human hitting the REST endpoint with an analyst token gets the same redacted view as a Claude agent calling the list_customers tool with an analyst API key. No duplicate config, no drift, no separate audit trails.
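What "same redacted view" means in practice: with the analyst role above, both surfaces return only the permitted columns. An illustrative response (shape and data invented for the example):

```json
{
  "data": [
    { "name": "Ada Lovelace", "email": "ada@example.com" }
  ]
}
```

customers.ssn never appears, whether the caller was curl or a tool invocation.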
Wiring it into an MCP client takes a short config snippet:
{
"mcpServers": {
"crm": {
"command": "faucet",
"args": ["mcp", "--profile", "analyst"]
}
}
}
And when you need typed per-table tools (the two-tier pattern) instead of generic CRUD:
faucet mcp enable-table-tools customers orders invoices
This is the unified pattern, minus the Azure dependency, working across six databases instead of four, shipped as a single static binary.
Zooming Out
The larger story of MCP adoption in 2026 has been: the protocol won, the tooling is catching up, and the governance story is the bottleneck. What Microsoft’s launch this week tells us is that the governance story is converging on a specific architectural answer — the data API runtime and the MCP server are the same runtime.
Every vendor that takes production deployments seriously is going to land here. Google is there. Oracle is there. Microsoft is now there. Postgres-native tools like PostgREST are going to be there within a quarter.
The question for the next 12 months isn’t whether this pattern wins. It already has. The question is which runtime you pick, and whether the one you pick is going to follow you when your database mix changes, your cloud changes, or your distribution model changes.
That’s the design brief Faucet has been solving for. Microsoft’s launch makes the category legible.
Getting Started
If you want to kick the tires:
curl -fsSL https://get.faucet.dev | sh
faucet init
faucet connect postgres://localhost/mydb
faucet start
Then point any MCP client — Claude Desktop, Cursor, Continue, Cline — at http://localhost:8080/mcp/v1 or run faucet mcp as a stdio command. You’ll get REST, GraphQL (soon), and MCP reading from the same governed surface, on top of six databases instead of four, in a single binary instead of a .NET service.
The whole thing is Apache 2.0. Issues, ideas, and benchmark numbers against DAB/Oracle/Google welcome at github.com/faucetdb/faucet.
Kevin is the creator of Faucet. Faucet is a single-binary database-to-API generator with first-class MCP support, built for teams who want the unified data-API pattern without the cloud lock-in.