At the MCP Dev Summit in New York last week, roughly 1,200 engineers watched a pattern crystallize in real time. Speaker after speaker — from Uber, Amazon Web Services, Docker, Kong, and Solo.io — converged on the same architectural conclusion: production AI agents need a gateway.
Not an API gateway. Not a reverse proxy. An MCP gateway — a purpose-built control plane that sits between your AI agents and the MCP servers they consume. It manages sessions, enforces auth, routes requests, provides observability, and gives your security team a single pane of glass into what every agent is doing across every tool.
This isn’t theoretical anymore. Uber’s agentic platform team revealed that they’ve built an internal MCP Gateway and Registry that automatically exposes thousands of internal Thrift, Protobuf, and HTTP endpoints to agents through MCP. Their gateway is the control plane for all agent-to-tool traffic across the company. At Uber’s scale — thousands of engineers, hundreds of services — the gateway isn’t a nice-to-have. It’s the thing that makes agent deployments governable.
The MCP Gateway market has responded accordingly. At least seven gateway products now exist, from Kong’s AI Gateway to Solo.io’s Gloo Gateway to dedicated MCP-first offerings from startups like MintMCP. Composio published architectural guides. API7 released reference implementations. WorkOS mapped the gateway pattern to MCP’s 2026 roadmap priorities.
The consensus is locked in. The question is no longer whether you need an MCP gateway. It’s what sits behind it.
What an MCP Gateway Actually Does
An MCP gateway is a specialized reverse proxy for the Model Context Protocol. It sits between AI agents (Claude Code, Cursor, custom agent frameworks) and the MCP servers those agents consume (database servers, file system servers, code analysis tools, SaaS integrations).
The gateway solves five specific problems that direct agent-to-server connections cannot:
1. Centralized Authentication and Authorization
Without a gateway, every MCP server handles its own auth. Server A uses API keys. Server B uses OAuth. Server C embeds credentials in the config. Your security team has no unified view of who can access what.
A gateway consolidates auth into a single layer. The agent authenticates once against the gateway. The gateway resolves the agent’s identity, checks permissions, and forwards the request to the appropriate backend server with the right credentials. The backend server never sees the agent’s raw token. The agent never sees the backend’s credentials.
This is the same pattern that API gateways established for microservices a decade ago. The MCP ecosystem is rediscovering it because the problems are identical — just with agents instead of frontend clients.
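The credential-swap half of this pattern can be sketched in a few lines. This is an illustrative model, not any real gateway's API — the table names (`AGENT_ROLES`, `BACKEND_CREDS`) and the `forward` function are hypothetical:

```python
# Sketch of the credential-swap a gateway performs on every request.
# All names here are illustrative, not a real gateway's API.

AGENT_ROLES = {"agent-token-abc": "support_agent"}   # gateway-issued token -> resolved role
BACKEND_CREDS = {"orders-db": "backend-secret-xyz"}  # per-backend credential, held only by the gateway

def forward(agent_token: str, backend: str, payload: dict) -> dict:
    role = AGENT_ROLES.get(agent_token)
    if role is None:
        return {"status": 401, "error": "unknown agent"}
    # The backend never sees the agent's raw token;
    # the agent never sees the backend's credential.
    return {
        "status": 200,
        "backend": backend,
        "auth": BACKEND_CREDS[backend],
        "role": role,
        "payload": payload,
    }
```

The key property: the two credential namespaces never mix. Revoking the agent's token or rotating the backend secret are independent operations.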
2. Observability and Audit Logging
The Gravitee State of AI Agent Security 2026 report found that only 24.4% of organizations have full visibility into which AI agents are communicating with each other. More than half of all agents run without any security oversight or logging.
A gateway fixes this by logging every request at the routing layer. Which agent called which tool, when, with what parameters, and what came back. This data feeds into your existing observability stack — OpenTelemetry, Datadog, Grafana — without requiring each MCP server to implement its own logging.
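A minimal sketch of what one such audit record looks like as structured JSON lines — the field names are illustrative, not a standard schema:

```python
import json
import time

def log_tool_call(agent_id: str, tool: str, params: dict, result_summary: str) -> str:
    """Emit one structured audit record per MCP tool call, as a JSON line.

    Field names are illustrative; any log pipeline (OTel, Datadog, Grafana Loki)
    can ingest this shape without per-server logging code.
    """
    record = {
        "ts": time.time(),
        "agent": agent_id,
        "tool": tool,
        "params": params,
        "result": result_summary,
    }
    # In production this line would go to stdout or a collector; here we return it.
    return json.dumps(record)
```

Because the gateway sits on every request path, emitting this record once at the routing layer covers every backend server at no extra cost to them.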
3. Request Routing and Discovery
When you have 20 MCP servers across an organization, agents need to find them. The MCP spec currently lacks a standard discovery mechanism (the .well-known metadata format is on the roadmap but not shipped). A gateway acts as the registry — agents discover available tools through the gateway, not by scanning for individual servers.
Uber’s implementation is the clearest example. Their MCP Registry catalogs every internal service endpoint. Their MCP Gateway exposes those endpoints to agents through MCP. An agent doesn’t need to know that the orders service speaks Thrift and the billing service speaks Protobuf. It just asks the gateway for “order lookup” and gets an MCP tool.
4. Rate Limiting and Resource Protection
AI agents are not like human users. A human might make 10 API calls in a session. An agent exploring a dataset might make 500 in a minute. Without rate limiting at the gateway, a single runaway agent can overwhelm a backend service — or run up a massive bill against a metered API.
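The standard mechanism here is a token bucket, which allows short bursts while capping sustained throughput. A minimal sketch (not any particular gateway's implementation):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: refills at `rate` tokens/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A runaway agent exhausts the burst capacity immediately and is then throttled to the sustained rate, while a well-behaved agent never notices the limiter.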
5. Session Management
MCP’s current transport model creates stateful sessions. This fights load balancers and horizontal scaling. A gateway can manage session affinity, handle reconnections, and abstract the statefulness away from both the agent and the backend server.
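One common way a gateway pins stateful sessions to a replica is hash-based affinity: the same session ID always lands on the same backend, with no shared state in the gateway. A sketch under that assumption:

```python
import hashlib

def backend_for_session(session_id: str, backends: list[str]) -> str:
    """Pin an MCP session to one backend replica via a stable hash of the session ID.

    Deterministic: the same session always maps to the same replica, so the
    stateful session survives load balancing without gateway-side session state.
    """
    digest = hashlib.sha256(session_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(backends)
    return backends[index]
```

Real gateways layer reconnection handling and health checks on top, but the affinity decision itself is this simple.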
The Triple-Gate Security Pattern
The most interesting architectural pattern to emerge from the gateway discussion is what security researchers are calling the “triple-gate” model — defense-in-depth for agent-to-tool communication with three distinct security layers:
Gate 1: Agent → LLM. Protects against prompt injection and PII leakage. This is where input sanitization and output filtering happen. If a user tries to trick an agent into running a malicious query, Gate 1 catches it before the LLM even processes the request.
Gate 2: LLM → MCP Server. Protects against unauthorized tool invocations. This is the MCP gateway itself. It validates that the agent has permission to call the requested tool, with the requested parameters, against the requested resource. Tool authorization, parameter validation, and policy enforcement all happen here.
Gate 3: MCP Server → Backend. Protects against over-privileged backend access. Even if a tool invocation passes Gate 2, the MCP server itself should have scoped credentials for the backend it connects to. A database MCP server shouldn’t use a superuser connection. A file system MCP server shouldn’t have write access to everything.
The triple-gate pattern matters because no single gate is sufficient. Gate 1 alone can’t prevent a tool from being called with valid but unauthorized parameters. Gate 2 alone can’t prevent a compromised MCP server from escalating privileges on the backend. Gate 3 alone can’t prevent an agent from calling tools it shouldn’t have access to. You need all three.
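The composition logic of the three gates can be sketched as a conjunction of independent checks. Each check below is deliberately simplistic and illustrative (real Gate 1 filtering, in particular, is far more involved than a substring match):

```python
def gate1_sanitize(user_input: str) -> bool:
    """Gate 1 (agent -> LLM): reject obvious injection markers before the LLM sees the input.
    A real filter is much more sophisticated; this is a placeholder check."""
    return "ignore previous instructions" not in user_input.lower()

def gate2_authorize(role: str, tool: str, tool_policy: dict) -> bool:
    """Gate 2 (LLM -> MCP server): the gateway checks the role's tool allow-list."""
    return tool in tool_policy.get(role, set())

def gate3_scoped(operation: str, backend_grants: set) -> bool:
    """Gate 3 (MCP server -> backend): the server's own scoped credential
    only permits the operations it was granted."""
    return operation in backend_grants

def request_allowed(user_input, role, tool, operation, tool_policy, backend_grants) -> bool:
    # Defense in depth: all three gates must pass; any single failure blocks the request.
    return (gate1_sanitize(user_input)
            and gate2_authorize(role, tool, tool_policy)
            and gate3_scoped(operation, backend_grants))
```

The point of the conjunction is exactly the argument above: compromising one gate leaves two others standing.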
Where the Database Fits
Here’s what the gateway conversation consistently misses: a gateway routes traffic. It doesn’t generate it.
You can build the most sophisticated MCP gateway in the world — centralized auth, full observability, rate limiting, session management, the triple-gate pattern, the works. But if there’s no MCP server behind it that gives agents structured access to your database, the gateway is a router with nowhere to route.
And database access is, by volume, the most common thing agents need. The MCP Dev Summit made this clear. Uber’s gateway exposes service endpoints — many of which are thin wrappers around database queries. AWS’s agent patterns overwhelmingly involve data retrieval and manipulation. When Lucidworks launched their MCP server on April 8 — designed to reduce AI integration timelines by 10x — the core use case was connecting agents to enterprise data stores.
The Databricks State of AI Agents report found that multi-agent workflows grew 327% in under four months. Those agents aren’t just calling Slack and Jira. They’re querying databases. They’re reading customer records, checking inventory, pulling analytics, writing back results. The data layer is the most common backend behind any MCP gateway.
So the architectural question isn’t “gateway or no gateway.” It’s: what database MCP server sits behind your gateway, and does it handle the security and governance that Gate 3 requires?
Most Database MCP Servers Aren’t Gateway-Ready
The MCP ecosystem has over 10,000 servers indexed across public registries. Dozens of them offer some form of database access. Most of them look like this:
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgres://admin:password@prod:5432/mydb"
      }
    }
  }
}
This gets an agent connected to a database. It also gives the agent full access to every table, every row, and every column that the admin user can see. There’s no RBAC. There’s no audit logging. There’s no column-level filtering. The MCP server is a transparent pipe from the agent to the database, with whatever privileges the connection string carries.
Behind a gateway, this is Gate 3 wide open. The gateway handles auth at Gate 2 — it knows which agent is making the request. But the database MCP server behind it doesn’t distinguish between agents. Every request hits the same connection string with the same privileges. If agent A should only see the orders table and agent B should see orders plus customers, the community Postgres MCP server can’t enforce that. Both agents get everything.
This is the gap the gateway architecture exposes. Gateways push auth and routing upstream. But if the downstream server doesn’t support scoped access, the gateway’s auth decisions don’t translate to actual data-level enforcement.
What a Gateway-Ready Database Layer Looks Like
A database MCP server that works properly behind a gateway needs four things:
1. Role-Based Access Control at the API Layer
Different agents get different roles. Different roles see different tables, different columns, different operations. The MCP server enforces this regardless of what the gateway allows.
roles:
  support_agent:
    tables:
      customers: [read]
      orders: [read]
      products: [read]
    deny_columns:
      customers: [ssn, credit_card]
  analytics_agent:
    tables:
      orders: [read]
      products: [read]
      analytics_events: [read, write]
  order_management_agent:
    tables:
      orders: [read, write]
      order_items: [read, write]
      inventory: [read]
This is Gate 3 enforcement. Even if the gateway forwards a request, the database layer validates that the requesting role has permission for the specific table and operation. An agent that somehow bypasses Gate 2 still can’t read the credit_card column if its role doesn’t allow it.
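The enforcement logic behind a role config like the one above reduces to a table/operation check plus column subtraction. A minimal sketch of that check (illustrative data structures, not Faucet's internals):

```python
# Illustrative in-memory form of the support_agent role from the YAML above.
ROLES = {
    "support_agent": {
        "tables": {"customers": {"read"}, "orders": {"read"}, "products": {"read"}},
        "deny_columns": {"customers": {"ssn", "credit_card"}},
    },
}

def authorize(role: str, table: str, op: str, columns: list[str]):
    """Return the columns the role may access, or None if the table/op is denied outright."""
    spec = ROLES.get(role)
    if spec is None or op not in spec["tables"].get(table, set()):
        return None  # unknown role, unknown table, or disallowed operation
    denied = spec.get("deny_columns", {}).get(table, set())
    return [c for c in columns if c not in denied]
```

Denied columns are filtered rather than erroring the whole query, so an agent asking for `*` simply never sees `ssn` or `credit_card`.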
2. API Key Identity, Not Shared Credentials
Each agent (or agent role) authenticates to the database layer with its own API key. The database layer maps that key to a role. The role determines access. Revoking an agent’s access means revoking one API key — not rotating the database password that 15 other services depend on.
# Create a scoped API key for the support agent
faucet apikey create --role support_agent --name "support-bot-prod"
# → faucet_ak_8x2m...
# Create a different key for the analytics agent
faucet apikey create --role analytics_agent --name "analytics-pipeline"
# → faucet_ak_3j7n...
# Revoke the support agent without touching anything else
faucet apikey revoke faucet_ak_8x2m...
The gateway propagates the agent identity. The database layer maps that identity to permissions. Neither layer alone is sufficient — but together, they implement defense-in-depth from Gate 2 through Gate 3.
3. Audit Logging Per Request
Every query the agent runs — through REST or MCP — is logged with the API key identity, timestamp, table accessed, operation performed, and filter parameters. When your security team asks “what did the support agent query last Tuesday between 2pm and 4pm,” the answer is a log query, not an investigation.
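With records in a structured JSON-lines format, that "log query" is a few lines of filtering. A sketch, assuming each record carries `agent` and `ts` fields as in the logging example earlier (field names are illustrative):

```python
import json

def queries_between(log_lines: list[str], agent: str, start_ts: float, end_ts: float) -> list[dict]:
    """Answer 'what did this agent query in this window' from JSON-lines audit records.

    Assumes each line is a JSON object with illustrative `agent` and `ts` fields.
    """
    hits = []
    for line in log_lines:
        rec = json.loads(line)
        if rec["agent"] == agent and start_ts <= rec["ts"] <= end_ts:
            hits.append(rec)
    return hits
```

The same filter expressed in your log backend's query language (LogQL, Datadog queries, SQL over a log table) is equally short — the point is that the answer is mechanical, not forensic.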
4. Schema-Driven Tool Generation
A gateway-ready database MCP server shouldn’t require hand-written tool definitions for every table. When your schema changes — a new table, a new column, a renamed field — the MCP tools should update automatically. Otherwise you’re back to the configuration drift problem that gateways are supposed to solve.
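The shape of schema-driven generation is straightforward: walk the live schema, emit one tool definition per table. A sketch, assuming a schema snapshot of the kind you would read from `information_schema` (the snapshot format and tool shapes here are illustrative):

```python
def tools_from_schema(schema: dict) -> list[dict]:
    """Derive one read tool per table from a live schema snapshot.

    `schema` maps table name -> list of column names, as might be read from
    information_schema. The output follows the MCP tool shape (name,
    description, JSON Schema inputSchema); column types are simplified
    to strings for illustration.
    """
    tools = []
    for table, columns in schema.items():
        tools.append({
            "name": f"query_{table}",
            "description": f"Read rows from the {table} table",
            "inputSchema": {
                "type": "object",
                "properties": {col: {"type": "string"} for col in columns},
            },
        })
    return tools
```

Because the tools are derived rather than hand-written, a migration that adds a column shows up in the agent's tool surface on the next schema read, with no config change to drift.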
Faucet Behind a Gateway
This is exactly what Faucet provides. One binary that sits behind your MCP gateway and handles the database layer with built-in RBAC, API key auth, audit logging, and automatic tool generation from your live schema.
# Install Faucet
curl -fsSL https://get.faucet.dev | sh
# Start Faucet pointed at your database
faucet serve --db "postgres://reader:${DB_PASS}@db.internal:5432/production"
Faucet now serves both REST and MCP endpoints. Your gateway routes agent traffic to Faucet. Faucet enforces Gate 3 — role-based access, column filtering, audit logging — regardless of what the gateway allows.
The architecture looks like this:
AI Agent (Claude Code, Cursor, custom)
│
▼
┌─────────────────────────┐
│ MCP Gateway │ ← Gate 2: auth, routing, rate limiting
│ (Kong, Solo.io, etc.) │
└─────────┬───────────────┘
│
┌─────┴──────┐
▼ ▼
┌────────┐ ┌────────────┐
│ Faucet │ │ Other MCP │
│ (DB) │ │ Servers │
│ │ │ (Slack, │
│ Gate 3 │ │ GitHub, │
│ RBAC │ │ Jira...) │
│ Audit │ │ │
└───┬────┘ └────────────┘
│
▼
┌──────────┐
│ Database │
│ (PG, MY, │
│ MSSQL) │
└──────────┘
Connect Faucet to your gateway using standard MCP transport:
{
  "mcpServers": {
    "production-db": {
      "url": "http://faucet.internal:8080/mcp",
      "headers": {
        "X-API-Key": "${FAUCET_AGENT_KEY}"
      }
    }
  }
}
Or if your agents connect directly through Claude Code:
claude mcp add production-db -- faucet mcp --db "postgres://reader:${DB_PASS}@db.internal:5432/production"
Either way, the agent gets structured, scoped, audited access to your database. The gateway handles routing. Faucet handles enforcement. The database stays untouched.
Multi-Database Behind a Single Gateway
The gateway pattern becomes especially powerful when you have multiple databases. Instead of routing agents to different vendor-specific MCP servers — one for PostgreSQL, another for MySQL, a third for SQL Server — you route them all to Faucet instances that serve a consistent interface.
# PostgreSQL application database
faucet serve --db "postgres://reader:${PG_PASS}@pg.internal:5432/app" --port 8081
# MySQL legacy billing system
faucet serve --db "mysql://reader:${MY_PASS}@my.internal:3306/billing" --port 8082
# SQL Server analytics warehouse
faucet serve --db "sqlserver://reader:${MS_PASS}@ms.internal:1433?database=warehouse" --port 8083
Your gateway routes production-app requests to port 8081, billing requests to port 8082, and analytics requests to port 8083. The agent interacts with all three through identical MCP tool schemas. Same filtering syntax. Same pagination model. Same RBAC enforcement. Three databases, one consistent API, one gateway.
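Conceptually, the gateway's contribution here is a small routing table from logical server name to Faucet upstream. A sketch of that table (the hostnames and the `upstream_for` helper are hypothetical; your gateway expresses this in its own config language):

```python
# Hypothetical routing table: logical MCP server name -> Faucet upstream URL.
ROUTES = {
    "production-app": "http://faucet-pg.internal:8081/mcp",
    "billing":        "http://faucet-my.internal:8082/mcp",
    "analytics":      "http://faucet-ms.internal:8083/mcp",
}

def upstream_for(server_name: str) -> str:
    """Resolve a logical server name to its Faucet upstream, failing loudly on unknowns."""
    try:
        return ROUTES[server_name]
    except KeyError:
        raise ValueError(f"no route for MCP server: {server_name}")
```

Because every upstream is a Faucet instance, the routing decision is the *only* thing that differs per database; the tool schemas on the other side are identical.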
Compare this to the vendor-fragmented approach: Google’s MCP Toolbox for the PostgreSQL instance, a community MySQL MCP server, and Microsoft’s Azure SQL MCP server for the warehouse. Three different tool schemas. Three authentication models. Three deployment stories. Your gateway can route to all three, but the agent still has to understand three different interfaces.
What About Teams Without a Gateway?
Not every team needs a gateway on day one. If you have five engineers and one database, a full gateway deployment is over-engineering.
Faucet works fine as a standalone MCP server. The RBAC, API key auth, and audit logging are built into the binary — they don’t require a gateway to function. You get Gate 3 enforcement even when Gate 2 doesn’t exist yet.
# Standalone: Faucet serves REST + MCP directly
faucet serve --db "postgres://user:pass@localhost:5432/mydb"
# Agent connects directly
claude mcp add my-db -- faucet mcp --db "postgres://user:pass@localhost:5432/mydb"
As your agent deployment scales — more agents, more databases, more teams — you can slot a gateway in front of Faucet without changing anything about the Faucet configuration. The gateway adds Gate 2. Faucet continues handling Gate 3. The transition from “direct connection” to “governed gateway architecture” is adding a routing layer, not rebuilding your database access.
This is the right adoption path: start with a governed database layer (Faucet), add a gateway when scale demands it. Not the other way around. A gateway without a governed database layer behind it is security theater — it looks good in the architecture diagram, but the agents still have unrestricted access to your data.
The Numbers Driving This
The convergence on the gateway pattern isn’t arbitrary. It’s driven by scale:
- 327% growth in multi-agent workflows on Databricks in four months
- 1,445% surge in enterprise inquiries about multi-agent systems (Gartner, Q1 2024 to Q2 2025)
- 10,000+ MCP servers across public registries
- 97 million MCP SDK installs
- Only 24.4% of organizations have full visibility into agent-to-agent communication (Gravitee)
- More than half of all agents run without security oversight or logging
These numbers describe an ecosystem that’s outgrown point-to-point connections. Gateways are the inevitable response — the same way API gateways became inevitable when microservices outgrew direct service-to-service calls.
But gateways don’t solve the last mile. They solve routing, auth, and observability at the network layer. The data layer — what agents can read, what they can write, what they can see — still requires enforcement at the source. That’s what Faucet provides.
Getting Started
You don’t need a gateway to get governed database access for your agents today. You need Faucet.
# Install Faucet — single binary, no dependencies
curl -fsSL https://get.faucet.dev | sh
# Start serving your database
faucet serve --db "postgres://user:pass@localhost:5432/mydb"
# Connect your agent
claude mcp add my-db -- faucet mcp --db "postgres://user:pass@localhost:5432/mydb"
Set up RBAC before your security team asks:
faucet role add ai-reader --description "Read-only agent access"
faucet role grant ai-reader mydb --tables customers,orders --verbs GET
faucet role deny-columns ai-reader mydb.customers --columns email,phone
faucet apikey create --role ai-reader --name "my-agent"
When you’re ready for a gateway, Faucet slots in behind it with zero configuration changes. Gate 2 handles routing. Gate 3 handles enforcement. Your database stays secure at every scale.
The MCP gateway is becoming the new API gateway. Make sure your database is ready for what sits behind it.