In March 2026, The New Stack published a detailed report on MCP’s biggest production growing pains. The headline problems: stateful sessions that fight load balancers, no standardized audit trails, authentication tied to static secrets, undefined gateway behavior, and zero configuration management across server fleets.
These are real problems. They are not theoretical. Enterprises are hitting them right now, at scale. The report cites cases where 40+ developers end up running 200+ MCP connections within six weeks — with zero central visibility into what those connections are doing.
But here’s the thing nobody in the MCP discourse seems willing to say plainly: most of these problems are problems for people building custom MCP servers from scratch. If you’re using a purpose-built, production-tested MCP server for your specific use case, the calculus changes completely.
Let’s break down the four priority areas and see which ones actually apply to database access.
Problem 1: Stateful Sessions vs. Load Balancers
MCP’s current transport model creates a stateful session between client and server. The client connects, negotiates capabilities, and then issues tool calls against that persistent session. This is fine on a developer’s laptop. It’s a disaster behind a load balancer.
In a standard horizontal scaling setup, requests get distributed across multiple server instances. Stateful sessions break this. Request 1 goes to server A and establishes a session. Request 2 goes to server B, which has no idea about the session on server A. You’re stuck with sticky sessions, which defeats the purpose of load balancing, or you need an external session store, which adds latency and complexity.
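The failure mode is easy to reproduce in miniature. Here is a minimal Python sketch (purely illustrative, not MCP wire-accurate): two server instances each keep sessions in local memory, and a naive round-robin balancer sends consecutive requests to different instances.

```python
# Two stateful server instances, each with its own in-memory session table.
class StatefulServer:
    def __init__(self, name):
        self.name = name
        self.sessions = {}  # session_id -> negotiated capabilities

    def open_session(self, session_id):
        self.sessions[session_id] = {"capabilities": ["tools/call"]}

    def call_tool(self, session_id, tool):
        if session_id not in self.sessions:
            return f"{self.name}: unknown session {session_id}"
        return f"{self.name}: ran {tool}"


servers = [StatefulServer("A"), StatefulServer("B")]


def round_robin(i):
    # A naive load balancer: request i goes to instance i mod N.
    return servers[i % len(servers)]


# Request 0 lands on server A and establishes the session...
round_robin(0).open_session("s1")
# ...request 1 lands on server B, which has never seen "s1".
result = round_robin(1).call_tool("s1", "query")
```

Server B can only reject the call: the session exists nowhere but in server A's memory, which is exactly why sticky sessions or an external session store become mandatory.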
The MCP roadmap acknowledges this. The protocol team is working on evolving the transport and session model to support horizontal scaling. The .well-known metadata format is coming to solve discovery. But these are future solutions. Production teams need answers now.
How Faucet sidesteps this: Faucet runs as a single process. One binary, one port, one process. For the vast majority of database MCP use cases — teams of 5 to 50 engineers building agents that query production data — you don’t need horizontal scaling of the MCP server itself. You need a fast, well-connected MCP server sitting close to your database.
Faucet handles concurrent connections within a single process using Go’s goroutine model and a tuned connection pool (100 max connections, 25 idle by default). That’s enough to serve hundreds of concurrent agent sessions without breaking a sweat. If you genuinely need to scale beyond that, you run multiple Faucet instances pointed at the same database — each one is stateless from the client’s perspective because the state is in the database, not the MCP server.
# One binary. One command. No session state to manage.
faucet serve --db "postgres://reader:${DB_PASS}@db.internal:5432/production"
The session problem is real for MCP servers that maintain in-memory state — file system watchers, code analysis engines, long-running workflows. A database MCP server has no reason to hold state. The database is the state.
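The reason a database-backed server scales trivially fits in a few lines: when every instance answers requests from a shared store, the balancer's routing choice stops mattering. A hypothetical Python sketch, with a dict standing in for the database:

```python
# The "database" is the only stateful component in the system.
DATABASE = {"customers": [{"id": 1, "name": "Ada"}]}


class StatelessServer:
    """Each instance holds no session state; every request is self-contained."""

    def __init__(self, name, db):
        self.name = name
        self.db = db

    def handle(self, table):
        # Any instance can answer any request: the state is in the database.
        return self.db[table]


instances = [StatelessServer(f"faucet-{i}", DATABASE) for i in range(3)]

# Spray requests across all three instances; every answer is identical.
answers = [inst.handle("customers") for inst in instances]
```

Because every instance gives the same answer, a load balancer can route each request anywhere, with no sticky sessions and no shared session store.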
Problem 2: Enterprise Deployment Gaps
This is the one that should scare you. The New Stack report identifies three specific gaps:
- No standardized audit trails. The MCP spec doesn’t define how servers should log tool invocations. Most open-source MCP servers log nothing. When your security team asks “what queries did the AI agent run last Tuesday?”, you have no answer.
- Auth tied to static secrets. The typical MCP configuration embeds API keys or database credentials directly in the client config. Those secrets are static, shared across environments, and rarely rotated. There’s no standard for token refresh, OAuth flows, or short-lived credentials.
- Undefined gateway behavior. There’s no spec for how an MCP gateway should route requests, enforce policies, or aggregate traffic from multiple servers. Every vendor does it differently. Most don’t do it at all.
The enterprise numbers make this urgent. According to McKinsey’s 2025 Global Survey, 88% of organizations now use AI automation in at least one business function. But only 33% have scaled AI deployment beyond pilot programs. The gap between pilot and production is almost always governance — and MCP’s governance story is currently a blank page.
The 40+ developer scenario from The New Stack is telling: within six weeks, a mid-size engineering org can accumulate 200+ MCP connections with different credentials, different access patterns, and no centralized visibility. That’s shadow IT, but for AI agents.
How Faucet handles this:
Faucet ships with audit logging, RBAC, and API key management built in. Not as plugins. Not as enterprise add-ons. In the open-source binary.
# Create a read-only role for AI agents
faucet role add ai-agent --description "Scoped read access for production agents"
# Grant access to specific tables with specific operations
faucet role grant ai-agent production \
  --tables customers,orders,products,inventory \
  --verbs GET
# Hide PII columns from agent access
faucet role deny-columns ai-agent production.customers \
  --columns ssn,credit_card,date_of_birth
# Generate a scoped API key
faucet apikey create --role ai-agent --name "support-bot-prod"
# → Created API key: faucet_ak_7f3b... (role: ai-agent)
Every request through Faucet — REST or MCP — is logged with the API key identity, timestamp, table accessed, operation performed, and filter parameters. When security asks what the agent did last Tuesday, you grep the log or query the audit table. Answer in seconds, not days.
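Mechanically, answering “what did the agent do last Tuesday?” is just a filter over structured log records. A hedged Python sketch, assuming JSON-lines audit records with fields like those described above (the exact field names in Faucet’s log may differ):

```python
import json
from datetime import date

# Example audit records in JSON-lines form (field names are illustrative).
AUDIT_LOG = """
{"ts": "2026-03-10T09:12:03Z", "api_key": "faucet_ak_7f3b", "table": "orders", "op": "GET", "filters": {"status": "open"}}
{"ts": "2026-03-10T09:12:09Z", "api_key": "faucet_ak_9c1d", "table": "customers", "op": "GET", "filters": {}}
{"ts": "2026-03-11T14:02:44Z", "api_key": "faucet_ak_7f3b", "table": "inventory", "op": "GET", "filters": {}}
""".strip()


def queries_on(day, api_key):
    """Return every operation a given key performed on a given day."""
    hits = []
    for line in AUDIT_LOG.splitlines():
        rec = json.loads(line)
        if rec["api_key"] == api_key and rec["ts"][:10] == day.isoformat():
            hits.append((rec["op"], rec["table"]))
    return hits


activity = queries_on(date(2026, 3, 10), "faucet_ak_7f3b")
```

The same query works as a grep over the log file or a WHERE clause over an audit table; the point is that the records exist and carry an identity to filter on.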
The auth model isn’t tied to static database credentials. The agent authenticates to Faucet with an API key. Faucet authenticates to the database with its own connection. The agent never sees database credentials. You rotate API keys independently of database passwords. You revoke an agent’s access without touching the database at all.
# Revoke a compromised agent key without touching the database
faucet apikey revoke faucet_ak_7f3b...
# The database connection stays up. Other agents keep working.
# Only the compromised key loses access.
This is a fundamentally different architecture from embedding postgres://user:password@host in every agent’s MCP config. And it’s the architecture that enterprises need.
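The indirection is the whole trick: the agent’s credential and the database credential are separate objects with separate lifecycles. A minimal Python sketch of the pattern (all names hypothetical):

```python
# The agent-facing credential store, separate from the database credential.
API_KEYS = {"faucet_ak_7f3b": "ai-agent", "faucet_ak_2a91": "order-writer"}

# The one credential that actually reaches the database, owned by the proxy.
DB_DSN = "postgres://reader:***@db.internal:5432/production"


def authorize(api_key):
    """Map an agent key to a role, or refuse. The DSN is never exposed."""
    role = API_KEYS.get(api_key)
    if role is None:
        raise PermissionError("unknown or revoked key")
    return role


def revoke(api_key):
    # Revocation touches only the key store; DB_DSN and the pool survive.
    API_KEYS.pop(api_key, None)


role_before = authorize("faucet_ak_2a91")
revoke("faucet_ak_7f3b")
```

Revoking one key removes one row from the key store; the database connection, the DSN, and every other agent are untouched.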
Problem 3: Discovery — Finding What a Server Does
The MCP spec currently has no standard way to learn what a server does without connecting to it. You connect, negotiate capabilities, and then discover the tool list. This makes it impossible to build catalogs, registries, or governance layers that can inspect servers before agents start using them.
The roadmap solution is a .well-known metadata format — a standard file that describes a server’s capabilities, required credentials, and supported tools without requiring a live connection. Good idea. Not shipped yet.
How Faucet handles this: Faucet auto-generates OpenAPI 3.1 documentation from your database schema, filtered through your RBAC rules. The OpenAPI spec is available at a static endpoint (/api/docs) and describes every table, every field, every operation, and every filter parameter — scoped to the requesting role.
This means your governance tools can inspect Faucet’s capabilities without connecting as an agent. Your service catalog can index the OpenAPI spec. Your security team can review the tool surface area before a single agent query runs.
# OpenAPI spec, filtered by role
curl -H "X-API-Key: ${AGENT_KEY}" https://faucet.internal:8080/api/docs
# Returns JSON describing exactly which tables, fields,
# and operations this key can access. Nothing more.
It’s not the .well-known format. But it solves the same problem today.
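Role-scoped introspection is conceptually simple: start from the full schema and subtract everything the role cannot touch. A hypothetical Python sketch of that filtering step (Faucet’s actual OpenAPI generation is more involved):

```python
# Full schema as discovered from the database (illustrative).
SCHEMA = {
    "customers": ["id", "name", "email", "ssn"],
    "orders": ["id", "customer_id", "total"],
    "employees": ["id", "salary"],
}

# RBAC rules for one role, mirroring the grant/deny model above.
ROLE = {
    "tables": {"customers", "orders"},
    "deny_columns": {"customers": {"ssn", "email"}},
}


def scoped_spec(schema, role):
    """Project the schema down to what this role may see."""
    spec = {}
    for table, columns in schema.items():
        if table not in role["tables"]:
            continue  # ungranted tables are invisible, not merely forbidden
        hidden = role["deny_columns"].get(table, set())
        spec[table] = [c for c in columns if c not in hidden]
    return spec


visible = scoped_spec(SCHEMA, ROLE)
```

Ungranted tables simply never appear in the output, which is why a governance tool reading the role-scoped spec sees the true surface area rather than a superset of it.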
Problem 4: Configuration Drift
This is the silent killer. When you have 20 MCP servers across an organization, each configured independently, their configurations drift. One team upgrades their server and the tool schema changes. Another team changes a database password and forgets to update the MCP config. A third team adds a new table to the database and the MCP server doesn’t pick it up.
There’s no standard for MCP server configuration management. No Terraform provider. No Kubernetes operator. No GitOps workflow. Each server is a snowflake.
How Faucet handles this: There’s no configuration to drift. Faucet reads your database schema at startup and generates the API dynamically. Add a table to PostgreSQL, restart Faucet (or wait for the schema refresh interval), and the new table appears in the API and MCP tool list automatically. No config files to update. No tool definitions to write.
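The zero-config claim rests on schema reflection: the server asks the database what exists and derives the API from the answer. A minimal Python sketch of the idea using SQLite’s catalog table (a Postgres-backed server would query its information schema instead; the `query_` tool naming is invented for illustration):

```python
import sqlite3

# Stand-in for a production database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")


def reflect_tools(conn):
    """Derive one hypothetical tool name per table from the catalog."""
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    return [f"query_{name}" for (name,) in rows]


tools = reflect_tools(db)

# Add a table and re-reflect: the new tool appears with no config change.
db.execute("CREATE TABLE inventory (sku TEXT, qty INTEGER)")
tools_after = reflect_tools(db)
```

Because the tool list is derived rather than declared, there is no hand-maintained artifact that can drift out of sync with the schema.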
The only configuration that matters is your RBAC rules, which live in Faucet’s config store (~/.faucet/config.yaml). One file. Version it in Git. Apply it with faucet config apply. Done.
# ~/.faucet/config.yaml — the only config you manage
roles:
  ai-agent:
    description: "Scoped read access for production agents"
    grants:
      production:
        tables: [customers, orders, products, inventory]
        verbs: [GET]
    deny_columns:
      production.customers: [ssn, credit_card, date_of_birth]
  order-writer:
    description: "Write access for order management agent"
    grants:
      production:
        tables: [orders, order_items]
        verbs: [GET, POST, PUT]
Compare this to managing tool definitions for a custom MCP server where every table requires a hand-written tool with typed input schemas, validation logic, and permission checks. That’s where configuration drift starts. Faucet eliminates the surface area.
The Bigger Picture
At Uber, the engineering team has said it plainly: “MCPs are not just important, they really are what make AI useful at Uber.” That’s a company with thousands of engineers and some of the most demanding production requirements on the planet. MCP isn’t optional anymore. It’s infrastructure.
But infrastructure has to be production-grade. And right now, MCP isn’t. The protocol is evolving fast — the team is shipping improvements to transport, session management, and metadata discovery. But the spec alone doesn’t solve the problem. The server implementations have to be production-grade too.
Some vendors have already decided the gap is too wide. Perplexity moved away from MCP entirely, citing production reliability concerns. That’s a legitimate choice when you’re building a consumer-facing product that can’t tolerate the rough edges.
For database access, though, moving away from MCP means losing the single biggest advantage the protocol offers: a standard way for AI agents to discover and interact with structured data. Every major AI provider supports MCP. Every major agent framework supports MCP. Walking away from that ecosystem is expensive.
The better answer is to use MCP servers that have already solved the production problems. Not servers that expose the protocol’s rough edges. Servers that absorb them.
What Faucet Doesn’t Solve
Honesty matters more than marketing. Here’s what Faucet doesn’t address:
- Multi-server orchestration. If you need a gateway that routes across dozens of MCP servers, Faucet isn’t that. It’s one server for one (or more) databases.
- Non-database tools. Faucet is a database MCP server. If your agents need to call Slack, GitHub, or Jira, you still need separate MCP servers for those.
- The protocol itself. If MCP’s transport model changes in a backward-incompatible way, everyone — including Faucet — has to adapt. But that’s the protocol team’s problem to solve, and they’re working on it.
What Faucet does solve is the specific, high-value use case that accounts for the majority of enterprise MCP deployments: giving AI agents structured, governed, auditable access to relational databases.
Getting Started
Install Faucet in 10 seconds:
curl -fsSL https://get.faucet.dev | sh
Point it at your database:
faucet serve --db "postgres://user:pass@localhost:5432/mydb"
You now have a REST API at localhost:8080 and an MCP server at localhost:8080/mcp. Add it to Claude Code:
claude mcp add my-database \
  --transport sse \
  --url http://localhost:8080/mcp
Set up RBAC before your security team asks:
# Create a scoped role
faucet role add ai-reader --description "Read-only agent access"
faucet role grant ai-reader mydb --tables customers,orders --verbs GET
faucet role deny-columns ai-reader mydb.customers --columns email,phone
# Generate a key
faucet apikey create --role ai-reader --name "my-agent"
MCP’s production problems are real. They’re being worked on. But you don’t have to wait for the protocol to mature to ship governed database access for your AI agents. The tools exist today. Use them.
Faucet is open source. GitHub. Docs. Star the repo if this is useful.