OpenAI Shipped 'Approved Tools Only' for Agents. Your Database Layer Is Next.

OpenAI's April 15 Agents SDK update adds an in-distribution harness that limits agents to approved tools inside a workspace. Microsoft's Agent Framework 1.0 did the same thing a day earlier. The industry is converging on scoped, governed agent access — and it only works if the database tier is scoped too.

On April 15, 2026, OpenAI shipped an update to its Agents SDK that quietly changed how enterprises will deploy agents for the next year. The headline feature is a new in-distribution harness that lets agents work with files and a predefined set of approved tools inside a workspace — nothing else. Not the open internet. Not whatever MCP servers happen to be registered globally. A fixed, admin-scoped set.

A day earlier, Microsoft shipped Agent Framework 1.0 with stable APIs, long-term support, and full MCP built in, plus a browser-based DevUI that shows every tool call an agent makes in real time.

Two of the three biggest agent platforms on earth shipped tool-scoping primitives in the same 48 hours. That is not a coincidence. That is an industry admitting, in unison, that unrestricted tool access was never going to survive contact with a Fortune 500 security review.

The catch: scoped tool access at the agent runtime does nothing if the tool — the database connector, the API, the internal service — is still a firehose underneath. The governance boundary has moved. Your data layer has to move with it.

What actually changed in the OpenAI Agents SDK

The new SDK ships three things worth reading closely.

First, the in-distribution harness. This is the piece most teams will underestimate. Historically, when you built an agent on OpenAI’s models, “tools” meant whatever functions you registered in code at request time. The model would happily call any of them. If a junior dev wired delete_user into the same agent that answers FAQs, nothing stopped a jailbreak from finding it. The harness changes that: tools now live inside a workspace-scoped manifest, reviewed and signed off by an admin, and the model is trained to refuse calls to anything not in the active manifest.

Second, file-scoped workspaces. Agents can only read and write files that have been explicitly mounted into their workspace. No more “the agent accidentally summarized the wrong tenant’s contract because both PDFs were in the same S3 bucket.”

Third, approved tools with side-effect tiers. Each tool in the manifest declares whether it’s read-only, write, or destructive, and the harness can require a human approval step for anything above read-only before it executes. This mirrors the tool annotations proposal in MCP that shipped in the v2.1 spec earlier this quarter.
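
To make the manifest-plus-tiers idea concrete, here is a minimal sketch in Python. The `Tool`, `Manifest`, and `authorize` names are illustrative, not the OpenAI SDK's actual API; the point is the shape of the check: membership in an admin-approved set, plus a human-approval gate for anything above read-only.

```python
from dataclasses import dataclass

# Hypothetical sketch of a workspace tool manifest with side-effect tiers.
# These names are illustrative, not the real SDK surface.

TIERS = ("read-only", "write", "destructive")

@dataclass(frozen=True)
class Tool:
    name: str
    tier: str  # one of TIERS

class Manifest:
    """Admin-approved tool set for one workspace."""
    def __init__(self, tools):
        self.tools = {t.name: t for t in tools}

    def authorize(self, tool_name, approved_by_human=False):
        tool = self.tools.get(tool_name)
        if tool is None:
            # Not in the active manifest: the call is refused outright.
            raise PermissionError(f"{tool_name} is not in the active manifest")
        # Anything above read-only requires an explicit human approval step.
        if tool.tier != "read-only" and not approved_by_human:
            raise PermissionError(f"{tool_name} ({tool.tier}) needs human approval")
        return tool

manifest = Manifest([
    Tool("get_customer_orders", "read-only"),
    Tool("update_order_status", "write"),
])

manifest.authorize("get_customer_orders")                          # allowed
manifest.authorize("update_order_status", approved_by_human=True)  # allowed
# manifest.authorize("delete_user")  -> PermissionError: not in manifest
```

The refusal for unlisted tools is the harness behavior; in the real SDK it is enforced by training and runtime checks rather than a Python wrapper, but the contract is the same.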

If you squint, all three of these are the same feature in different clothing: least privilege, applied to agents the same way it has been applied to humans for thirty years.

Why the timing is not coincidental

Three data points from the last two weeks explain the convergence:

  • Lucidworks launched an enterprise MCP server on April 8 with a stated goal of reducing AI integration timelines by 10x and saving enterprises more than $150,000 per integration. That is marketing copy, but the pitch under it is that integration work today is security review work. The actual code is quick. The review is what takes months.
  • Codenotary launched AgentMon in the same window — a monitoring tool that tracks “agent behavior, file access, and data patterns” specifically so enterprises can spot data leaks and unauthorized access after the fact.
  • MCP v2.1 client parity landed this week. Claude Desktop and Cursor both shipped full v2.1 support, which means OAuth 2.1, dynamic client registration, and scoped consent flows are now table stakes across every major agent client.

You do not ship monitoring products for a problem that does not exist. You do not add scoped consent flows to a spec that is not being deployed in hostile environments. The enterprise rollout is happening, and the control surface is the tool layer.

The OpenAI SDK update is the biggest model provider telling enterprise buyers: we heard you, we are going to enforce this at the harness level, please stop blocking our deals.

The database is the last hop, and the weakest one

Here is where it gets uncomfortable for most stacks.

The standard enterprise agent architecture in 2026 looks roughly like this:

User
 ↓
Agent runtime (OpenAI SDK / Microsoft Agent Framework / custom)
 ↓  tool call (scoped: yes)
MCP server / REST API
 ↓  SQL (scoped: usually not)
Database (access: full, or close to it)

The top two hops have been hardened. The bottom one, for most teams, has not.

A typical enterprise MCP server in production today connects to its database with a single service account that has read access to the entire schema. Sometimes it has write access too. The scoping that happens at the agent layer — “this agent can only call get_customer_orders” — is a contract, not an enforced boundary. If an attacker or a buggy agent tricks that tool into running an unbounded query, the database serves it. There is no second line of defense.

The production incidents from Q1 2026 bear this out. The pattern is always the same: agent gets scoped tool access, tool has a schema-wide connection, SQL injection or prompt injection turns a scoped tool call into a schema-wide read. The agent harness did its job. The database tier was never asked to do anything.
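
The whole pattern fits in a few lines. This sketch uses SQLite as a stand-in for the schema-wide connection: the tool's contract is "orders for one customer," but string-built SQL plus an injected argument reads a table the tool was never supposed to touch. The second half shows the missing defense: the database connection itself denying reads outside the tool's scope (here via SQLite's authorizer hook; in production this would be a restricted role or RLS).

```python
import sqlite3

# A "scoped" tool over a schema-wide connection. Table names are invented
# for the demo.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT);
    CREATE TABLE api_keys (secret TEXT);
    INSERT INTO orders VALUES (1, 'acme');
    INSERT INTO api_keys VALUES ('sk-live-very-secret');
""")

def get_customer_orders(customer: str):
    # The contract says "orders for one customer" -- nothing enforces it.
    return db.execute(
        f"SELECT id FROM orders WHERE customer = '{customer}'"
    ).fetchall()

# A prompt-injected argument turns the scoped call into a schema-wide read.
leak = get_customer_orders("x' UNION SELECT secret FROM api_keys --")
print(leak)  # [('sk-live-very-secret',)]

# Second line of defense: the connection itself denies reads outside scope.
def authorizer(action, table, column, dbname, source):
    if action == sqlite3.SQLITE_READ and table != "orders":
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

db.set_authorizer(authorizer)
try:
    get_customer_orders("x' UNION SELECT secret FROM api_keys --")
except sqlite3.DatabaseError as e:
    print("database refused:", e)  # access to api_keys is prohibited
```

With the authorizer in place the legitimate query still works; only the out-of-scope read fails. That is the difference between a contract and an enforced boundary.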

If the industry is converging on “approved tools only” at the agent layer, then at the database layer the question becomes: can the tool even do what it is now forbidden to be asked to do? If the answer is yes, you have defense in depth in name only.

What scoped database access actually looks like

The shape of the solution is well-understood. It is the same shape every enterprise data team has been building since the 1990s, just wired to respect agent identity instead of (or in addition to) human identity:

  1. Per-role connection strings. The agent’s tool gets a database connection scoped to its role, not the service account. Reads are reads. Writes require a write role. Destructive operations require a destructive role, which most agents never hold.
  2. Row-level and column-level filters applied at the data layer. If an agent is scoped to tenant_id = 47, that filter is enforced by the database or the API in front of it, not by the agent trusting itself to include the WHERE clause.
  3. Audit logging keyed to agent identity. Every query carries the agent’s identity, the tool it was invoked from, and the user or workflow that triggered it. When AgentMon flags an anomaly, you can trace it back to a specific call in a specific workspace.
  4. Tool-aware rate limits. A read-only FAQ tool does not need to execute 10,000 queries per minute. A backfill job does. They should not share a budget.
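
Items 1 and 2 can be sketched together. This uses SQLite as a stand-in (table and view names are invented): the agent's "role" is a read-only connection plus a view pinned to one tenant. SQLite has no GRANT system, so in Postgres you would additionally revoke the base table from the role or enable row-level security; the shape of the enforcement is the same.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

# Admin side: schema, data, and a view pinned to tenant 47.
admin = sqlite3.connect(path)
admin.executescript("""
    CREATE TABLE orders (id INTEGER, tenant_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 47, 10.0), (2, 48, 99.0);
    CREATE VIEW orders_tenant_47 AS
        SELECT id, total FROM orders WHERE tenant_id = 47;
""")
admin.commit()
admin.close()

# Agent side: a read-only connection. The tenant filter lives in the
# database, not in a WHERE clause the agent promises to include.
agent = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(agent.execute("SELECT * FROM orders_tenant_47").fetchall())  # [(1, 10.0)]

try:
    agent.execute("DELETE FROM orders")  # the agent never holds a write role
except sqlite3.OperationalError as e:
    print("refused:", e)  # attempt to write a readonly database
```

The write fails at the database, regardless of what the agent or its tool wrapper intended. That is what "reads are reads" means when it is enforced rather than promised.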

None of this is new. What is new is that the agent layer now expects it to exist. When OpenAI’s harness tells an admin “this tool is read-only,” and that admin checks the box, the admin is making a promise that only the database can actually keep.

How Faucet fits this model

Faucet was built with scoped agent access as a first-class concern, not a retrofit. A few things map directly onto what OpenAI and Microsoft shipped this week:

Roles scoped per table, per operation. When you generate an API with Faucet, every table gets auto-generated CRUD endpoints, and every endpoint is gated by a role. The service account that an agent’s MCP server uses does not have to be the database superuser — it can be a role that only sees orders for a specific tenant, only with SELECT, and only with a rate limit.

# Start Faucet on a Postgres database
faucet connect postgres \
  --dsn "postgres://readonly:***@db.internal:5432/app" \
  --service customer-api

# Create an agent-scoped role
faucet role create agent-faq \
  --permissions "SELECT:faqs,SELECT:articles" \
  --rate-limit "100/minute"

# Generate an API key bound to that role
faucet key create --role agent-faq --name "openai-faq-agent"

The resulting API key, when handed to an OpenAI agent as the bearer token for its approved tools, cannot grant the agent anything the role does not permit. If the tool manifest says “read-only,” the database connection is read-only, not just the tool wrapper.

MCP server with the same role model. Faucet’s built-in MCP server uses the same roles. A single binary generates both a REST API and an MCP server from the same database, and both go through the same policy engine. When Claude Desktop or Cursor connects via MCP v2.1 and presents a scoped token, the tools exposed to the agent are the tools its role can actually execute. No more, no less.

# Expose the same role over MCP
faucet mcp serve --port 8723 --role agent-faq

Claude Desktop picks this up as an MCP server, negotiates OAuth 2.1, and lists only the tools the role is scoped to. The agent does not have to be trusted to respect the scoping; the scoping is enforced at two layers.

Audit trail keyed to the agent, not the service account. Every request Faucet serves is logged with the API key, the role, the table touched, the operation, and the result size. When AgentMon or your internal SIEM flags a spike, the evidence is already in Faucet’s audit log, keyed to the specific agent identity.

Single binary, embedded UI, no microservice sprawl. The entire policy engine, role model, audit log, REST API, and MCP server ship as one Go binary. There is nothing to deploy alongside your agent infrastructure — Faucet goes between the agent and the database, and nothing else changes.

The practical checklist for this quarter

If you are running agents in production on OpenAI’s SDK, Microsoft Agent Framework, or any MCP-compatible client, this is the work list for the next 90 days:

  1. Inventory the connection strings. Find every database connection your agent tools hold. Most teams discover at least one that has far more privilege than the tool needs.
  2. Split the service account. At minimum, separate read-only from read-write. Ideally, generate one role per tool, scoped to the tables and operations that tool actually uses.
  3. Wire the agent identity into your audit log. You want to be able to answer “which agent ran this query” in less than thirty seconds when compliance asks.
  4. Match tool annotations to database permissions. If the MCP tool is annotated readOnlyHint: true, the role behind it must actually be read-only. Do not rely on the agent to enforce it.
  5. Test with a malicious prompt. Pick your highest-stakes agent, feed it a prompt that tries to exfiltrate another tenant’s data, and confirm the database — not just the agent — refuses.
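
Item 4 is mechanical enough to automate in CI. Here is a hedged sketch: the tool and grant shapes are invented for illustration (the `readOnlyHint` annotation is from the MCP spec, but the rest is not a real MCP or Faucet API), and the check is simply "every read-only-annotated tool sits on a role holding only SELECT grants."

```python
# Hypothetical cross-check for checklist item 4. Tool and role shapes are
# illustrative, not a real MCP or Faucet data model.

tools = [
    {"name": "get_faqs", "annotations": {"readOnlyHint": True}, "role": "agent-faq"},
    {"name": "update_order", "annotations": {"readOnlyHint": False}, "role": "agent-orders-rw"},
]
role_grants = {
    "agent-faq": {"SELECT:faqs", "SELECT:articles"},
    "agent-orders-rw": {"SELECT:orders", "UPDATE:orders"},
}

def mismatches(tools, role_grants):
    """Tools annotated read-only whose role holds more than SELECT."""
    bad = []
    for tool in tools:
        if tool["annotations"].get("readOnlyHint"):
            grants = role_grants[tool["role"]]
            if any(not g.startswith("SELECT:") for g in grants):
                bad.append(tool["name"])
    return bad

print(mismatches(tools, role_grants))  # [] -- annotations match the grants
```

Run this against your real manifest and grant inventory, fail the build on a non-empty result, and item 4 stops depending on anyone's memory.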

This is not glamorous work. It is also the work that determines whether your agent deployment gets blessed by security review or sent back for another quarter.

The convergence is the story

Individually, the OpenAI SDK update, the Microsoft Agent Framework 1.0 release, the MCP v2.1 client parity, the Lucidworks enterprise MCP launch, and the AgentMon monitoring tool each look like ordinary product news.

Together, they are a single signal: the control plane for agents is settling on scoped tools and scoped identities, and the tooling vendors are competing on how cleanly they enforce it.

Whoever owns the database layer gets to decide whether that enforcement is real or theatrical. A tool manifest that says “read-only” and an agent runtime that refuses writes are both easy to bypass if the service account under the tool can write the whole schema. The only place scoping is actually enforceable is at the query surface.

That is the problem Faucet exists to solve, and that is why this week’s news moves the timeline forward for everyone building agents that touch production data.

Getting Started

One binary. Any Postgres, MySQL, SQL Server, Oracle, Snowflake, or SQLite database. REST API and MCP server scoped to roles, out of the box.

curl -fsSL https://get.faucet.dev | sh

# Connect to your database
faucet connect postgres --dsn "postgres://user:pass@host:5432/db"

# Create an agent-scoped role
faucet role create my-agent --permissions "SELECT:public.*" --rate-limit "1000/hour"

# Start the server (REST + MCP together)
faucet serve --port 8080

Your agent now has exactly the access you granted. Nothing more. That is how the next year of enterprise agent deployment is going to work — and it starts with the database tier catching up to where the agent layer already is.