
82% of Organizations Are API-First — Except for Their Databases

The API-first revolution transformed microservices and mobile backends. But most databases still hide behind custom ORMs and raw SQL. With 354+ APIs per enterprise and 70% of web traffic flowing through APIs, it's time databases caught up.

Eighty-two percent of organizations have adopted an API-first development approach in 2026. The average enterprise manages 354+ APIs. Over 70% of web traffic in large enterprises now flows through API endpoints. GraphQL adoption in enterprise environments has grown 340% since 2023. REST plus OpenAPI is the unquestioned default for public and partner-facing interfaces.

These numbers describe a world where APIs won. Every meaningful system interaction — service-to-service, mobile-to-backend, partner integration, AI agent orchestration — happens through structured, documented, version-controlled API endpoints.

Except for one thing. The most critical data stores in every organization — relational databases — still don’t have APIs in front of them. They’re accessed through hand-rolled ORMs, raw SQL strings embedded in application code, ad-hoc Python scripts, and the occasional stored procedure that nobody wants to touch.

This is the gap. And it’s enormous.

How We Got Here

The API-first movement started where the pain was sharpest: microservices. When you decompose a monolith into 50 services, those services need to talk. HTTP APIs with JSON payloads became the lingua franca. OpenAPI specs became the contracts. API gateways became the control plane.

Mobile backends were next. Every iOS and Android app needs a backend API. The tooling matured — Swagger (now OpenAPI), Postman, API management platforms. Entire companies were built around the idea that APIs are products.

Third-party integrations followed. Stripe, Twilio, SendGrid — every successful SaaS company built its business on a well-documented REST API. If your API was bad, developers picked a competitor. API quality became a competitive advantage.

GraphQL pushed the boundaries further. That 340% enterprise adoption growth since 2023 reflects teams demanding more flexibility in how they query data across services. Federation, subscriptions, schema stitching — the tooling got serious.

But through all of this, databases sat untouched. The PostgreSQL instance with your customer data? Still accessed through a custom ORM in your Python service. The MySQL database powering your inventory system? Raw queries in a Go service. The SQL Server warehouse your analytics team depends on? A shared read-only credential and whatever query tool the analyst prefers.

No OpenAPI spec. No versioned endpoints. No access control at the API layer. No documentation beyond whatever comments exist in the application code.

The Numbers Behind the Gap

The database automation market tells the story from the demand side. It was valued at $2.35 billion in 2025 and is projected to reach $13.5 billion by 2032 — a 28.4% CAGR. Sixty-eight percent of enterprises have already adopted some form of database automation, whether that’s schema migration tools, automated backups, or query optimization.

But “database automation” in most organizations means DevOps automation. Provisioning, scaling, backup, restore. The plumbing. Not the interface.

The broader AI automation market, worth $169.46 billion in 2026, adds a related pressure. Intelligent automation delivers a 330% ROI over three years. But that ROI depends on automation having structured access to data. And structured access to data means APIs.

Here’s where the math gets uncomfortable. If your enterprise manages 354+ APIs, and those APIs cover your microservices, your mobile backends, your partner integrations, your payment processing, your notification systems, your authentication flows — how many of those APIs provide structured access to your actual database tables?

For most organizations: zero. Maybe one, if someone built a custom data service last year that covers a subset of tables. The data itself — the rows and columns that all those other APIs ultimately depend on — is locked behind application-specific access layers that each team builds from scratch.

What a Custom API Layer Actually Costs

Let’s be concrete about what “building a custom API layer” means in practice. Say you have a PostgreSQL database with 40 tables. Your team needs REST endpoints for CRUD operations, filtering, pagination, and sorting. You need OpenAPI documentation. You need basic access control.

Here’s what that looks like in a typical stack:

Week 1-2: Choose a framework. Express? FastAPI? Gin? Set up the project structure, ORM configuration, database connection pooling, error handling middleware. Write the first few endpoints by hand to establish patterns.

Week 3-4: Grind through the remaining tables. Each one needs create, read (single + list), update, delete. Each list endpoint needs filtering, sorting, pagination. Each write endpoint needs validation. You’re writing variations of the same code 40 times.

Week 5-6: OpenAPI spec. Either you hand-write it (tedious, drifts immediately) or you use code-first generation (adds a dependency, still needs review). Write tests. Set up CI. Deploy.

Week 7+: Maintenance. Schema changes require API changes. New tables need new endpoints. Column additions need validation updates. The ORM has a bug with a specific join pattern. A pagination query is slow on one table because the ORM generates a suboptimal query plan.

Six weeks for a developer — conservatively $30,000-$50,000 in loaded cost — to build something that works for one database, covers one project’s needs, and requires ongoing maintenance. Multiply by every team, every database, every project.

This is what 82% of organizations are doing. They’ve gone API-first for everything except the thing that matters most.

What Faucet Does Instead

Faucet points at your database and stands up a full REST API instantly. No code generation. No scaffolding. No ORM. It reads your schema at startup and serves typed, documented endpoints in real time.

curl -fsSL https://get.faucet.dev | sh
faucet serve --db "postgres://user:pass@localhost:5432/mydb"

Two commands. Your database now has a complete REST API with CRUD operations, filtering, pagination, sorting, and an auto-generated OpenAPI 3.1 spec.

Here’s what that looks like in practice. Say you have an orders table:

CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  customer_id INTEGER REFERENCES customers(id),
  status VARCHAR(20) DEFAULT 'pending',
  total DECIMAL(10,2),
  created_at TIMESTAMP DEFAULT NOW()
);

Faucet introspects that schema and immediately serves these endpoints:

List orders with filtering and pagination:

curl "http://localhost:8080/api/mydb/orders?status=shipped&_limit=25&_offset=0&_sort=-created_at"
{
  "data": [
    {
      "id": 1042,
      "customer_id": 87,
      "status": "shipped",
      "total": 249.99,
      "created_at": "2026-04-07T14:23:11Z"
    },
    {
      "id": 1038,
      "customer_id": 12,
      "status": "shipped",
      "total": 89.50,
      "created_at": "2026-04-07T11:05:33Z"
    }
  ],
  "total": 847,
  "limit": 25,
  "offset": 0
}
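The total, limit, and offset fields in that envelope are all a client needs to drive pagination itself. As a quick sanity check of the arithmetic using the numbers from the response above (847 rows, 25 per page), ceiling division gives the page count:

```shell
# How many pages of 25 cover 847 rows? (ceiling division)
total=847
limit=25
pages=$(( (total + limit - 1) / limit ))
echo "pages=$pages"

# Offset of the final page: 33 full pages precede it
last_offset=$(( (pages - 1) * limit ))
echo "last_offset=$last_offset"
```

A client loops from offset 0 to 825 in steps of 25 to walk the whole result set.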

Get a single record:

curl "http://localhost:8080/api/mydb/orders/1042"

Create a new order:

curl -X POST "http://localhost:8080/api/mydb/orders" \
  -H "Content-Type: application/json" \
  -d '{"customer_id": 87, "status": "pending", "total": 149.99}'

Update an existing order:

curl -X PUT "http://localhost:8080/api/mydb/orders/1042" \
  -H "Content-Type: application/json" \
  -d '{"status": "delivered"}'

Filter with operators:

# Orders over $100, created in the last 7 days
curl "http://localhost:8080/api/mydb/orders?total=gt.100&created_at=gte.2026-04-01"
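Delete an order, completing the CRUD set. The URL pattern here is extrapolated from the endpoints above; the response body is omitted:

```shell
curl -X DELETE "http://localhost:8080/api/mydb/orders/1042"
```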

Every one of these endpoints is live the moment Faucet starts. No code to write. No models to define. No routes to register.

OpenAPI: The Contract You Get for Free

The API-first philosophy isn’t just about having endpoints. It’s about having documented, machine-readable contracts that other systems can consume. That’s what OpenAPI provides — and it’s what custom API layers almost never maintain properly.

Faucet generates an OpenAPI 3.1 spec automatically from your database schema:

curl "http://localhost:8080/api/mydb/_openapi"

That returns a complete spec with:

  • Every table as a tagged resource group
  • Every column with its correct type, nullable flag, and constraints
  • Every endpoint with request/response schemas, parameter descriptions, and example values
  • Filter operators documented as query parameter enums
  • Pagination parameters with defaults and limits
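To make those bullets concrete, here is a hand-written sketch of what the spec entry for the orders list endpoint might look like. This is illustrative, not Faucet's actual output; the field names and the VARCHAR(20) constraint come from the schema shown earlier:

```yaml
paths:
  /api/mydb/orders:
    get:
      tags: [orders]
      summary: List orders with filtering, sorting, and pagination
      parameters:
        - name: status
          in: query
          schema: { type: string, maxLength: 20 }
        - name: _limit
          in: query
          schema: { type: integer }
        - name: _sort
          in: query
          schema: { type: string }
      responses:
        "200":
          description: Paginated list of orders
          content:
            application/json:
              schema:
                type: object
                properties:
                  data:
                    type: array
                    items: { $ref: "#/components/schemas/orders" }
                  total: { type: integer }
                  limit: { type: integer }
                  offset: { type: integer }
```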

You can feed this directly into Postman, import it into an API gateway, generate client SDKs with openapi-generator, or hand it to a frontend team so they know exactly what’s available. The spec stays accurate because it’s generated from the live schema — not from annotations that someone forgot to update three months ago.

# Generate a TypeScript client from Faucet's live OpenAPI spec
curl -s "http://localhost:8080/api/mydb/_openapi" > openapi.json
npx @openapitools/openapi-generator-cli generate \
  -i openapi.json \
  -g typescript-fetch \
  -o ./src/api-client

This is what API-first looks like for databases. Not hand-maintained Swagger files. Not code-first annotations that drift. A live spec that reflects reality.

Multi-Database, One Interface

The 354+ API number reflects a real organizational challenge: API sprawl. Every service has its own API, its own conventions, its own authentication model. Platform teams spend enormous effort trying to standardize.

Faucet addresses this for databases directly. One binary serves APIs for PostgreSQL, MySQL, SQL Server, Oracle, SQLite, and Snowflake — all through the same consistent interface.

# Register multiple databases
faucet connections add analytics \
  --driver postgres \
  --dsn "postgresql://readonly@analytics-db:5432/warehouse"

faucet connections add inventory \
  --driver mysql \
  --dsn "mysql://app:pass@inventory-db:3306/stock"

faucet connections add legacy \
  --driver sqlserver \
  --dsn "sqlserver://sa:pass@legacy-db:1433?database=erp"

faucet serve

Now every database has the same endpoint structure, the same filtering syntax, the same pagination model, the same OpenAPI spec format. A developer who knows how to query one Faucet endpoint knows how to query all of them. No context switching between different ORMs, different query builders, different API conventions.

# Same syntax, different databases
curl "http://localhost:8080/api/analytics/page_views?_limit=100&_sort=-view_count"
curl "http://localhost:8080/api/inventory/products?in_stock=true&_limit=50"
curl "http://localhost:8080/api/legacy/purchase_orders?status=approved&_sort=-order_date"

Three databases, three different engines, one consistent API. That’s the consistency that API-first promises but rarely delivers at the data layer.

RBAC: The Access Control Layer Databases Need

API-first without access control is just a different way to expose everything. Faucet includes role-based access control that operates at the API layer — finer-grained than database credentials, easier to manage than application-level authorization code.

# Create a read-only role for the analytics team
faucet roles create analyst \
  --allow "analytics.page_views:read" \
  --allow "analytics.sessions:read" \
  --deny "analytics.users:email,phone"

# Create an API key bound to that role
faucet apikeys create --role analyst --name "analytics-dashboard"

Now the analytics dashboard can read page views and sessions, but it can’t see email addresses or phone numbers in the users table. The access control is at the API layer, documented in the OpenAPI spec, and auditable through request logs. No changes to database credentials. No application code to review.

The Real Cost of Not Having This

Back to the numbers. The database automation market is growing at 28.4% CAGR because organizations are recognizing that manual database management doesn’t scale. The AI automation market delivers 330% ROI because structured, programmatic access to data unlocks automation that manual processes can’t match.

But those returns only materialize when the data is accessible through APIs. An AI agent can’t call a stored procedure through a connection string it doesn’t have. A workflow automation tool can’t query your database without an HTTP endpoint. A partner integration can’t pull order data without a documented API.

The 82% of organizations that went API-first made a bet that structured, documented, version-controlled interfaces are better than ad-hoc access. They were right — for microservices, mobile backends, and third-party integrations.

The bet hasn’t been applied to databases yet. Not because the logic is different, but because the tooling didn’t exist. Building a custom API layer for every database was too expensive, too slow, and too maintenance-heavy to justify — especially when the database “worked fine” with direct connections.

That calculus changes when you can do it in two commands.

Getting Started

Install Faucet and point it at any database. You’ll have a full REST API with OpenAPI documentation in under 30 seconds.

# Install Faucet
curl -fsSL https://get.faucet.dev | sh

# Start serving your database as a REST API
faucet serve --db "postgres://user:pass@localhost:5432/mydb"

# Or MySQL
faucet serve --db "mysql://user:pass@localhost:3306/mydb"

# Or SQL Server
faucet serve --db "sqlserver://sa:pass@localhost:1433?database=mydb"

Your database is now API-first. OpenAPI spec at /api/&lt;db&gt;/_openapi. Every table has CRUD endpoints with filtering, pagination, and sorting. No code to write. No models to maintain. No six-week project.

The rest of your organization already went API-first. Your databases should too.

Faucet is open source. Check out the GitHub repo, read the docs, or install with brew install faucetdb/tap/faucet.