Databases for Machines, Not People

Luke Wroblewski on Agentic Databases, Flipped Development, and the New AI Stack.

Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.

Luke Wroblewski of Sutter Hill Ventures joins the podcast to discuss the paradigm shift in building AI applications. He explains how the traditional model of “code + database” is being replaced by “URL + model,” where the AI agent itself becomes the application logic. Luke dives into AgentDB, a database system designed for the scale and needs of AI agents, and explores how this new reality is flipping the software development lifecycle, placing building before designing. [This episode originally aired on Generative AI in the Real World, a podcast series I’m hosting for O’Reilly.]

Subscribe to the Gradient Flow Newsletter



Transcript

Below is a heavily edited excerpt, in Question & Answer format.

The New AI Application Paradigm

You’ve said that “a URL and a model is an app.” What exactly does this mean for teams building AI applications?

We’re experiencing a fundamental shift in application architecture. Traditional applications consist of running code that handles logic and interfaces, connected to a database for information storage. In the new paradigm, you essentially need just a URL (like an MCP server endpoint) that acts as an API to a data source, and an AI model that becomes the application itself. The model handles what the running code traditionally did – it can generate interfaces, handle logic, render outputs, and interact directly with data.

This is similar to previous platform shifts. Just like early websites looked nothing like today’s robust web applications, or how we laughed at the idea of running full desktop applications in browsers pre-Google Maps, we’re in the early phases where these AI-based applications may seem less sophisticated. But the definition of what constitutes an “application” is fundamentally changing.

How does this change the traditional split between running code and databases?

AI coding agents have become incredibly proficient at handling the “running code” part – they can write, debug, fix, and iterate on code rapidly. The bottleneck has shifted to the database layer. Most existing databases were designed for human operators with human-scale provisioning needs. They require configuration decisions about regions, compute, security settings, and are accessed through web forms and dashboards.

Once we have databases that work the way agents think and operate, you can almost eliminate the traditional coding layer entirely. The model can take on the role of the running code, directly querying and manipulating data without the need for hand-written application logic.

AgentDB: Infrastructure for Agent-First Development

What specific problem does AgentDB solve for AI development teams?

Traditional databases assume human operators and human scale. But we’re entering a world where agents need to create thousands of databases per day, often per task or per user. Databricks recently shared that they believe 99% of databases will be created by agents in the future. AgentDB is designed specifically for this machine-to-machine interaction at massive scale.

The core characteristics that make it work for agents include:

  • Instant, configuration-free creation: You only need a unique ID to create a database – no decisions about compute, regions, or security settings
  • File-based architecture: Each database is stored as a file, enabling filesystem-level scaling (scaling storage instead of compute) and providing full isolation
  • Context-rich templates: Every database comes with a template that provides AI models with everything they need – schema descriptions, sample queries, migration guidance, and formatting details

What’s the practical advantage of the template system for teams building AI applications?

Without templates, when you point an AI model at a traditional database, it has to burn tokens and time inferring the schema, understanding constraints, figuring out date formatting, and learning the structure before it can write a query. This process is error-prone and happens repeatedly in every session.

With AgentDB’s templates, models can write correct, functional queries on the first attempt. This dramatically reduces latency, errors, and token consumption. The template bundles all the context – schema, descriptions, sample queries, migration guidance – that a model needs to be productive immediately.
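A template bundle might look something like the following sketch – the field names and structure here are hypothetical, not AgentDB's actual format – with the point being that everything the model would otherwise have to infer is handed to it up front:

```python
# Hypothetical template bundle -- the real AgentDB format may differ.
template = {
    "schema": "CREATE TABLE leads (id INTEGER PRIMARY KEY, name TEXT, region TEXT, score REAL);",
    "description": "Inbound sales leads. 'score' is 0.0-1.0; higher is warmer.",
    "date_format": "ISO 8601 (YYYY-MM-DD)",
    "sample_queries": [
        "SELECT name FROM leads WHERE region = 'EMEA' ORDER BY score DESC;",
    ],
    "migration_notes": "Add columns with ALTER TABLE; never drop 'id'.",
}

def build_system_prompt(template: dict) -> str:
    """Prepend the template to the model's context so its first query
    can be correct, instead of tokens being spent probing the schema."""
    samples = "\n".join(template["sample_queries"])
    return (
        "You are querying a SQLite database.\n"
        f"Schema:\n{template['schema']}\n"
        f"Notes: {template['description']} Dates: {template['date_format']}\n"
        f"Example queries:\n{samples}\n"
        f"Migrations: {template['migration_notes']}"
    )

prompt = build_system_prompt(template)
```

The trade is a few hundred tokens of up-front context for the elimination of an error-prone discovery phase that would otherwise repeat in every session.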

Where does AgentDB fit in the database landscape – is it OLTP or OLAP?

AgentDB is designed for transactional workloads of applications, making it analogous to an OLTP system. It’s not intended for heavy analytics, logs, or data warehousing (OLAP) workloads. We currently support SQLite and DuckDB backends, which handle multiple tables and complex schemas well, though this isn’t designed for massive ERP systems with thousands of tables. The sweet spot is per-agent/per-task stores, lightweight CRMs, and personal or workflow data hubs.

Practical Implementation and Use Cases

Can you walk through concrete examples of what teams are building with AgentDB?

A simple but powerful use case starts with CSV files. Users upload CSVs – credit card statements, job trackers, lead spreadsheets – to agentdb.dev. The system instantly creates a database with a template and provides an MCP URL. They plug this into Claude, Cursor, or similar tools and immediately have a chat-based application for their data, asking complex questions their bank’s website could never answer.
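The core of that CSV flow – spreadsheet in, queryable data source out – can be sketched in a few lines of standard-library Python. The data and table name here are invented for illustration; AgentDB's own ingestion is not shown:

```python
import csv
import io
import sqlite3

# A toy statement export -- stand-in for a real CSV upload.
CSV_DATA = """date,merchant,amount
2024-01-03,Grocer,42.10
2024-01-05,Airline,310.00
2024-01-09,Grocer,17.35
"""

def csv_to_db(csv_text: str) -> sqlite3.Connection:
    """Turn a CSV into a queryable database: the seed of the
    'upload a spreadsheet, get a chat-ready data source' flow."""
    conn = sqlite3.connect(":memory:")
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    cols = ", ".join(f'"{c}" TEXT' for c in header)
    conn.execute(f"CREATE TABLE transactions ({cols})")
    placeholders = ", ".join("?" for _ in header)
    conn.executemany(f"INSERT INTO transactions VALUES ({placeholders})", data)
    return conn

conn = csv_to_db(CSV_DATA)
# The kind of question a bank's website rarely answers directly:
total = conn.execute(
    "SELECT SUM(CAST(amount AS REAL)) FROM transactions WHERE merchant = 'Grocer'"
).fetchone()[0]
print(total)
```

Once the data is in a database behind an MCP URL, the agent can write queries like this one itself, which is what turns the upload into a chat application.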

A more sophisticated example: A go-to-market professional combined multiple data sources – website visitor logs, GitHub repo stars, contact form submissions – into AgentDB. He then used Claude to search the web for information about leads, enrich the database, prioritize leads based on custom criteria, organize by region, and assign to sales reps. It’s essentially a customized, agent-driven CRM built from CSV files in minutes – an “agentic Airtable” highly customized to specific workflows.

What’s a pragmatic path for practitioners to get started?

  1. Pick a narrow workflow (triage inbound leads, reconcile expenses, track job applications)
  2. Start with a CSV and convert it to an AgentDB with template
  3. Wire an agent (Claude/Cursor/Playground) to the MCP URL and script initial queries and updates
  4. Close the loop: Log outcomes, review failures, and iterate on the template and prompts
  5. Add guardrails: Access control, TTLs, and deletion policies from day one
  6. Plan for graduation: Decide when to move analytics to a warehouse and which pieces warrant durable services versus per-task databases

The Flipped Software Development Process

You’ve said AI is “flipping” the software development process. What does this mean for engineering teams?

Traditional enterprise development front-loads design and planning because building is expensive and time-consuming. Teams draw pictures, write specifications, estimate engineering effort, and get approvals before writing code.

With AI coding agents, you can build functional prototypes incredibly fast. The manifestation of the technology becomes the starting point for design decisions rather than the end result. You build first, then think about integration, design, and whether it’s even worth launching. The expensive thinking and design work happens after a functional version exists, not before.

What are the organizational implications for design and engineering teams?

In startups, engineers are already building features and presenting working demos before design reviews. Design teams are shifting from designing everything upfront to refining and integrating functionalities that have already been built – doing more of a “wash” across multiple built features to ensure coherence.

The friction points haven’t disappeared; they’ve just moved. Before, engineers would discover technical hurdles trying to implement picture-perfect mockups. Now, designers face user experience and consistency challenges when integrating fully-built features into coherent products. Both approaches have trade-offs, but the barrier to creating functional prototypes has dropped dramatically.

Should designers learn to code in this new world? What about engineers’ roles?

The barrier to entry for coding is now so low that there’s less excuse for designers not to build interactive prototypes. Understanding the medium you’re designing for always leads to better outcomes.

For engineers, this is perhaps the most exciting time ever. The ability to create and build systems is unparalleled. The key is being open to new ways of working with AI tools rather than resisting change – similar resistance happened with every platform shift, like when people said no one would watch movies on mobile devices.

What remains uncommoditized is “taste” – knowing what “good” looks like in terms of design, aesthetics, and user experience. This is difficult to teach to either AI models or people, and remains a critical human skill.

Infrastructure, Security, and Governance

Beyond databases, what other infrastructure needs to be rebuilt for AI agents?

Most of our core infrastructure was designed for human consumption, not for large language models. Search APIs are a perfect example – they return 10 blue links or snippets, structured like a search results page for humans to pick from. But AI models can process entire websites worth of information at once.

An AI-native search API shouldn’t return snippets; it should return the content of entire websites with rich metadata. We need to go down the list of core services and rethink them from the ground up for a world where the primary user is a machine, not a human with human limitations.
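The contrast between the two payload shapes can be made concrete. Both structures below are hypothetical illustrations, not any real search API's response format:

```python
# Human-oriented result: a teaser built for a results page.
human_result = {
    "title": "AgentDB docs",
    "url": "https://example.com/docs",
    "snippet": "AgentDB is a database system for ...",  # ~160 chars, then a click
}

# Machine-oriented result (hypothetical shape): the whole document plus
# metadata, since a model can ingest it all in one pass.
agent_result = {
    "url": "https://example.com/docs",
    "content": "<entire page text, not a teaser>",
    "metadata": {
        "fetched_at": "2024-06-01T12:00:00Z",
        "content_type": "text/html",
        "outbound_links": ["https://example.com/docs/templates"],
        "word_count": 4230,
    },
}
```

The human shape optimizes for a person choosing one of ten options; the machine shape optimizes for a model that can consume all ten in full and synthesize across them.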

With agents creating thousands of databases, how should teams approach security and governance?

Every new technology introduces new attack vectors and challenges. This will be a cat-and-mouse game, similar to how email spam was once terrible but is now handled invisibly by Gmail. Teams need to actively implement:

  • Inventory and lineage: Track who/what created each database, its purpose, retention, and sensitivity level
  • Policy automation: Default TTLs, least-privilege access, and automatic teardown on task completion
  • Central evaluation loops: Instrument success metrics and error feedback across agents
  • Enterprise search fit for agents: Move beyond “10 blue links” to whole-corpus feeds with richer metadata

The isolated, file-based nature of AgentDB provides a different security model compared to large, monolithic databases, but vigilance is still required.

The Critical Role of Human Judgment

With agents building more agents and systems, how do you ensure quality and usefulness?

Simply deploying the technology is not enough. The real work is the continuous feedback and tuning loop. For example, I have a personal chatbot trained on my own writings, and the most important work isn’t the model or architecture – it’s constantly reviewing the questions people ask, evaluating the quality of answers, and course-correcting the system.

This process is about applying “taste” and defining what “good” looks like. Without this human-in-the-loop process of refinement, you just have a thousand agents running loose. To build truly great AI systems, someone needs to be constantly tweaking, tuning, and applying human perspective to guide the system toward higher quality and usefulness.

What’s the key takeaway for teams building AI applications today?

Treat models as first-class runtime and UI synthesizers, and treat databases as cheap, isolated, per-task state. Use templates to cut latency and errors. Shift your development process toward fast build-then-design cycles, but back it with strong governance, continuous evaluation, and human taste.

The entire stack is changing – lean into the parts that let you ship useful workflows quickly while you harden security and operations in parallel. Success isn’t just about deploying agents; it’s about continuously evaluating and improving their output to ensure they’re actually solving problems effectively.