As AI systems become more complex, developers are discovering that the database, not the model, is the real foundation of reliable AI. In this talk, I’ll explore how Postgres can function as a full AI application server by combining Retrieval-Augmented Generation (RAG) with the Model Context Protocol (MCP).
We’ll walk through a real implementation: ingest pipelines, vector search, metadata ranking, caching, provenance tracking, and LLM tool-calling, all powered by Postgres. Then we’ll expose those capabilities over MCP so LLMs can safely query, transform, and orchestrate data.
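To make the retrieval step concrete, here is a minimal in-memory sketch of the kind of vector search the talk covers. This is an illustration, not the talk's actual code: a real Postgres implementation would use the pgvector extension's distance operators (shown in the comment), and the corpus, embeddings, and function names below are hypothetical.

```python
# Illustrative in-memory sketch of vector-search retrieval.
# A real Postgres implementation would use pgvector, e.g.:
#   SELECT id, content FROM docs
#   ORDER BY embedding <=> $1   -- cosine distance operator
#   LIMIT 3;
# Embeddings here are toy 3-d vectors; in practice they come from
# an embedding model and live in a vector(N) column.
import math

def cosine_distance(a, b):
    """Same metric as pgvector's <=> operator: 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Hypothetical corpus: (id, content, embedding)
docs = [
    (1, "Postgres vector search with pgvector", [0.9, 0.1, 0.0]),
    (2, "Caching strategies for LLM responses", [0.1, 0.9, 0.1]),
    (3, "Provenance tracking in data pipelines", [0.0, 0.2, 0.9]),
]

def top_k(query_embedding, k=2):
    """Rank documents by cosine distance to the query, ascending."""
    ranked = sorted(docs, key=lambda d: cosine_distance(query_embedding, d[2]))
    return [(doc_id, content) for doc_id, content, _ in ranked[:k]]

results = top_k([1.0, 0.0, 0.1])
print(results[0])  # the closest document to the query embedding
```

Metadata ranking and caching layer on top of this same query shape: extra `WHERE` clauses and score columns in SQL, and a results table keyed by query hash.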
The result: an end-to-end AI system where your RAG pipeline, your tools, your transforms, your logs, and your automation all live in Postgres.
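The MCP exposure mentioned above comes down to advertising each Postgres-backed capability as a tool definition. Per the MCP specification, a tool is described by a name, a description, and a JSON Schema for its input; the sketch below shows that shape, with a hypothetical tool name and parameters.

```python
# Sketch of how a Postgres-backed capability might be advertised as an
# MCP tool. The tool name and parameters here are hypothetical; only
# the overall shape (name / description / inputSchema) follows the spec.
import json

rag_search_tool = {
    "name": "rag_search",  # hypothetical tool name
    "description": "Semantic search over documents stored in Postgres via pgvector.",
    "inputSchema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Natural-language query"},
            "limit": {"type": "integer", "default": 5},
        },
        "required": ["query"],
    },
}

# An MCP server returns a list of such definitions in response to a
# tools/list request; the LLM then issues tools/call with arguments
# matching the schema, and the server runs the corresponding SQL.
print(json.dumps(rag_search_tool, indent=2))
```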