Date: April 30, 2026
Time: 10:00 – 11:00 AM PDT
AI applications are moving from demos to production — and the biggest challenge isn’t the model. It’s context. Agents reason over long conversation histories. RAG pipelines combine vector embeddings with structured metadata. Multi-agent systems share state across workflows. Behind all of these patterns is the same requirement: a scalable, reliable way to store and retrieve context.
Most teams try to solve this by stitching together a vector database, a relational database, and a caching layer. But this fragmented approach leads to data duplication, sync issues, and growing operational complexity — and it’s where many AI projects break down on the path to production. What AI applications actually need is a context platform: a unified operational data layer that supports vector search, relational queries, and real-time analytics in a single system.
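As a rough illustration of what "a single system" buys you, here is a minimal sketch of a hybrid-retrieval query that applies a relational filter and a vector-similarity ranking in one SQL statement. The table and column names (`docs`, `embedding`, `tenant_id`) are illustrative assumptions, and `VEC_COSINE_DISTANCE` is TiDB's cosine-distance function for `VECTOR` columns; in a fragmented stack, the same lookup would span a vector store plus a relational database and require keeping the two in sync.

```python
def hybrid_retrieval_sql(tenant_id: int, top_k: int = 5) -> str:
    """Build one SQL statement that filters by tenant (relational)
    and ranks by embedding similarity (vector) in the same query.

    Assumes an illustrative `docs` table with an `embedding` VECTOR
    column; `%(query_vec)s` is a placeholder for the query embedding,
    to be bound by the database driver at execution time.
    """
    return f"""
        SELECT doc_id, content,
               VEC_COSINE_DISTANCE(embedding, %(query_vec)s) AS distance
        FROM docs
        WHERE tenant_id = {tenant_id}   -- relational filter
        ORDER BY distance               -- vector similarity ranking
        LIMIT {top_k}
    """


# Example: build the query for one tenant's top-3 matches.
print(hybrid_retrieval_sql(tenant_id=42, top_k=3))
```

Because filtering and ranking happen in one statement, there is no cross-store result merging and no duplicated copy of the metadata to keep consistent.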
In this session, we will:
- Define the context platform and explain why it’s becoming the critical infrastructure layer for AI applications — from RAG to multi-agent orchestration.
- Show how TiDB’s converged architecture handles vector search, relational queries, and real-time analytics in a single system, eliminating the need for a fragmented data stack.
- Walk through real-world patterns from companies using TiDB as their operational data layer for AI, including persistent agent context, hybrid retrieval, and tenant-level isolation.
- Demonstrate how TiDB Cloud’s instant branching capability lets teams prototype and scale AI workloads without over-provisioning.
Speakers:

Li Shen, Technology Evangelist, PingCAP
Li Shen is a technology evangelist and founding engineer at PingCAP, the company behind TiDB. He is a maintainer of several popular open-source projects, including TiDB and TiKV, a distributed transactional key-value store and a CNCF graduated project. Li has extensive experience in data infrastructure, software architecture design, and cloud computing.