
In the evolving landscape of conversational AI, Retrieval-Augmented Generation (RAG) has emerged as a powerful approach to enhance chatbot performance. By combining the strengths of retrieval and generation, RAG-based systems provide more accurate and contextually relevant responses. In this article, we will explore how to build a RAG-based chatbot using LlamaIndex and the vector search capability of TiDB, a MySQL-compatible distributed database.

Why RAG?

Retrieval-Augmented Generation (RAG) leverages both retrieval mechanisms and generative models. The retrieval mechanism fetches relevant documents or data snippets in response to a query, while the generative model creates human-like responses based on the retrieved information. This combination ensures that the chatbot has access to precise data and can generate coherent and contextually appropriate responses.
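The retrieve-then-generate loop can be sketched in a few lines of plain Python. This is a toy illustration only: the word-overlap scoring and the `generate` stub are stand-ins for the vector similarity search and LLM call that the real system (shown later in this article) performs.

```python
import re

# Two tiny "documents" standing in for an indexed corpus.
documents = [
    "TiDB is a MySQL-compatible distributed SQL database.",
    "LlamaIndex builds indices over documents for retrieval.",
]

def _words(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs):
    """Return the document sharing the most words with the query
    (a stand-in for vector similarity search)."""
    return max(docs, key=lambda d: len(_words(query) & _words(d)))

def generate(query, context):
    """Stand-in for an LLM call: an answer grounded in retrieved context."""
    return f"Answering '{query}' using context: {context}"

print(generate("What is TiDB?", retrieve("What is TiDB?", documents)))
```

The key property this models: the generation step only ever sees text that retrieval fetched, which is what grounds the chatbot's answers in real data.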

Components of Our RAG-based Chatbot

  1. LlamaIndex: A library that facilitates the creation of indices for efficient retrieval.
  2. TiDB Vector Search: A MySQL-compatible, distributed database with advanced vector search capabilities.
  3. SimpleWebPageReader: A tool for loading and converting web page content into text format.
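The code later in this article configures TiDB's vector store with a cosine distance strategy. As background, cosine distance between two embedding vectors can be computed as follows; this is a minimal sketch of the metric itself, not TiDB's internal implementation:

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity.
    0.0 means the vectors point in the same direction (most similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)
```

Because it depends only on direction, not magnitude, cosine distance is a common choice for comparing text embeddings.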

Setting Up the Environment

Before we dive into the code, ensure you have the following environment variables set up:

  • TIDB_USERNAME: Your TiDB username.
  • TIDB_PASSWORD: Your TiDB password.
  • TIDB_HOST: The host address of your TiDB instance.
  • OPENAI_API_KEY: Your OpenAI API key.

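A missing variable usually surfaces later as an opaque connection or authentication error, so it can help to check for all four up front. The helper below is a hypothetical convenience, not part of the example repository:

```python
import os

# The four variables the example expects (names from this article).
REQUIRED_VARS = ["TIDB_USERNAME", "TIDB_PASSWORD", "TIDB_HOST", "OPENAI_API_KEY"]

def missing_vars(env, required=REQUIRED_VARS):
    """Return the names of required variables that are absent or empty."""
    return [name for name in required if not env.get(name)]

# Fail fast with a clear message instead of a cryptic error later.
missing = missing_vars(os.environ)
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```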

Running the Example

Clone the Repository

First, clone the repository containing the example code:

git clone
cd tidb-vector-python/examples/llamaindex-tidb-vector

Create a Virtual Environment

Next, create and activate a virtual environment:

python3 -m venv .venv
source .venv/bin/activate

Install Dependencies

Install the required Python packages:

pip install -r requirements.txt

Set Environment Variables

Set the necessary environment variables with your credentials:

export OPENAI_API_KEY="sk-*******"
export TIDB_HOST="gateway01.*******"
export TIDB_USERNAME="****.root"
export TIDB_PASSWORD="****"

Code Walkthrough

Below is the complete code to build our RAG-based chatbot.

#!/usr/bin/env python
import os

import click
from sqlalchemy import URL
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.tidbvector import TiDBVectorStore  # type: ignore
from llama_index.readers.web import SimpleWebPageReader

# Define TiDB connection URL
tidb_connection_url = URL(
    "mysql+pymysql",
    username=os.environ['TIDB_USERNAME'],
    password=os.environ['TIDB_PASSWORD'],
    host=os.environ['TIDB_HOST'],
    port=4000,          # TiDB's default port
    database="test",    # target database; adjust to your setup
    query={"ssl_verify_cert": True, "ssl_verify_identity": True},
)

# Initialize TiDB Vector Store
tidbvec = TiDBVectorStore(
    connection_string=tidb_connection_url,
    table_name="llama_index_rag",  # table name of your choice
    distance_strategy="cosine",
    vector_dimension=1536,  # The dimension is decided by the model
    drop_existing_table=False,
)

# Create VectorStoreIndex and StorageContext
tidb_vec_index = VectorStoreIndex.from_vector_store(tidbvec)
storage_context = StorageContext.from_defaults(vector_store=tidbvec)
query_engine = tidb_vec_index.as_query_engine(streaming=True)

# Function to prepare data: load a web page, convert it to text,
# and store its embeddings in the vector index
def do_prepare_data(url):
    documents = SimpleWebPageReader(html_to_text=True).load_data([url])
    tidb_vec_index.from_documents(documents, storage_context=storage_context, show_progress=True)

# Default URL for data loading
_default_url = ''

@click.command()
@click.option('--url', default=_default_url,
              help=f'URL you want to talk to, default={_default_url}')
def chat_with_url(url):
    do_prepare_data(url)
    while True:
        question = click.prompt("Enter your question")
        response = query_engine.query(question)
        response.print_response_stream()
        click.echo()

if __name__ == '__main__':
    chat_with_url()
Explanation of the Code

  1. Importing Libraries: We start by importing the necessary libraries. os is used for environment variables, click for command-line interaction, and various modules from llama_index and sqlalchemy for handling the vector store and database connection.
  2. Defining TiDB Connection URL: We create a connection URL using the URL class from sqlalchemy. This URL includes the TiDB credentials and connection details.
  3. Initializing TiDB Vector Store: We instantiate TiDBVectorStore with the connection string, table name, distance strategy (cosine similarity), vector dimension, and an option to drop the existing table.
  4. Creating VectorStoreIndex and StorageContext: We create an index from the vector store and a storage context for managing data storage.
  5. Data Preparation Function: The do_prepare_data function loads data from a given URL, converts it to text, and stores it in the vector index.
  6. Command-Line Interaction: Using click, we define a command-line interface to allow users to specify a URL for data loading and interact with the chatbot by entering questions.
  7. Main Function: The chat_with_url function prepares data from the specified URL and enters a loop where it prompts the user for questions and returns responses from the query engine.
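Under the hood, the `from_documents` call in step 5 splits each document into chunks, embeds each chunk, and writes the resulting vectors to TiDB. The splitting step can be illustrated with a toy fixed-size splitter; LlamaIndex's actual splitter is token- and sentence-aware, so this is only a sketch of the idea:

```python
def split_into_chunks(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character chunks with overlap,
    so content cut at a boundary still appears whole in one chunk."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

Overlapping chunks trade a little storage for better retrieval quality: a sentence that straddles a chunk boundary is still retrievable as a coherent unit.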

Running the Chatbot

To run the chatbot, save the code to a file and execute it in your terminal:


You can specify a different URL by using the --url option. The chatbot will load the data from the given URL and be ready to answer your questions based on the retrieved information.

$ python --help
Usage: [OPTIONS]

  --url TEXT  URL you want to talk to,
  --help      Show this message and exit.
$ python
Enter your question: tidb vs mysql
TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability. TiDB is designed to provide users with a one-stop database solution that covers OLTP, OLAP, and HTAP services. It offers easy horizontal scaling, financial-grade high availability, real-time HTAP capabilities, cloud-native features, and compatibility with the MySQL protocol and ecosystem.
Enter your question:


By integrating LlamaIndex with TiDB Vector Search, we can build a robust RAG-based chatbot that leverages the power of both retrieval and generation. This approach ensures that our chatbot provides accurate, relevant, and contextually appropriate responses. With TiDB’s advanced vector search capabilities, the system is scalable and efficient, making it suitable for a wide range of applications.

Last updated June 23, 2024
