TiDB 6.0

We are proud to present TiDB 6.0, the latest version of our open-source, distributed Hybrid Transactional and Analytical Processing (HTAP) database. This release significantly enhances TiDB’s manageability as an enterprise product and incorporates many of the essential features required for a cloud-native database. TiDB 6.0 provides the following major enhancements:

  • Introduces Placement Rules in SQL
  • Adds TiDB Enterprise Manager, a management component for enterprise clusters
  • Provides a preview of PingCAP Clinic, the intelligent diagnostic service
  • Significantly enhances the maintainability of the ecosystem tools
  • Provides more solutions to the hotspot issue

Taken together, these new features enable users, both cloud and on-premises, to have a smoother experience with TiDB, moving TiDB forward on the path to a mature enterprise-level cloud database.  

Enhanced manageability

A good database is easy to manage. As we developed TiDB 6.0, we reviewed feedback from our users and the market and summarized TiDB's manageability issues: complicated and unintuitive daily management, lack of control over where data is stored, difficult-to-use ecosystem tools, and a lack of solutions for hotspots. TiDB 6.0 addresses these issues by enhancing the kernel, improving ecosystem tools, and introducing management components.

Autonomous data scheduling framework

TiDB 6.0 introduces Placement Rules in SQL, which opens up the data scheduling framework to users through SQL. In the past, TiDB decided how data blocks were stored on nodes, regardless of the physical boundaries of data centers and the hardware differences between them. This prevented TiDB from providing flexible solutions when applications required multiple data centers, cold and hot data separation, or a large number of writes with buffer isolation.

Look at the following two scenarios:

  • Scenario 1: An application spans multiple cities with data centers in New York, Chicago, and Los Angeles. You want to deploy TiDB across the three data centers to cover user groups in the northeastern, midwestern, and western regions. Earlier versions of TiDB allowed such a deployment; however, data from different user groups was spread evenly across the data centers based only on hotspots and data volume. As a result, intensive cross-region data access could suffer from high latency.
  • Scenario 2: To isolate the performance jitter caused by importing data directly to a target working node, you want to deploy a dedicated set of nodes for importing data and then move the imported data to the target working node. Alternatively, you might want to use a set of lower-specification nodes to store cold data for infrequent access.

Before TiDB 6.0, no special solution was available to support such use cases.

In TiDB 6.0, the open data scheduling framework provides an interface to place data at the partition, table, or database level on any labeled node. With this interface, you can specify the target nodes for a partition or a table as needed. You can add labels to a set of nodes and define placement constraints for those labels. For example, you can define a placement policy for all TiDB storage nodes located in the New York data center:

CREATE PLACEMENT POLICY newyork CONSTRAINTS="[+region=nyc]";

You can then apply the placement policy to a table named nyc_account:

CREATE TABLE nyc_account (ID INT) PLACEMENT POLICY=newyork;

This way, all the data in the nyc_account table will be stored in the New York data center, and data access requests to this table will be routed there.
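If you want to confirm that a rule has taken effect, TiDB also exposes the current placement and its scheduling state through SQL. A quick check on the nyc_account table above might look like this (verify the exact statement against your TiDB version):

```sql
-- Show the placement policy attached to the table and whether
-- the scheduler has finished moving its replicas into place.
SHOW PLACEMENT FOR TABLE nyc_account;
```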

Similarly, to save costs, you can label mechanical disk nodes to store less frequently accessed data and put old data partitions in low-cost nodes:
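As a hedged sketch of that setup, assume the low-cost nodes are labeled with disk=hdd and the table is partitioned by year; the policy, table, and partition names below are illustrative:

```sql
-- Policy targeting nodes labeled as having mechanical disks.
CREATE PLACEMENT POLICY storeonhdd CONSTRAINTS="[+disk=hdd]";

-- Move an old, rarely accessed partition onto the low-cost nodes.
ALTER TABLE orders PARTITION p2019 PLACEMENT POLICY=storeonhdd;
```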


The Placement Rules in SQL feature is also useful for multi-tenant isolation. For example, users can assign data from different tenants to different nodes in the same cluster based on placement rules, and the loads of different tenants are automatically handled by the corresponding nodes. In this way, TiDB can isolate tenants while allowing data access among tenants under reasonable permission configurations. To learn more about this feature, see Placement Rules in SQL in the TiDB documentation.
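One way to sketch such a multi-tenant setup, assuming the nodes serving each tenant are labeled accordingly (the labels, policy names, and table names here are hypothetical):

```sql
-- One policy per tenant, each pinned to that tenant's node group.
CREATE PLACEMENT POLICY tenant_a CONSTRAINTS="[+tenant=a]";
CREATE PLACEMENT POLICY tenant_b CONSTRAINTS="[+tenant=b]";

-- Each tenant's tables land only on its own nodes.
CREATE TABLE tenant_a_orders (id INT) PLACEMENT POLICY=tenant_a;
CREATE TABLE tenant_b_orders (id INT) PLACEMENT POLICY=tenant_b;
```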

Catching up with hotspot scenarios

In a distributed database, hotspot data access and lock conflicts can compromise application stability and user experience, and handling hotspots has long been challenging. To ease this pain, TiDB 6.0 rolls out a number of solutions.

Cached small tables

In some cases, user applications might operate simultaneously on a large table (for example, an order table) and many small tables (for example, exchange rate tables). While TiDB can easily distribute the load from the large tables, the small tables, whose data is also accessed with each transaction, often cause performance bottlenecks. To address the issue, TiDB 6.0 introduces a small table cache to explicitly cache small hotspot tables in memory. This feature greatly improves the access throughput and reduces read latency, especially for small tables that are frequently accessed but rarely updated. To learn more about this feature, see Cached Tables in TiDB documentation.
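Caching is opted into per table. For a small, rarely updated table such as an exchange rate table (the table name below is illustrative), it might look like this:

```sql
-- Cache the whole table in TiDB's memory for fast, low-latency reads.
ALTER TABLE exchange_rates CACHE;

-- Turn caching off again if the table starts receiving frequent writes.
ALTER TABLE exchange_rates NOCACHE;
```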

In-memory pessimistic locking

By caching pessimistic locks in memory rather than persisting them, TiDB 6.0 greatly reduces the resource overhead of pessimistic transactions. When this feature is enabled, it reduces CPU and I/O overhead by nearly 20% and improves performance by 5% to 10%. To learn more about this feature, see In-memory Pessimistic Locking in TiDB documentation.
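The feature is controlled by a TiKV configuration item and, per the TiDB documentation, can be toggled dynamically through SQL; treat the exact item name as an assumption to verify against your version:

```sql
-- Enable in-memory pessimistic locks on the TiKV nodes.
SET CONFIG tikv pessimistic-txn.in-memory='true';
```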

Enhanced manageability in TiDB ecosystem tools

When you deal with mass task operations using TiDB ecosystem tools, command-line operations are inefficient, error-prone, and hard to manage. To manage the entire data migration environment, TiDB 6.0’s Data Migration (DM) tool rolls out a web-based GUI management platform with the following capabilities:

  • An updated dashboard that displays the main monitoring information and status information of migration tasks in DM. Users can quickly learn the status of migration tasks and view key latency and performance metrics.
  • Migration management that helps users monitor, create, delete, configure, and duplicate tasks.
  • Source management that helps users manage the upstream configuration in a data migration environment, including creating and deleting the upstream configuration, monitoring the task status corresponding to the upstream configuration, and modifying the upstream configuration.
  • Replication management that enables users to view the detailed configuration and task status based on a specified filter, including the configuration information of the upstream and downstream, and the database and table names of the upstream and downstream.
  • Cluster management that helps users view the configuration information of the current DM cluster and the status information of each worker.

DM web GUI

To learn more about this feature, see DM Web GUI Guide.


A new management platform and intelligent diagnostic toolkit

TiEM management platform

Prior to TiDB 6.0, users had to rely on the command line for daily operation and maintenance. Daily operations such as managing and monitoring multiple clusters with different configurations across applications could be a huge challenge. To simplify operations and reduce the chance of human error, TiDB 6.0 introduces TiDB Enterprise Manager (TiEM), a graphical management platform that integrates various components, including resource management, multi-cluster management, parameter group management, data import and export, and system monitoring.

TiEM interface

Through TiEM, users can perform daily operations on one interface. TiEM also provides monitoring and log management features that make it easier to inspect clusters. Users no longer need to switch between multiple toolkits and monitor pages. To learn more about TiEM, see Customize Configurations of Monitoring Servers in TiDB documentation.

PingCAP Clinic Intelligent Toolkit

Distributed systems can be complex, and diagnosing and resolving TiDB problems can be hard. The challenge is even bigger for TiDB clusters in cloud environments, where service providers handle users with different circumstances.  

With this in mind, TiDB 6.0 takes a big step toward becoming a self-service database by introducing a preview version of PingCAP Clinic, an intelligent database diagnostic service. With PingCAP Clinic, you can keep your TiDB cluster running stably throughout its life cycle: predict potential problems, reduce their likelihood, and quickly troubleshoot and fix them when they occur, either remotely or with a quick local check of the cluster status. To learn more about this feature, see PingCAP Clinic Overview in the TiDB documentation.

Observability for non-experts

TiDB constantly tries to strengthen its observability so that users can better understand how their applications operate on TiDB and more accurately troubleshoot and tune the system. In previous releases, we introduced observability features such as Key Visualizer, SQL statistics, and Continuous Profiling. However, these are expert-oriented features; users must understand the system in some technical depth.

TiDB 6.0 changes that with Top SQL, a beginner-friendly observability feature that allows DBAs and application developers to observe and easily diagnose database performance, even if they’re not TiDB experts. As long as you’re familiar with basic database concepts such as indexing, lock conflicts, and execution plans, you can use Top SQL to quickly analyze the database load and improve application performance.

You can use Top SQL in the TiDB Dashboard without additional configuration. For more on Top SQL, see Top SQL in TiDB documentation.

Top SQL in TiDB Dashboard

More robust HTAP capability 

In TiDB 5.0, we delivered the initial version of the TiFlash analysis engine with a massively parallel processing (MPP) execution mode to serve a wider range of application scenarios. The latest version of TiFlash supports:

  • More operators and functions: The TiDB 6.0 analysis engine adds over 110 built-in functions as well as some table-related operators. The analysis engine's performance is substantially improved, which in turn speeds up computation.
  • An optimized thread model: In earlier versions of TiDB, there was little restraint on thread resource usage in MPP mode, which could waste a large amount of resources when the system handled high-concurrency short queries. Also, when performing complex calculations, the MPP engine occupied many threads, leading to performance and stability issues. To address this, TiDB 6.0 introduces a flexible thread pool and restructures how operators hold threads. This optimizes resource usage in MPP mode, multiplying performance for short queries with the same computing resources and improving reliability under high-pressure queries.
  • A more efficient column engine: By adjusting the storage engine’s file structure and I/O model, TiDB 6.0 not only optimizes the plan for accessing replicas and file blocks on different nodes, but also improves write amplification and overall code efficiency. According to test results from users, the concurrency capability has been improved by over 50% to 100% in high read-write hybrid workloads with CPU and memory resource usage dramatically reduced.

Compared with TiDB 5.0, TPC-C performance improves by 76.32%.

Enhanced disaster recovery capability

TiDB 6.0 also enhances the TiDB change data capture framework (TiCDC) with better disaster recovery (DR) capabilities. As a core DR component, TiCDC makes great strides in recovering massive cluster data in disaster scenarios by optimizing incremental data processing and tuning the speed of pulling transaction logs.

This release also optimizes several TiCDC processes, including extracting, sorting, loading, and delivering incremental data. For large-scale clusters, the optimization greatly improves the stability of data replication, reduces resource consumption, and shortens data latency. According to test data results obtained from a user, TiCDC can deliver the following performance: 

  • Latency: < 10 seconds for 99.9% data
  • RTO: < 5 minutes
  • RPO: < 10 minutes

Note that this performance is based on 10,000 upstream tables with fewer than 20,000 rows changed per second and a changed data volume of less than 20 MB per second.

On the whole, the latency can be controlled within one minute if the nodes in the upstream TiDB cluster are upgraded or stopped as planned.

In addition, to reduce the impact of data replication on the performance of upstream clusters and deliver application-unaware data replication, TiCDC supports rate limiting when scanning transaction logs in primary clusters. In most cases, TiCDC has less than a 5% impact on the QPS and the mean response time of SQL statements executed in upstream clusters.

Looking ahead

TiDB 6.0 brings TiDB closer to an enterprise-level HTAP database. But this release is also a new starting point: it marks TiDB's move toward being a cloud database as well. Features such as enhanced manageability, Placement Rules in SQL, and PingCAP Clinic auto-diagnostics apply to on-premises deployments, but they will have even greater potential in the cloud.

These are just a sampling of the highlights included in TiDB 6.0. For a full list of features, improvements, bug fixes, and stability enhancements, see the Release Notes.

If you’re currently not running TiDB, you can download TiDB 6.0 and give it a try.

If you’re running an earlier version of TiDB and want to try 6.0, learn more in Upgrade TiDB Using TiUP.

And all of you are welcome to join our community on Slack and TiDB Internals to share your thoughts with us.
