A First Look at PingCAP (平凯数据库) TiDB in Agile Mode
1. Test Objectives
Use TiCDC to test data replication between TiDB running in standard mode and in agile mode, focusing on the feasibility of TiDB agile mode and on how TEM manages, and handles the load of, an agile-mode TiDB instance.
The standard-mode TiDB cluster is the TiCDC upstream; the agile-mode instance deployed through TEM is the TiCDC downstream.
Stress the database with tiup bench tpcc and verify that data stays in sync between the TiCDC upstream and downstream.
Note: all the test environments below run on virtual machines.
2. Test Environment
2.1 TiDB cluster (standard mode)
Hardware: three virtual machines, each with 2 CPUs and 4 GB of RAM.
Software: TiDB v7.1.1, deployed with 3 PD + 3 TiDB + 3 TiKV nodes.
2.2 TiDB agile mode
Hardware: the TEM host is a virtual machine with 2 CPUs and 2 GB of RAM; the cluster control machine and the cluster host share the same virtual machine, with 8 CPUs and 8 GB of RAM.
Software: TiDB deployed in agile mode through TEM, version v7.1.8-5.2-20250630, with 1 PD + 1 TiDB + 1 TiKV node.
3. Configure HAProxy
3.1 Install HAProxy with yum
yum install haproxy.x86_64
3.2 Configure the haproxy.cfg file
global                                      # Global settings.
   log 127.0.0.1 local2                     # Global syslog servers (up to two can be defined).
   chroot /var/lib/haproxy                  # Change the current directory and set superuser privileges for the startup process to improve security.
   pidfile /var/run/haproxy.pid             # Write the PID of the HAProxy process to this pidfile.
   maxconn 4000                             # Maximum number of concurrent connections accepted per HAProxy process.
   user haproxy                             # Same as the UID parameter.
   group haproxy                            # Same as the GID parameter; a dedicated user group is recommended.
   nbproc 1                                 # Number of processes created when running in the background. If multiple processes are used to forward requests, make sure the value is large enough so that HAProxy does not become a bottleneck.
   daemon                                   # Run HAProxy as a daemon in the background, equivalent to the "-D" command-line argument. It can be disabled with the "-db" command-line argument.
   stats socket /var/lib/haproxy/stats      # Where statistics are stored.

defaults                                    # Default settings.
   log global                               # Inherit the log settings of the global section.
   retries 2                                # Maximum number of connection attempts to an upstream server; beyond this, the backend server is considered unavailable.
   timeout connect 2s                       # Timeout for HAProxy to connect to a backend server. It can be short if the servers are on the same LAN.
   timeout client 30000s                    # Timeout for inactive client connections after data transmission is complete.
   timeout server 30000s                    # Timeout for inactive server-side connections.

listen admin_stats                          # A combination of frontend and backend; the name of this stats group can be customized.
   bind 0.0.0.0:8080                        # Listening port.
   mode http                                # Mode of the stats page, here `http`.
   option httplog                           # Enable HTTP request logging.
   maxconn 10                               # Maximum number of concurrent connections.
   stats refresh 30s                        # Refresh the stats page automatically every 30 seconds.
   stats uri /haproxy                       # URL of the stats page.
   stats realm HAProxy                      # Prompt message of the stats page.
   stats auth admin:pingcap123              # Username and password for the stats page; multiple users can be configured.
   stats hide-version                       # Hide the HAProxy version on the stats page.
   stats admin if TRUE                      # Manually enable or disable backend servers (supported since HAProxy 1.4.9).

listen tidb-cluster                         # Database load balancing.
   bind 0.0.0.0:8000                        # Floating IP and listening port.
   mode tcp                                 # HAProxy works at layer 4, the transport layer.
   balance leastconn                        # The server with the fewest connections receives the next connection. `leastconn` is recommended for long-session services such as LDAP, SQL and TSE rather than short-session protocols such as HTTP. The algorithm is dynamic: for slow-starting servers, the weight is adjusted at runtime.
   # Check port 4000 every 2000 ms. A server is considered available after 2 consecutive successful checks and unavailable after 3 consecutive failures.
   server tidb-1 192.168.2.71:4000 check inter 2000 rise 2 fall 3
   server tidb-3 192.168.2.73:4000 check inter 2000 rise 2 fall 3
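The configuration file can be syntax-checked before it is used; for example, HAProxy's check mode only validates the file and then exits:
[tidb@tidb01 ~]$ haproxy -f /etc/haproxy/haproxy.cfg -c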
3.3 Configure the PROXY protocol
Add the proxy-protocol parameter to the TiDB server configuration and reload for it to take effect:
proxy-protocol.networks: 192.168.2.71,192.168.2.73
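A minimal sketch of applying this change with TiUP, assuming the cluster name mhxy used elsewhere in this document:
[tidb@tidb01 ~]$ tiup cluster edit-config mhxy       # add proxy-protocol.networks under server_configs.tidb
[tidb@tidb01 ~]$ tiup cluster reload mhxy -R tidb    # reload only the tidb servers so the change takes effect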
3.4 Restart the HAProxy service
[tidb@tidb01 ~]$ systemctl restart haproxy
[tidb@tidb01 ~]$ systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2025-09-22 20:56:51 CST; 2h 3min ago
Main PID: 1062 (haproxy-systemd)
Tasks: 3
CGroup: /system.slice/haproxy.service
├─1062 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
├─1077 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
└─1085 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
[tidb@tidb01 ~]$
3.5 Test HAProxy connectivity
[tidb@tidb01 ~]$ mysql -uroot -p -P8000 -h192.168.2.71 -A
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 413
Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Enterprise Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA |
| PERFORMANCE_SCHEMA |
| employees |
| mysql |
| test |
+--------------------+
6 rows in set (0.01 sec)
MySQL [(none)]> \q
Bye
[tidb@tidb01 ~]$
4. Create the TiCDC Replication Service
4.1 Scale out the TiCDC component on the upstream TiDB cluster
The TiCDC scale-out topology file cdc_scale_out.yaml plans one CDC instance on each server:
cdc_servers:
  - host: 192.168.2.71
    ssh_port: 22
    port: 8300
    gc-ttl: 86400
    data_dir: "/u01/tidb-deploy/cdc-data"
  - host: 192.168.2.72
    ssh_port: 22
    port: 8300
    gc-ttl: 86400
    data_dir: "/u01/tidb-deploy/cdc-data"
  - host: 192.168.2.73
    ssh_port: 22
    port: 8300
    gc-ttl: 86400
    data_dir: "/u01/tidb-deploy/cdc-data"
Scale out the TiCDC component on the cluster with TiUP:
[tidb@tidb01 ~]$
[tidb@tidb01 ~]$ tiup cluster scale-out mhxy cdc_scale_out.yaml
After the TiCDC scale-out completes, check the cluster services; the CDC services are up.
[tidb@tidb01 ~]$ tiup cluster display mhxy
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.12.5/tiup-cluster display mhxy
Cluster type: tidb
Cluster name: mhxy
Cluster version: v7.1.1
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.2.72:2379/dashboard
Grafana URL: http://192.168.2.71:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.2.71:9093 alertmanager 192.168.2.71 9093/9094 linux/x86_64 Up /u01/tidb-data/alertmanager-9093 /u01/tidb-deploy/alertmanager-9093
192.168.2.71:8300 cdc 192.168.2.71 8300 linux/x86_64 Up /u01/tidb-deploy/cdc-data /u01/tidb-deploy/cdc-8300
192.168.2.72:8300 cdc 192.168.2.72 8300 linux/x86_64 Up /u01/tidb-deploy/cdc-data /u01/tidb-deploy/cdc-8300
192.168.2.73:8300 cdc 192.168.2.73 8300 linux/x86_64 Up /u01/tidb-deploy/cdc-data /u01/tidb-deploy/cdc-8300
192.168.2.71:3000 grafana 192.168.2.71 3000 linux/x86_64 Up - /u01/tidb-deploy/grafana-3000
192.168.2.71:2379 pd 192.168.2.71 2379/2380 linux/x86_64 Up /u01/tidb-data/pd-2379 /u01/tidb-deploy/pd-2379
192.168.2.72:2379 pd 192.168.2.72 2379/2380 linux/x86_64 Up|L|UI /u01/tidb-data/pd-2379 /u01/tidb-deploy/pd-2379
192.168.2.73:2379 pd 192.168.2.73 2379/2380 linux/x86_64 Up /u01/tidb-data/pd-2379 /u01/tidb-deploy/pd-2379
192.168.2.71:9090 prometheus 192.168.2.71 9090/12020 linux/x86_64 Up /u01/tidb-data/prometheus-9090 /u01/tidb-deploy/prometheus-9090
192.168.2.71:4000 tidb 192.168.2.71 4000/10080 linux/x86_64 Up - /u01/tidb-deploy/tidb-4000
192.168.2.72:4000 tidb 192.168.2.72 4000/10080 linux/x86_64 Up - /u01/tidb-deploy/tidb-4000
192.168.2.73:4000 tidb 192.168.2.73 4000/10080 linux/x86_64 Up - /u01/tidb-deploy/tidb-4000
192.168.2.71:20160 tikv 192.168.2.71 20160/20180 linux/x86_64 Up /u01/tidb-data/tikv-20160 /u01/tidb-deploy/tikv-20160
192.168.2.72:20160 tikv 192.168.2.72 20160/20180 linux/x86_64 Up /u01/tidb-data/tikv-20160 /u01/tidb-deploy/tikv-20160
192.168.2.73:20160 tikv 192.168.2.73 20160/20180 linux/x86_64 Up /u01/tidb-data/tikv-20160 /u01/tidb-deploy/tikv-20160
Total nodes: 15
4.2 Create the replication user
Create the replication user ticdc on both the upstream and the downstream databases of the CDC task:
MySQL [(none)]> create user ticdc@'%' identified by 'pingcap';
MySQL [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'ticdc'@'%' ;
MySQL [(none)]> create database pingcap;
Query OK, 0 rows affected (0.65 sec)
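The grants can be verified on both sides, for example:
MySQL [(none)]> SHOW GRANTS FOR 'ticdc'@'%';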
4.3 Create the CDC changefeed
Use the ticdc user to replicate the data in the pingcap database.
First, create a TOML file on the CDC upstream host to define the changefeed task:
[tidb@tidb01 ~]$ cat changefeed.toml
# Whether database and table names in this file are case-sensitive. This option affects both filter and sink configurations. Defaults to true.
case-sensitive = true
# Whether to output the old value. Supported since v4.0.5; defaults to true since v5.0.
enable-old-value = true
force-replicate = true
[mounter]
# Number of threads the mounter uses to decode KV data. Defaults to 16.
# worker-num = 16
# Production databases; exclude mhxy*
[filter]
rules = ["pingcap.*"]
Create the changefeed task:
[tidb@tidb01 ~]$ tiup cdc cli changefeed create --sink-uri="mysql://ticdc:pingcap@192.168.2.77:4000/" --config=changefeed.toml
tiup is checking updates for component cdc ...
Starting component `cdc`: /home/tidb/.tiup/components/cdc/v7.1.1/cdc cli changefeed create --sink-uri=mysql://ticdc:pingcap@192.168.2.77:4000/ --config=changefeed.toml
Create changefeed successfully!
ID: 2cd5a986-90fa-4bc9-9e9c-1dd268d5077a
Info: {"upstream_id":7542538698203629074,"namespace":"default","id":"2cd5a986-90fa-4bc9-9e9c-1dd268d5077a","sink_uri":"mysql://ticdc:xxxxx@192.168.2.77:4000/","create_time":"2025-09-27T20:51:23.76894628+08:00","start_ts":461105393466867718,"config":{"memory_quota":1073741824,"case_sensitive":true,"enable_old_value":true,"force_replicate":true,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["pingcap.*"],"event_filters":null},"mounter":{"worker_num":16},"sink":{"protocol":"","schema_registry":"","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"column_selectors":null,"transaction_atomicity":"","encoder_concurrency":16,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"file_index_digit":0,"enable_kafka_sink_v2":false,"only_output_updated_columns":null},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"storage":"","use_file_backend":false},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"}},"state":"normal","creator_version":"v7.1.1","resolved_ts":461105393466867718,"checkpoint_ts":461105393466867718,"checkpoint_time":"2025-09-27 20:51:23.623"}
[tidb@tidb01 ~]$
[tidb@tidb01 ~]$ tiup cdc cli changefeed list --server 192.168.2.71:8300
tiup is checking updates for component cdc ...
Starting component `cdc`: /home/tidb/.tiup/components/cdc/v7.1.1/cdc cli changefeed list --server 192.168.2.71:8300
[
{
"id": "2cd5a986-90fa-4bc9-9e9c-1dd268d5077a",
"namespace": "default",
"summary": {
"state": "normal",
"tso": 461105432657395713,
"checkpoint": "2025-09-27 20:53:53.123",
"error": null
}
}
]
[tidb@tidb01 ~]$
[tidb@tidb01 ~]$
[tidb@tidb01 ~]$ tiup cdc cli capture list --server 192.168.2.71:8300
tiup is checking updates for component cdc ...
Starting component `cdc`: /home/tidb/.tiup/components/cdc/v7.1.1/cdc cli capture list --server 192.168.2.71:8300
[
{
"id": "26b269d6-eedf-4408-b110-ac6a759b864d",
"is-owner": true,
"address": "192.168.2.72:8300",
"cluster-id": "default"
},
{
"id": "a15e02dd-c6e7-48cc-93d0-a78dc4aabe4a",
"is-owner": false,
"address": "192.168.2.73:8300",
"cluster-id": "default"
},
{
"id": "5fcc40a5-fd98-48ce-9bfb-49ec1ec97db6",
"is-owner": false,
"address": "192.168.2.71:8300",
"cluster-id": "default"
}
]
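The detailed status of a single changefeed can also be queried; for example, using the changefeed ID returned above:
[tidb@tidb01 ~]$ tiup cdc cli changefeed query --server 192.168.2.71:8300 --changefeed-id 2cd5a986-90fa-4bc9-9e9c-1dd268d5077a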
5. Benchmark with tiup bench
5.1 TPC-C data load (prepare)
Run a simple TPC-C test against the upstream database and let the data replicate to the agile-mode cluster managed by TEM. The TPC-C prepare phase took roughly 25 minutes on the upstream cluster, after which CDC needed additional time to fully replicate the data to the downstream.
[tidb@tidb01 ~]$ tiup bench tpcc prepare -H 192.168.2.71 -P 8000 -D pingcap --user=root --password=root --warehouses 4 --parts 4
tiup is checking updates for component bench ...
Starting component `bench`: /home/tidb/.tiup/components/bench/v1.12.0/tiup-bench tpcc prepare -H 192.168.2.71 -P 8000 -D pingcap --user=root --password=root --warehouses 4 --parts 4
creating table warehouse
creating table district
creating table customer
creating table history
creating table new_order
creating table orders
creating table order_line
creating table stock
creating table item
load to item
load to warehouse in warehouse 1
load to stock in warehouse 1
load to district in warehouse 1
load to warehouse in warehouse 2
...
begin to check warehouse 3 at condition 3.3.2.4
begin to check warehouse 3 at condition 3.3.2.7
begin to check warehouse 3 at condition 3.3.2.9
begin to check warehouse 3 at condition 3.3.2.11
begin to check warehouse 3 at condition 3.3.2.2
begin to check warehouse 3 at condition 3.3.2.3
begin to check warehouse 3 at condition 3.3.2.5
begin to check warehouse 3 at condition 3.3.2.6
begin to check warehouse 3 at condition 3.3.2.8
begin to check warehouse 3 at condition 3.3.2.10
begin to check warehouse 3 at condition 3.3.2.12
begin to check warehouse 4 at condition 3.3.2.5
begin to check warehouse 4 at condition 3.3.2.6
begin to check warehouse 4 at condition 3.3.2.8
begin to check warehouse 4 at condition 3.3.2.10
begin to check warehouse 4 at condition 3.3.2.12
begin to check warehouse 4 at condition 3.3.2.2
begin to check warehouse 4 at condition 3.3.2.3
begin to check warehouse 4 at condition 3.3.2.7
begin to check warehouse 4 at condition 3.3.2.9
begin to check warehouse 4 at condition 3.3.2.11
begin to check warehouse 4 at condition 3.3.2.1
begin to check warehouse 4 at condition 3.3.2.4
Finished
[tidb@tidb01 ~]$
On the CDC upstream, the pingcap database holds a relatively small amount of data:
MySQL [(none)]> SELECT
-> table_schema AS '数据库名',
-> ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS '数据库大小(MB)'
-> FROM
-> information_schema.tables
-> GROUP BY
-> table_schema
-> ORDER BY
-> SUM(data_length + index_length) DESC;
+--------------------+---------------------+
| 数据库名 | 数据库大小(MB) |
+--------------------+---------------------+
| pingcap | 383.96 |
| employees | 0.00 |
| METRICS_SCHEMA | 0.00 |
| mysql | 0.00 |
| INFORMATION_SCHEMA | 0.00 |
| PERFORMANCE_SCHEMA | 0.00 |
+--------------------+---------------------+
6 rows in set (2.90 sec)
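The same size query can be run against the downstream agile-mode instance to confirm the data has arrived; a sketch, assuming the downstream listens on 192.168.2.77:4000 as in the changefeed sink URI:
[tidb@tidb01 ~]$ mysql -uticdc -p -P4000 -h192.168.2.77 -e "SELECT table_schema, ROUND(SUM(data_length + index_length)/1024/1024, 2) AS size_mb FROM information_schema.tables WHERE table_schema = 'pingcap' GROUP BY table_schema;"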
Observe the CDC status of the upstream cluster in Grafana.
Observe in TEM how the agile-mode TiDB behaves; in the screenshots below, tidb-x is the agile-mode cluster.
Figure 1: overall TEM view
Running status of the agile-mode tidb-x cluster in TEM
5.2 Run the TPC-C test
Run with 6 concurrent threads for 10 minutes; a sketch of the run command follows. The overall DML volume is small, and the core metrics of the downstream agile-mode TiDB change very little.
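A sketch of the run command, reusing the connection parameters from the prepare phase (the exact flags used in this test were not recorded, so the thread count and duration below are assumptions based on the description above):
[tidb@tidb01 ~]$ tiup bench tpcc run -H 192.168.2.71 -P 8000 -D pingcap --user=root --password=root --warehouses 4 --threads 6 --time 10m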
6. Summary
- A TiDB instance deployed in agile mode through TEM can serve as a disaster-recovery solution.
- Looking ahead, TiDB agile mode should adapt to more business scenarios, such as containerized databases, and to more database capabilities, such as integration with AI features.