The following are detailed steps to deploy two single-node TiDB agile-mode databases with Docker Compose and configure TiCDC for bidirectional replication between them:
Edit the Dockerfile and build the docker image
Prepare the tidbx-server, cdc, and pd-ctl binaries used to build the docker image
tree -L 4 tidb-bidirectional/
tidb-bidirectional/
├── cdc
├── Dockerfile
├── pd-ctl
├── tidbx-server
Edit the Dockerfile
FROM ghcr.io/pingcap-qe/bases/tidb-base:v1.9.2
ADD tidbx-server /tidbx-server
ADD pd-ctl /pd-ctl
ADD cdc /cdc
RUN chmod +x /tidbx-server /pd-ctl /cdc
WORKDIR /
EXPOSE 4000 10080 2379 2380 20160 20180 8300
Build the tidbx docker image
# docker build -t tidbx:v7.1.8 .
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
tidbx v7.1.8 c8159e657cdf About a minute ago 2.31GB
ghcr.io/pingcap-qe/bases/tidb-base v1.9.2 099851c162fb 5 months ago 319MB
Push it to a Docker registry, or save it as a local docker image file
Prepare the configuration files
The final directory structure looks like this:
tree -L 4 tidb-bidirectional/
tidb-bidirectional/
├── cdc
├── compose.yaml
├── Dockerfile
├── etc
│ └── tidbx
│ └── conf
│ ├── cdc.toml
│ ├── pd.toml
│ ├── tidb.toml
│ └── tikv.toml
├── pd-ctl
├── TIDBX-202506_PoC.key
├── tidbx-server
└── var
└── lib
└── data
TIDBX-202506_PoC.key is the License Key file; the compose file mounts it into the container as /license.key.
pd.toml
[log]
level = "info"
[replication]
max-replicas = 1
enable-placement-rules = false
[dashboard]
internal-proxy = true
tikv.toml
[log]
[log.file]
max-backups = 7
[raftstore]
capacity = "20 GiB"
tidb.toml
[log]
[log.file]
max-backups = 7
cdc.toml
[log]
[log.file]
max-backups = 7
Edit the compose.yaml file and start the docker services
services:
  tidb1:
    image: tidbx:v7.1.8
    privileged: true
    container_name: tidb1
    environment:
      args: "--pd.name=pd-127.0.0.1-2379 \
        --pd.client-urls=http://0.0.0.0:2379 \
        --pd.advertise-client-urls=http://tidb1:2379 \
        --pd.peer-urls=http://0.0.0.0:2380 \
        --pd.advertise-peer-urls=http://tidb1:2380 \
        --pd.data-dir=/var/lib/data/pd/data \
        --pd.initial-cluster=tidb1-2379=http://tidb1:2380 \
        --pd.config=/etc/tidbx/conf/pd.toml \
        --pd.log-file=/var/lib/data/pd/log/pd.log \
        --tikv.addr=0.0.0.0:20160 \
        --tikv.advertise-addr=tidb1:20160 \
        --tikv.status-addr=0.0.0.0:20180 \
        --tikv.advertise-status-addr=tidb1:20180 \
        --tikv.pd=tidb1:2379 \
        --tikv.data-dir=/var/lib/data/tikv/data \
        --tikv.config=/etc/tidbx/conf/tikv.toml \
        --tikv.log-file=/var/lib/data/tikv/log/tikv.log \
        --tidb.P=4000 \
        --tidb.status=10080 \
        --tidb.host=0.0.0.0 \
        --tidb.advertise-address=tidb1 \
        --tidb.store=tikv \
        --tidb.initialize-insecure \
        --tidb.path=tidb1:2379 \
        --tidb.log-slow-query=/var/lib/data/tidb/log/tidb_slow_query.log \
        --tidb.config=/etc/tidbx/conf/tidb.toml \
        --tidb.log-file=/var/lib/data/tidb/log/tidb.log"
    ports:
      - "4008:4000"
      - "12379:2379"
    command: |
      /bin/sh -c '
      /tidbx-server $${args}
      '
    volumes:
      - type: bind
        source: ./TIDBX-202506_PoC.key
        target: /license.key
      - type: bind
        source: ./etc/tidbx/conf
        target: /etc/tidbx/conf
      - type: bind
        source: ./var/lib/data
        target: /var/lib/data
      - type: bind
        source: ./etc/tidbx/conf/pd.toml
        target: /etc/tidbx/conf/pd.toml
      - type: bind
        source: ./etc/tidbx/conf/tidb.toml
        target: /etc/tidbx/conf/tidb.toml
      - type: bind
        source: ./etc/tidbx/conf/tikv.toml
        target: /etc/tidbx/conf/tikv.toml
    networks:
      - tidb-network
  ticdc1:
    image: tidbx:v7.1.8
    privileged: true
    container_name: ticdc1
    environment:
      args: "--addr 0.0.0.0:8300 \
        --advertise-addr ticdc1:8300 \
        --pd http://tidb1:2379 \
        --data-dir=/var/lib/data/cdc/data \
        --config /etc/tidbx/conf/cdc.toml \
        --log-file /var/lib/data/cdc/log/cdc.log"
    depends_on:
      - tidb1
    ports:
      - "8305:8300" # API port
    command: |
      /bin/sh -c '
      /cdc server $${args}
      '
    volumes:
      - type: bind
        source: ./etc/tidbx/conf
        target: /etc/tidbx/conf
      - type: bind
        source: ./var/lib/data
        target: /var/lib/data
    networks:
      - tidb-network
networks:
  tidb-network:
    driver: bridge
Run docker compose up -d to start the docker services, which include the tidbx and cdc services
# docker compose up -d
# docker compose images
CONTAINER REPOSITORY TAG IMAGE ID SIZE
ticdc1 tidbx v7.1.8 ebfd0feb408b 2.4GB
tidb1 tidbx v7.1.8 ebfd0feb408b 2.4GB
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
827c0d3088a8 tidbx:v7.1.8 "/bin/sh -c '\n/cdc s…" 45 seconds ago Up 44 seconds 2379-2380/tcp, 4000/tcp, 10080/tcp, 20160/tcp, 20180/tcp, 0.0.0.0:8305->8300/tcp, [::]:8305->8300/tcp ticdc1
5270ed159a40 tidbx:v7.1.8 "/bin/sh -c '\n/tidbx…" 45 seconds ago Up 44 seconds 2379-2380/tcp, 8300/tcp, 10080/tcp, 20160/tcp, 20180/tcp, 0.0.0.0:4008->4000/tcp, [::]:4008->4000/tcp tidb1
Verify that the database service started successfully:
mysql -h xx.xx.xx.155 -P4008 -uroot
Verify that the cdc service started successfully:
curl -X GET http://xx.xx.x.155:8305/api/v2/status
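The status check can also be scripted. The field names below (version, id, is_owner) are assumptions based on the TiCDC OpenAPI v2 status response; verify them against your cdc version. The HTTP call itself is left to the caller so the summary logic stays testable:

```python
import json

def summarize_cdc_status(body: str) -> str:
    """Turn the JSON body of GET /api/v2/status into a one-line summary.

    Assumed response fields: version, id, is_owner (per TiCDC OpenAPI v2);
    adjust if your cdc version returns a different shape.
    """
    status = json.loads(body)
    owner = "owner" if status.get("is_owner") else "follower"
    return f"cdc {status.get('version')} capture {status.get('id')} ({owner})"

# Example with a hypothetical response body:
sample = '{"version": "v7.1.8", "id": "abc-123", "is_owner": true}'
print(summarize_cdc_status(sample))  # → cdc v7.1.8 capture abc-123 (owner)
```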
Set up the standby node's tidbx and cdc
On the machine above, save the docker image as tidbx.tar
docker save tidbx:v7.1.8 > tidbx.tar
Load tidbx.tar on the new node
docker load < tidbx.tar
# docker load < tidbx.tar
c689e1fc27e9: Loading layer [==================================================>] 256.6MB/256.6MB
0a3ca42f2a38: Loading layer [==================================================>] 1.123MB/1.123MB
91f79125d456: Loading layer [==================================================>] 58.17MB/58.17MB
a8e46b188612: Loading layer [==================================================>] 12.6MB/12.6MB
f0cf9fd91d47: Loading layer [==================================================>] 763.9MB/763.9MB
a4b20666878c: Loading layer [==================================================>] 43.5MB/43.5MB
2d3cd8e92418: Loading layer [==================================================>] 231.8MB/231.8MB
Loaded image: tidbx:v7.1.8
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
tidbx v7.1.8 ebfd0feb408b 2 days ago 2.4GB
Copy the complete tidb-bidirectional directory from above and start the docker services the same way
docker compose up -d
docker compose logs
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
84311a77abf8 tidbx:v7.1.8 "/bin/sh -c '\n/cdc s…" 28 seconds ago Up 26 seconds 2379-2380/tcp, 4000/tcp, 10080/tcp, 20160/tcp, 20180/tcp, 0.0.0.0:8305->8300/tcp, :::8305->8300/tcp ticdc1
e51419624c97 tidbx:v7.1.8 "/bin/sh -c '\n/tidbx…" 28 seconds ago Up 27 seconds 2379-2380/tcp, 8300/tcp, 10080/tcp, 20160/tcp, 20180/tcp, 0.0.0.0:4008->4000/tcp, :::4008->4000/tcp tidb1
Verify that the standby (disaster-recovery) node started successfully:
mysql -h xx.xx.x.154 -P4008 -uroot
Verify that the cdc service started successfully:
curl -X GET http://xx.xx.x.154:8305/api/v2/status
Changefeed setup and configuration
For more changefeed configuration parameters, see TiCDC OpenAPI v2
(1) On the primary node, create a changefeed (pointing to the standby node)
curl -X POST -H "Content-Type: application/json" \
http://xx.xx.x.155:8305/api/v2/changefeeds \
-d '{
"changefeed_id": "node1-to-node2",
"sink_uri": "mysql://root@xx.xx.x.154:4008/",
"replica_config": {
"bdr-mode": true
}
}'
Verify that the changefeed was created successfully
curl -X GET "http://xx.xx.x.155:8305/api/v2/changefeeds?state=normal"
(2) On the standby node, create a changefeed (pointing to the primary node)
curl -X POST -H "Content-Type: application/json" \
http://xx.xx.x.154:8305/api/v2/changefeeds \
-d '{
"changefeed_id": "node2-to-node1",
"sink_uri": "mysql://root@xx.xx.x.155:4008/",
"replica_config": {
"bdr-mode": true
}
}'
Verify that the changefeed was created successfully
curl -X GET "http://xx.xx.x.154:8305/api/v2/changefeeds?state=normal"
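The two symmetric changefeed-creation calls can also be driven from one small helper. This sketch builds the same request body as the curl commands; the IPs are the placeholders from the walkthrough, and the actual POST is left commented out since it needs reachable hosts:

```python
import json
import urllib.request

def changefeed_payload(changefeed_id: str, sink_host: str,
                       sink_port: int = 4008) -> dict:
    """Build the body for POST /api/v2/changefeeds with BDR mode enabled."""
    return {
        "changefeed_id": changefeed_id,
        "sink_uri": f"mysql://root@{sink_host}:{sink_port}/",
        "replica_config": {"bdr-mode": True},
    }

def create_changefeed(cdc_api: str, payload: dict) -> None:
    """POST the payload to a TiCDC v2 API endpoint (network call)."""
    req = urllib.request.Request(
        f"{cdc_api}/api/v2/changefeeds",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example (requires reachable hosts; placeholder IPs from the text):
# create_changefeed("http://xx.xx.x.155:8305",
#                   changefeed_payload("node1-to-node2", "xx.xx.x.154"))
# create_changefeed("http://xx.xx.x.154:8305",
#                   changefeed_payload("node2-to-node1", "xx.xx.x.155"))
```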
Verify bidirectional replication
# Create test data on the primary node
mysql -h xx.xx.x.155 -P4008 -uroot -e "USE test; CREATE TABLE t(id INT PRIMARY KEY); INSERT INTO t VALUES(1);"
# Query the data on the standby node
mysql -h xx.xx.x.154 -P4008 -uroot -e "USE test; SELECT * FROM t;"
# Insert data on the standby node
mysql -h xx.xx.x.154 -P4008 -uroot -e "USE test; INSERT INTO t VALUES(2);"
# Query the data on the primary node
mysql -h xx.xx.x.155 -P4008 -uroot -e "USE test; SELECT * FROM t;"
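Because the replication is asynchronous, a SELECT on the other node immediately after an INSERT can race with the changefeed. A small poll loop makes the verification robust; the query function is injected so the sketch stays database-agnostic (a real implementation might shell out to the mysql client or use a driver):

```python
import time
from typing import Callable, Set

def wait_for_row(query_ids: Callable[[], Set[int]], expected_id: int,
                 timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll query_ids() until expected_id appears or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if expected_id in query_ids():
            return True  # row has replicated
        time.sleep(interval)
    return False  # not replicated within the timeout

# Demo with a stand-in for "SELECT id FROM test.t" on the standby node:
replicated_rows = {1}
print(wait_for_row(lambda: replicated_rows, 1, timeout=2))  # → True
```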
This setup provides bidirectional asynchronous replication between two TiDB clusters, suitable for multi-active data centers, data migration, and similar scenarios. For production use, add monitoring and alerting.
FAQ
- How do I check the status of a TiCDC replication task (changefeed)?
curl -X GET http://xx.xx.x.155:8305/api/v2/changefeeds/node1-to-node2
curl -X GET http://xx.xx.x.154:8305/api/v2/changefeeds/node2-to-node1
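The JSON these endpoints return can be reduced to a simple health flag for scripting. The "state" field name and the state values follow the TiCDC changefeed model (normal, stopped, error, ...) and should be treated as assumptions for your cdc version:

```python
import json

# Changefeed states considered healthy; TiCDC also reports states such as
# "stopped" and "error" (assumed names, check your cdc version).
HEALTHY_STATES = {"normal"}

def changefeed_healthy(body: str) -> bool:
    """Return True if the changefeed detail JSON reports a healthy state."""
    return json.loads(body).get("state") in HEALTHY_STATES

print(changefeed_healthy('{"state": "normal"}'))  # → True
print(changefeed_healthy('{"state": "error"}'))   # → False
```

Feeding the body of the curl calls above into this check gives a ready-made probe for a cron job or alerting hook.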