Xinchuang Database Selection | Trial Experience of 平凯数据库 Agile Mode Benchmarked Against MySQL

awker · published on 2025-10-11

I. Preface

1.1 Industry Background

After 15 years in IT integration and architecture design, I have felt the growing demand in recent years for domestic Xinchuang (IT application innovation) replacements, and have shifted my focus to the domestic database field. I plan to dig deep into this area and take on a new career challenge.

I have long served the financial industry and am familiar with the architecture of its various business systems. I am well aware of the Xinchuang transformation now facing city commercial banks: under regulatory requirements, customers will have to complete the replacement of their databases with Xinchuang alternatives, and that deadline is approaching.

1.2 Current Database Challenges

How to migrate and adapt existing Oracle and DB2 databases to domestic Xinchuang databases will be the ultimate challenge for customers. Before that, however, the first task is to select the right Xinchuang database.

The main difficulties we currently face are:
1) There are many Xinchuang databases. How do we choose? Which are mature, which are stable, and whose support can keep up?
2) Migrating massive volumes of data.
3) Adapting and reworking applications after migration.
4) Whether parallel dual-track operation is supported, and whether rollback is safe and feasible.
5) Performance, security, and stability of Xinchuang databases.

1.3 Why I Joined This Event

I joined this event to deepen my understanding of TiDB and to inform the database-selection solutions I provide to customers.

1.4 Summary of My Agile Mode Experience

With the TEM tool, Agile Mode greatly reduces the complexity of deploying and operating the database. The official documentation is clear and easy to follow, which makes deployment straightforward.

1.5 Can Agile Mode Meet These Challenges?

Deployment is simple and management is easy to use; stability still needs to be validated over time and under complex business workloads.

II. Quick TEM Deployment and Installation

2.1 Environment

Hostname  CPU   MEM   OS Version  TEM Version  Notes
tidb01    16 C  32 G  RHEL 7.6    3.1.0        192.168.2.126

2.2 Deploying TEM

1. Unpack the installation package

# ls -1
amd64.zip
tem-amd64.tar

# tar xvf tem-amd64.tar

# cd tem-package-v3.1.0-linux-amd64/

# mkdir -pv /tem/{run,data}

2. Edit the configuration file

Edit the config.yaml file:

global:
  user: "tidb"
  group: "tidb"
  ssh_port: 22
  deploy_dir: "/tem/run"
  data_dir: "/tem/data"
  arch: "amd64"
  log_level: "info"
  enable_tls: false                # whether to enable TLS; if enabled without a certificate and key configured, a self-signed certificate and key are generated

server_configs: # global configuration for the TEM nodes
  tem_servers:
  # socket address(es) of the metadb; if multiple metadb addresses are configured, separate them with commas and make sure there are no spaces
    db_addresses: "127.0.0.1:4000"
    db_u: "root"                   # if the metadb is created with TEM's help, use the root user
    db_pwd: ""                     # if the metadb is created with TEM's help, use an empty password
    db_name: "test"                # if the metadb is created with TEM's help, use the test database
    log_filename: "/tem/run/tem-server-8080/log/tem.log"
    log_tem_level: "info"
    log_max_size: 300
    log_max_days: 0
    log_max_backups: 0
    external_tls_cert: ""         # TLS certificate TEM presents to external clients
    external_tls_key: ""          # TLS key TEM presents to external clients
    internal_tls_ca_cert: ""      # CA certificate for mutual TLS between TEM internal nodes
    internal_tls_cert: ""         # TLS certificate for mutual TLS between TEM internal nodes
    internal_tls_key: ""          # TLS key for mutual TLS between TEM internal nodes

tem_servers:
   - host: "0.0.0.0"                # fill in the actual address of the TEM node
     port: 8080
     mirror_repo: true            # whether this node hosts the mirror repository; with multiple TEM nodes, exactly one node must enable it
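
The install script also expects a metadb_topology.yaml next to it when it should create the metadata TiDB cluster (see "metadb_topology.yaml exists, will install metadb" in the log below). A minimal single-node sketch in standard TiUP cluster topology format, assuming everything runs on 127.0.0.1; the hosts and directories are illustrative and should be adapted to your environment:

# Write a minimal metadb topology; adjust hosts and directories as needed.
cat > metadb_topology.yaml <<'EOF'
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tem/metadb/deploy"
  data_dir: "/tem/metadb/data"
pd_servers:
  - host: 127.0.0.1
tidb_servers:
  - host: 127.0.0.1
tikv_servers:
  - host: 127.0.0.1
EOF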

3. Install and deploy

Run the install.sh script directly:

# sh install.sh 
#### Please make sure that you have edited user ####
#### in config.yaml as you want, and don't #
#### The user is 'tidb' by default, which will be ###
#### used as the owner to deploy TEM service.     ####
#### If you have installed TEM before, make sure  ####
#### that the user is consistent with the old one. #
#### After install.sh is run, a config.yaml file ####
#### will be generated under /home/<user>/, and  ####
#### please don't edit this file.                ####
Do you want to continue? [y/N] y
start installing
Check linux amd64
##### install version v3.1.0 #####
##### metadb_topology.yaml exists, will install metadb #####
##### add user tidb started #####
useradd: user 'tidb' already exists
Last login: Sat Oct 11 15:08:32 CST 2025
Generating public/private rsa key pair.
/home/tidb/.ssh/id_rsa already exists.
Overwrite (y/n)? ##### ssh-keygen ~/.ssh/id_rsa #####
##### cat ~/.ssh/authorized_keys  #####
##### user chmod .ssh finished #####
##### add user tidb finished #####
##### add PrivateKey /home/tidb/.ssh/id_rsa #####
##### add PublicKey /home/tidb/.ssh/id_rsa.pub #####
##### check depends tar started #####
Last login: Sat Oct 11 15:22:29 CST 2025
Dependency check check_ssh-copy-id proceeding.
Dependency check check_ssh-copy-id end.
Dependency check check_scp proceeding.
Dependency check check_scp end.
Dependency check check_ssh proceeding.
Dependency check check_ssh end.
##### check depends tar finished #####
##### check env started: before possible metadb installation #####
assistant check  {"level":"debug","msg":"ssh"}
strategyName Check 
NetWorkStatus Start: 1760167349
NetWorkStatus End  : 1760167352
Net Status , OK
SSHCheck 0.0.0.0 authorized_keys
id_rsa
id_rsa.pub
 
success
TEM assistant check {"level":"debug","msg":"ssh"} strategyName Check NetWorkStatus Start: 1760167349 NetWorkStatus End : 1760167352 Net Status , OK SSHCheck 0.0.0.0 authorized_keys id_rsa id_rsa.pub success
##### check env finished: before possible metadb installation #####
##### deploy_tiup  started #####
##### prepare TiDB AND TEM TIUP_HOME started #####
##### mkdir /tem/run/.tiup finished #####
##### mkdir /tem/run/.tem finished #####
##### mkdir /tem/run/.tiup/bin finished #####
##### mkdir /tem/run/.tem/bin finished #####
##### mkdir /tem/data/tidb-repo finished #####
##### mkdir /tem/data/tem-repo finished #####
##### mkdir /tem/run/monitor finished #####
##### mkdir /home/tidb/.temmeta finished #####
##### mkdir /home/tidb/.temmeta/bin finished #####
##### mkdir /home/tidb/tidb-repo finished #####
##### prepare TiDB AND TEM TIUP_HOME finished #####
##### deploy_binary started #####
##### mkdir /tem/run/tem-package finished #####
##### mkdir /tem/run/tem-package/tem-toolkit-v3.1.0-linux-amd64 finished #####
##### mkdir /tem/run/tem-package/tidb-toolkit-v3.1.0-linux-amd64 finished #####
##### deploy_binary finished #####
##### install tiup binary to /tem/run/.tiup/bin /tem/run/.tem/bin started #####
##### install tiup binary to /tem/run/.tiup/bin /tem/run/.tem/bin finished #####
##### install TiUP to /usr/local/bin started #####
##### install TiUP to /usr/local/bin finished #####
##### init tem mirror /tem/data/tem-repo started #####
##### build_mirror started #####
##### build_mirror repo_dir /tem/data/tem-repo #####
##### build_mirror home_dir /tem/run/.tem #####
##### build_mirror tiup_bin /usr/local/bin/tiup #####
##### build_mirror toolkit_dir /tem/run/tem-package/tem-toolkit-v3.1.0-linux-amd64 #####
##### build_mirror deploy_user tidb #####
Last login: Sat Oct 11 15:22:29 CST 2025
./
./local_install.sh
./tem-v3.1.0-linux-amd64.tar.gz
./node-exporter-v1.2.2-linux-amd64.tar.gz
./1.prometheus.json
./1.alertmanager.json
./grafana-v7.5.15-linux-amd64.tar.gz
./1.grafana.json
./keys/
./keys/b7a47c72bf7ff51f-root.json
./keys/8d862a21510f57fe-timestamp.json
./keys/92686f28f94bcc9c-snapshot.json
./keys/cd6238bf63753458-root.json
./keys/d9da78461bae5fb8-root.json
./keys/87cc8597ba186ab8-pingcap.json
./keys/c181aeb3996f7bfe-index.json
./root.json
./1.tiup.json
./1.tem-server.json
./1.index.json
./1.root.json
./alertmanager-v0.23.0-linux-amd64.tar.gz
./prometheus-v2.29.2-linux-amd64.tar.gz
./tiup-v1.14.0-linux-amd64.tar.gz
./tem-server-v3.1.0-linux-amd64.tar.gz
./tiup-linux-amd64.tar.gz
./snapshot.json
./1.tem.json
./timestamp.json
./1.node-exporter.json
Successfully set mirror to /tem/data/tem-repo
##### init tem mirror /tem/data/tem-repo finished #####
##### init tidb mirror /tem/data/tidb-repo started #####
##### build_mirror started #####
##### build_mirror repo_dir /tem/data/tidb-repo #####
##### build_mirror home_dir /tem/run/.tiup #####
##### build_mirror tiup_bin /usr/local/bin/tiup #####
##### build_mirror toolkit_dir /tem/run/tem-package/tidb-toolkit-v3.1.0-linux-amd64 #####
##### build_mirror deploy_user tidb #####
Last login: Sat Oct 11 15:22:34 CST 2025
./
./local_install.sh
./prometheuse-v6.5.0-linux-amd64.tar.gz
./1.br-ee.json
./alertmanager-v0.17.0-linux-amd64.tar.gz
./1.alertmanager.json
./br-v7.1.5-linux-amd64.tar.gz
./1.cluster.json
./br-v8.1.0-linux-amd64.tar.gz
./1.tikv.json
./br-v6.1.7-linux-amd64.tar.gz
./keys/
./keys/bd73fb49a9fe4c1f-root.json
./keys/ccb014427c930f35-root.json
./keys/0ef7038b19901f8d-root.json
./keys/e80212bba0a0c5d7-timestamp.json
./keys/cdc237d3f9580e59-index.json
./keys/704b6a90d209834a-pingcap.json
./keys/37ae6a6f7c3ae619-root.json
./keys/42859495dc518ea9-snapshot.json
./keys/6f1519e475e9ae65-root.json
./keys/bfc5831da481f289-index.json
./keys/4fa53899faebf9d9-root.json
./keys/928bfc0ffaa29a53-timestamp.json
./keys/7b99f462f7931ade-snapshot.json
./keys/24f464d5e96770ca-pingcap.json
./1.prometheuse.json
./br-v6.5.10-linux-amd64.tar.gz
./root.json
./1.tidb.json
./1.tiup.json
./1.pd.json
./1.index.json
./1.root.json
./tikv-v6.5.0-linux-amd64.tar.gz
./pd-v6.5.0-linux-amd64.tar.gz
./cluster-v1.14.0-linux-amd64.tar.gz
./blackbox_exporter-v0.21.1-linux-amd64.tar.gz
./5.br.json
./br-v7.5.2-linux-amd64.tar.gz
./tiup-v1.14.0-linux-amd64.tar.gz
./1.node_exporter.json
./1.insight.json
./tiup-linux-amd64.tar.gz
./insight-v0.4.2-linux-amd64.tar.gz
./br-ee-v7.1.1-3-linux-amd64.tar.gz
./snapshot.json
./node_exporter-v1.3.1-linux-amd64.tar.gz
./tidb-v6.5.0-linux-amd64.tar.gz
./1.blackbox_exporter.json
./timestamp.json
Successfully set mirror to /tem/data/tidb-repo
##### init tidb mirror /tem/data/tidb-repo finished #####
##### init temmeta mirror /home/tidb/tidb-repo started #####
##### build_mirror started #####
##### build_mirror repo_dir /home/tidb/tidb-repo #####
##### build_mirror home_dir /home/tidb/.temmeta #####
##### build_mirror tiup_bin /usr/local/bin/tiup #####
##### build_mirror toolkit_dir /tem/run/tem-package/tidb-toolkit-v3.1.0-linux-amd64 #####
##### build_mirror deploy_user tidb #####
Last login: Sat Oct 11 15:22:37 CST 2025
./
./local_install.sh
./prometheuse-v6.5.0-linux-amd64.tar.gz
./1.br-ee.json
./alertmanager-v0.17.0-linux-amd64.tar.gz
./1.alertmanager.json
./br-v7.1.5-linux-amd64.tar.gz
./1.cluster.json
./br-v8.1.0-linux-amd64.tar.gz
./1.tikv.json
./br-v6.1.7-linux-amd64.tar.gz
./keys/
./keys/bd73fb49a9fe4c1f-root.json
./keys/ccb014427c930f35-root.json
./keys/0ef7038b19901f8d-root.json
./keys/e80212bba0a0c5d7-timestamp.json
./keys/cdc237d3f9580e59-index.json
./keys/704b6a90d209834a-pingcap.json
./keys/37ae6a6f7c3ae619-root.json
./keys/42859495dc518ea9-snapshot.json
./keys/6f1519e475e9ae65-root.json
./keys/bfc5831da481f289-index.json
./keys/4fa53899faebf9d9-root.json
./keys/928bfc0ffaa29a53-timestamp.json
./keys/7b99f462f7931ade-snapshot.json
./keys/24f464d5e96770ca-pingcap.json
./1.prometheuse.json
./br-v6.5.10-linux-amd64.tar.gz
./root.json
./1.tidb.json
./1.tiup.json
./1.pd.json
./1.index.json
./1.root.json
./tikv-v6.5.0-linux-amd64.tar.gz
./pd-v6.5.0-linux-amd64.tar.gz
./cluster-v1.14.0-linux-amd64.tar.gz
./blackbox_exporter-v0.21.1-linux-amd64.tar.gz
./5.br.json
./br-v7.5.2-linux-amd64.tar.gz
./tiup-v1.14.0-linux-amd64.tar.gz
./1.node_exporter.json
./1.insight.json
./tiup-linux-amd64.tar.gz
./insight-v0.4.2-linux-amd64.tar.gz
./br-ee-v7.1.1-3-linux-amd64.tar.gz
./snapshot.json
./node_exporter-v1.3.1-linux-amd64.tar.gz
./tidb-v6.5.0-linux-amd64.tar.gz
./1.blackbox_exporter.json
./timestamp.json
Successfully set mirror to /home/tidb/tidb-repo
##### init temmeta mirror /home/tidb/tidb-repo finished #####
##### deploy_tiup /tem/run/.tem finished #####
##### install metadb started #####
Last login: Sat Oct 11 15:22:48 CST 2025
The component `prometheus` not found (may be deleted from repository); skipped
tiup is checking updates for component cluster ...
A new version of cluster is available:
   The latest version:         v1.14.0
   Local installed version:    
   Update current component:   tiup update cluster
   Update all components:      tiup update --all

The component `cluster` version  is not installed; downloading from repository.
Starting component `cluster`: /home/tidb/.temmeta/components/cluster/v1.14.0/tiup-cluster deploy tem_metadb v6.5.0 metadb_topology.yaml -u tidb -i /home/tidb/.ssh/id_rsa --yes

+ Detect CPU Arch Name
  - Detecting node 127.0.0.1 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 127.0.0.1 OS info ... Done
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v6.5.0 (linux/amd64) ... Done
  - Download tikv:v6.5.0 (linux/amd64) ... Done
  - Download tidb:v6.5.0 (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 127.0.0.1:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tidb -> 127.0.0.1 ... Done
  - Deploy node_exporter -> 127.0.0.1 ... Done
  - Deploy blackbox_exporter -> 127.0.0.1 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 127.0.0.1:2379 ... Done
  - Generate config tikv -> 127.0.0.1:20160 ... Done
  - Generate config tidb -> 127.0.0.1:4000 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 127.0.0.1 ... Done
  - Generate config blackbox_exporter -> 127.0.0.1 ... Done
Enabling component pd
        Enabling instance 127.0.0.1:2379
        Enable instance 127.0.0.1:2379 success
Enabling component tikv
        Enabling instance 127.0.0.1:20160
        Enable instance 127.0.0.1:20160 success
Enabling component tidb
        Enabling instance 127.0.0.1:4000
        Enable instance 127.0.0.1:4000 success
Enabling component node_exporter
        Enabling instance 127.0.0.1
        Enable 127.0.0.1 success
Enabling component blackbox_exporter
        Enabling instance 127.0.0.1
        Enable 127.0.0.1 success
Cluster `tem_metadb` deployed successfully, you can start it with command: `tiup cluster start tem_metadb --init`
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.temmeta/components/cluster/v1.14.0/tiup-cluster start tem_metadb
Starting cluster tem_metadb...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.temmeta/storage/cluster/clusters/tem_metadb/ssh/id_rsa, publicKey=/home/tidb/.temmeta/storage/cluster/clusters/tem_metadb/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 127.0.0.1:2379
        Start instance 127.0.0.1:2379 success
Starting component tikv
        Starting instance 127.0.0.1:20160
        Start instance 127.0.0.1:20160 success
Starting component tidb
        Starting instance 127.0.0.1:4000
        Start instance 127.0.0.1:4000 success
Starting component node_exporter
        Starting instance 127.0.0.1
        Start 127.0.0.1 success
Starting component blackbox_exporter
        Starting instance 127.0.0.1
        Start 127.0.0.1 success
+ [ Serial ] - UpdateTopology: cluster=tem_metadb
Started cluster `tem_metadb` successfully
##### install metadb finished #####
##### check env started: after metadb installation #####
assistant check  {"level":"debug","msg":"ssh"}
strategyName Check 
success for 127.0.0.1:4000
success
NetWorkStatus Start: 1760167432
NetWorkStatus End  : 1760167435
Net Status , OK
SSHCheck 0.0.0.0 authorized_keys
id_rsa
id_rsa.pub
 
success
TEM assistant check {"level":"debug","msg":"ssh"} strategyName Check success for 127.0.0.1:4000 success NetWorkStatus Start: 1760167432 NetWorkStatus End : 1760167435 Net Status , OK SSHCheck 0.0.0.0 authorized_keys id_rsa id_rsa.pub success
##### check env finished: after metadb installation #####
##### generate config /home/tidb/config.yaml started #####
assistant run  {"level":"debug","msg":"ssh"}
strategyName Install 
success for 127.0.0.1:4000
success
success
TEM assistant run {"level":"debug","msg":"ssh"} strategyName Install success for 127.0.0.1:4000 success success
##### assistant run  /home/tidb/config.yaml {"level":"debug","msg":"ssh"}
strategyName Install 
success for 127.0.0.1:4000
success
success finished #####
Detected shell: bash
Shell profile:  /home/tidb/.bash_profile
Last login: Sat Oct 11 15:23:25 CST 2025
tiup is checking updates for component tem ...
Starting component `tem`: /tem/run/.tem/components/tem/v3.1.0/tiup-tem tls-gen tem-servers
TLS certificate for cluster tem-servers generated
tiup is checking updates for component tem ...
Starting component `tem`: /tem/run/.tem/components/tem/v3.1.0/tiup-tem deploy tem-servers v3.1.0 /home/tidb/config.yaml -u tidb -i /home/tidb/.ssh/id_rsa --yes
+ Generate SSH keys ... Done
+ Download components
  - Download tem-server:v3.1.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 0.0.0.0:22 ... Done
+ Copy files
  - Copy tem-server -> 0.0.0.0 ... Done
Enabling component tem-server
        Enabling instance 0.0.0.0:8080
        Enable instance 0.0.0.0:8080 success
Cluster `tem-servers` deployed successfully, you can start it with command: `TIUP_HOME=/home/<user>/.tem tiup tem start tem-servers`, where user is defined in config.yaml. by default: `TIUP_HOME=/home/tidb/.tem tiup tem start tem-servers`
tiup is checking updates for component tem ...
Starting component `tem`: /tem/run/.tem/components/tem/v3.1.0/tiup-tem start tem-servers
Starting cluster tem-servers...
+ [ Serial ] - SSHKeySet: privateKey=/tem/run/.tem/storage/tem/clusters/tem-servers/ssh/id_rsa, publicKey=/tem/run/.tem/storage/tem/clusters/tem-servers/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=0.0.0.0
+ [ Serial ] - StartCluster
Starting cluster ComponentName tem-server...
Starting component tem-server
        Starting instance 0.0.0.0:8080
        Start instance 0.0.0.0:8080 success
Started tem `tem-servers` successfully
tiup is checking updates for component tem ...
Starting component `tem`: /tem/run/.tem/components/tem/v3.1.0/tiup-tem display tem-servers
Cluster type:       tem
Cluster name:       tem-servers
Cluster version:    v3.1.0
Deploy user:        tidb
SSH type:           builtin
WebServer URL:      
ID            Role        Host     Ports  OS/Arch       Status  Data Dir                   Deploy Dir
--            ----        ----     -----  -------       ------  --------                   ----------
0.0.0.0:8080  tem-server  0.0.0.0  8080   linux/x86_64  Up      /tem/data/tem-server-8080  /tem/run/tem-server-8080
Total nodes: 1
/home/tidb/.bash_profile has been modified to to add tiup to PATH
open a new terminal or source /home/tidb/.bash_profile to use it
Installed path: /usr/local/bin/tiup
=====================================================================
TEM service has been deployed on host <ip addresses> successfully, please use below
      command check the status of TEM service:
1. Switch user:  su - tidb
2. source /home/tidb/.bash_profile
3. Have a try:   TIUP_HOME=/tem/run/.tem tiup tem display tem-servers
====================================================================

4. Post-installation checks

4.1 Command-line check

Check the service status with TIUP_HOME=/tem/run/.tem tiup tem display tem-servers:

su - tidb

$ TIUP_HOME=/tem/run/.tem tiup tem display tem-servers
tiup is checking updates for component tem ...
Starting component `tem`: /tem/run/.tem/components/tem/v3.1.0/tiup-tem display tem-servers
Cluster type:       tem
Cluster name:       tem-servers
Cluster version:    v3.1.0
Deploy user:        tidb
SSH type:           builtin
WebServer URL:      
ID            Role        Host     Ports  OS/Arch       Status  Data Dir                   Deploy Dir
--            ----        ----     -----  -------       ------  --------                   ----------
0.0.0.0:8080  tem-server  0.0.0.0  8080   linux/x86_64  Up      /tem/data/tem-server-8080  /tem/run/tem-server-8080
Total nodes: 1

4.2 Web UI login check

http://<TEM deployment IP>:<port>/login

http://192.168.2.126:8080

The default TEM user is admin and the default password is admin.
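
Before opening a browser, you can also confirm from the shell that the TEM web service is answering on the deployment address; a quick check with curl, using the IP and port from the deployment above:

# Expect an HTTP status code such as 200 (or a redirect) if the TEM web service is up.
curl -sS -o /dev/null -w "%{http_code}\n" http://192.168.2.126:8080/login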

At this point, the TEM deployment is complete. It went smoothly and was genuinely simple and convenient. Kudos to TiDB.

III. 平凯数据库 Agile Mode: Installation Experience

Use TEM to quickly deploy 平凯数据库 in Agile Mode.

Reference: "【内测】平凯数据库敏捷模式 beta 版-部署指南及体验方向" (Feishu cloud document)

3.1 Configure Credentials

  1. Go to "Settings -> Credentials -> Hosts -> Add Credential".
  2. Fill in the SSH login credentials for the managed hosts / control machine and click "Confirm" to add them.
  3. Check that the credential was added successfully.

3.2 Import the Agile Mode Package into TEM

Download amd64.zip and import it into TEM.

3.2.1 Unpack the Agile Mode package

# unzip amd64.zip
# ls -1 | grep gz
tidb-ee-server-v7.1.8-5.2-20250630-linux-amd64.tar.gz    # TiDB server bundle
tidb-ee-toolkit-v7.1.8-5.2-20250630-linux-amd64.tar.gz   # companion toolkit bundle

3.2.2 Import the package into TEM

  1. Go to "Settings -> Component Management -> Add Component".
  2. Choose "Component Image".
  3. Choose local upload and upload the Agile Mode package just downloaded.

3.3 Configure the Control Machine

  1. Go to "Hosts -> Cluster Control Machines -> Add Control Machine".
  2. Fill in the control machine information.

3.4 Configure Cluster Hosts

  1. Go to "Hosts -> Hosts -> Add Shared Host".
  2. Fill in the host information, click "Preview", and once the preview looks right click "Confirm Add".

3.5 Create the Cluster

  1. Go to "Clusters -> Create Cluster".
  2. Set the basic cluster configuration:
     - Cluster name: your choice
     - Root user password: the database root password for this cluster; it is needed later by the in-cluster "SQL Editor" and "Data Flashback" features, so remember to save it
     - CPU architecture: the chip architecture of the deployment machines
     - Deployment user: the user that starts the deployed cluster; if the user specified here does not exist on the target machines, TEM will try to create it automatically
     - Cluster type: choose Agile Mode
     - Deployment mode: shared
     - Host spec: the default 4 C / 8 G
  3. After adding the required components, click the "Back to cluster node planning" button.

Note:
- PingKaiDB Fusion: required (node quota is limited to 10)
- Grafana: required (needed for the monitoring features)
- Prometheus and Alertmanager: required (needed for the alerting features)
- TiFlash: optional (add it if you want to test Agile Mode's HTAP capabilities)
- Pump and Drainer: not recommended

  4. Configure cluster parameters and alerts: the default parameter template and alert template are fine; click "Next".

  5. Preview the creation configuration and, once everything looks right, click the "Create" button to start the creation task.

  6. Detailed logs of the creation process are available via "View Details", or by opening the corresponding task in the "Task Center".

  7. The cluster is created and taken under management successfully.

3.6 Adjust Agile Mode Global Variables

After the Agile Mode deployment is complete, connect to the database through the TEM SQL editor or a MySQL client and run the following commands:

set global tidb_runtime_filter_mode=LOCAL;
set global tidb_opt_enable_mpp_shared_cte_execution=on;
set global tidb_rc_read_check_ts=on;
set global tidb_analyze_skip_column_types="json,blob,mediumblob,longblob,mediumtext,longtext";
set global tidb_enable_collect_execution_info=off;
set global tidb_enable_instance_plan_cache=on;
set global tidb_instance_plan_cache_max_size=2GiB;
set global tidbx_enable_tikv_local_call=on;
set global tidbx_enable_pd_local_call=on;
set global tidb_schema_cache_size=0;

-- Persisted to the whole cluster: no; this only affects the TiDB instance of the current connection
set global tidb_enable_slow_log=off;
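
To confirm a setting actually took effect, read it back from any session; a minimal sketch with the mysql client (host and credentials are placeholders for your environment):

# Query one of the variables set above; repeat for any others you care about.
mysql --host 127.0.0.1 --port 4000 --user root -p \
  -e "SHOW GLOBAL VARIABLES LIKE 'tidb_runtime_filter_mode';"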

At this point, the cluster is fully created.

IV. 平凯数据库 Agile Mode: Basic Features

4.1 Connect to TiDB

Connect via the mysql client:

mysql --port 4000 --user xxx --password
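
For reference, a fuller invocation plus a small sample table that the index examples in 4.5 and 4.6 can reuse; the host, credentials, and table definition here are illustrative assumptions rather than part of the original setup:

# Connect to the TiDB service deployed above and create a throwaway table in the default test database.
mysql --host 192.168.2.126 --port 4000 --user root -p \
  -e "CREATE TABLE IF NOT EXISTS test.test (id INT, name VARCHAR(64));"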

4.2 Create a User

CREATE USER 'test'@'127.0.0.1' IDENTIFIED BY 'xxx';

4.3 Grants

SHOW GRANTS FOR 'test'@'127.0.0.1';
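
SHOW GRANTS only lists what a user already has; to actually grant privileges, a minimal sketch via the mysql client (the test database and the privilege set are illustrative):

# Grant basic read/write on the test database to the user created in 4.2, then list the result.
mysql --host 192.168.2.126 --port 4000 --user root -p \
  -e "GRANT SELECT, INSERT, UPDATE, DELETE ON test.* TO 'test'@'127.0.0.1'; SHOW GRANTS FOR 'test'@'127.0.0.1';"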

4.4 Drop the User

DROP USER 'test'@'127.0.0.1';

4.5 Add an Index

alter table test add index idx_id (id);

4.6 Drop an Index

alter table test drop index idx_id;
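
To verify the index changes from 4.5 and 4.6, you can list the table's indexes before and after; a quick check using the same illustrative connection parameters as above:

# SHOW INDEX lists idx_id after ALTER ... ADD INDEX and omits it after ALTER ... DROP INDEX.
mysql --host 192.168.2.126 --port 4000 --user root -p -e "SHOW INDEX FROM test.test;"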

V. 平凯数据库 Agile Mode: Data Migration Experience

5.1 Install TiUP on the Control Machine

# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 4785k  100 4785k    0     0  5590k      0 --:--:-- --:--:-- --:--:-- 5597k
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

# Reload the shell profile so tiup is on PATH (the installer updated /root/.bash_profile above)
source ~/.bash_profile
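
A quick sanity check that TiUP is available before continuing (the version reported will depend on what the mirror installed):

# Confirm the binary is on PATH and report its version.
which tiup
tiup --version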

5.2 Install the TiUP DM Components

# tiup install dm dmctl
download https://tiup-mirrors.pingcap.com/dm-v1.16.3-linux-amd64.tar.gz 9.03 MiB / 9.03 MiB 100.00% 44.48 MiB/s                                                     
download https://tiup-mirrors.pingcap.com/dmctl-v8.5.3-linux-amd64.tar.gz 75.63 MiB / 75.63 MiB 100.00% 14.75 MiB/s   
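
The dmctl commands in the following steps need a running DM cluster to talk to (the ${advertise-addr} placeholder is the DM-master address). If you do not have one yet, a minimal sketch of deploying a single DM cluster with TiUP; the cluster name dm-test, the version, and the SSH key path are assumptions to adapt:

# Generate a topology template, then edit master_servers / worker_servers hosts before deploying.
tiup dm template > dm-topology.yaml
tiup dm deploy dm-test v8.5.3 ./dm-topology.yaml --user root -i ~/.ssh/id_rsa
tiup dm start dm-test
tiup dm display dm-test    # the DM-master address listed here is what ${advertise-addr} refers to below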

5.3 Create a Data Source

First, create a source1.yaml file with the following content:

# A unique name; must not be duplicated.
source-id: "mysql-01"

# Whether DM-worker pulls binlogs using the global transaction identifier (GTID). This requires GTID mode to be enabled on the upstream MySQL. If the upstream can switch between primary and replica automatically, GTID mode is mandatory.

enable-gtid: true
from:
  host: "${host}" # e.g. 172.16.10.81
  user: "root"
  password: "${password}" # plaintext passwords are supported but not recommended; prefer encrypting them with dmctl encrypt
  port: 3306

Next, run the following command in a terminal to load the data source configuration into the DM cluster with tiup dmctl:

tiup dmctl --master-addr ${advertise-addr} operate-source create source1.yaml
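
You can confirm the source was registered before moving on; for example:

# List the data sources known to the DM cluster; mysql-01 should appear in the output.
tiup dmctl --master-addr ${advertise-addr} operate-source show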

5.4 Create a Migration Task

Create a task1.yaml file with the following content:

# Task name; tasks running at the same time must not share a name.
name: "test"
# Task mode. Possible values:
# full: full data migration only
# incremental: binlog replication only
# all: full migration plus binlog replication
task-mode: "all"
# Downstream TiDB connection information.
target-database:
  host: "${host}"                   # e.g. 172.16.10.83
  port: 4000
  user: "root"
  password: "${password}"           # plaintext passwords are supported but not recommended; prefer encrypting them with dmctl encrypt

# Configuration of all upstream MySQL instances needed by this migration task.
mysql-instances:
-
  # ID of the upstream instance or replication group.
  source-id: "mysql-01"
  # Name of the block/allow-list entry for the schemas and tables to migrate, referencing the global `block-allow-list` configuration below.
  block-allow-list: "listA"

# Global block/allow-list configuration, referenced by name from each instance.
block-allow-list:
  listA:                              # name
    do-tables:                        # allow list of upstream tables to migrate
    - db-name: "test_db"              # schema of the table to migrate
      tbl-name: "test_table"          # name of the table to migrate

5.5 Start the Task


# Check the task configuration
tiup dmctl --master-addr ${advertise-addr} check-task task1.yaml

# Start the task
tiup dmctl --master-addr ${advertise-addr} start-task task1.yaml

5.6 Check the Task Status

tiup dmctl --master-addr ${advertise-addr} query-status ${task-name}
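
Besides query-status, dmctl can also pause, resume, and stop a task; for example, using the task name defined in 5.4:

# Pause, resume, or tear down the migration task as needed.
tiup dmctl --master-addr ${advertise-addr} pause-task test
tiup dmctl --master-addr ${advertise-addr} resume-task test
tiup dmctl --master-addr ${advertise-addr} stop-task test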

The 平凯数据库 migration tooling makes data migration much easier and more controllable, and greatly simplifies a DBA's work.

VI. TEM Usability

TEM provides a graphical web management console, allowing users to handle all of the following from a single web UI:

1. TiDB cluster deployment / takeover / upgrade

2. Parameter configuration

3. Node scale-out and scale-in

4. Comprehensive monitoring and alerting

5. Automated backup policies

6. Fault self-healing and performance diagnostics, scheduled task execution, and real-time visualization of server/cluster resource usage such as CPU, memory, and disk I/O

7. A unified database resource pool, doing away with switching back and forth between multiple clusters and components and with complex command lines, making operations in large-scale, multi-cluster scenarios simpler, safer, and more automated

6.1 Monitoring

It can monitor database time, application connections, SQL load, latency breakdown, TiDB component resources, cluster host resources, and more.


6.2 Diagnostics

It can report slow SQL and supports analysis and auditing of SQL statements.

6.3 Alerting

The alerting features are also quite complete. You can define your own alert rules and notification channels to suit your monitoring needs.


6.4 Inspection

The built-in inspection items are fairly complete and can be used for routine inspection tasks.


6.5 Backup

Backups can be taken at the database and table level. For backup media you can choose S3 object storage or NFS (NAS).

Restore is equally complete: you can restore the entire cluster, restore to a specified point in time, or restore specific databases and tables.

6.6 Cluster Information

You can view the cluster's instance information, host resources, disk resources, storage topology, and more.


In short, TEM has many more features; feel free to explore them and let them free DBAs from routine operations work.

In addition, the TiDB Dashboard also provides many graphical management features (such as cluster diagnostics, log search and analysis, resource control, and advanced debugging); they are very handy and well worth trying out.


Copyright notice: This is an original article by a TiDB community user and is licensed under CC BY-NC-SA 4.0. When reposting, please include a link to the original and this notice.
