
TiDB v7.4 First Experience

By 哈喽沃德, posted 2024-01-08

1. Background

With the company's business growing and the broader trend toward domestically developed databases, we began a database selection exercise, weighing the following factors:

  1. Data security and compliance: with information-security awareness rising, our requirements here keep increasing. A domestic database that satisfies the relevant national laws and regulations lowers the risk of data leaks and compliance violations.
  2. Performance and scalability: data volume and traffic will keep growing with the business, so the database must perform well and scale out to handle high concurrency, large-scale storage, and complex queries.
  3. Technical support and ecosystem: a domestic database with a reliable support team and a mature ecosystem can deliver timely help and solutions, so problems get resolved quickly.
  4. Cost effectiveness: compared with foreign commercial databases, domestic ones tend to be priced more competitively and avoid cross-border purchasing and maintenance costs.

In short, the selection needs to balance data security, performance, technical support, and cost effectiveness to land on a competitive database that fits our needs. After studying TiDB for a while and finding it checked all these boxes, we gave it a first try. This article covers two approaches: running TiDB in playground mode, and deploying a cluster on a single machine.

2. Installing TiUP Online

1. Set the environment variable. If you skip this, TiUP installs to /root/.tiup by default:

[root@dmdca /]# export TIUP_HOME=/data
[root@dmdca /]# echo $TIUP_HOME
/data

2. Install the TiUP tool with the following command:

[root@dmdca /]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7385k  100 7385k    0     0  7363k      0  0:00:01  0:00:01 --:--:-- 7363k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /data/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /data/bin/tiup

3. Load the TiUP environment variables:

[root@dmdca /]# source .bash_profile
-bash: .bash_profile: No such file or directory
[root@dmdca /]# source /root/.bash_profile

4. Confirm the TiUP tool is installed:

[root@dmdca /]# which tiup
/data/bin/tiup

5. Install the TiUP cluster component:

[root@dmdca /]# tiup cluster
tiup is checking updates for component cluster ...
A new version of cluster is available:
   The latest version:         v1.13.1
   Local installed version:
   Update current component:   tiup update cluster
   Update all components:      tiup update --all

The component cluster version is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.13.1-linux-amd64.tar.gz 8.74 MiB / 8.74 MiB 100.00% 15.06 MiB/s
Starting component cluster: /data/components/cluster/v1.13.1/tiup-cluster
Deploy a TiDB cluster for production

Usage: tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  rotatessh   rotate ssh keys on all nodes
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.

6. If it is already installed, update the TiUP cluster component to the latest version:

tiup update --self && tiup update cluster

The expected output includes "Update successfully!".

Verify the current TiUP cluster version:

[root@dmdca /]# tiup --binary cluster
/data/components/cluster/v1.13.1/tiup-cluster
[root@dmdca /]#

3. Running in Playground Mode

A playground run this way is ephemeral: when the test session ends, TiUP wipes the cluster's data, and re-running the command gives a brand-new cluster. To persist data, run the playground with TiUP's --tag parameter: tiup --tag <your-tag> playground

1. Running tiup playground directly starts the latest TiDB version, with one instance each of TiDB, TiKV, PD, and TiFlash:

tiup playground

2. You can also specify the TiDB version and the number of instances of each component, for example:

tiup playground v7.1.1 --db 2 --pd 3 --kv 3

3. Specify a host: without --host, the playground is reachable only from 127.0.0.1; binding 0.0.0.0 removes that restriction.

tiup playground --host 0.0.0.0

[root@dmdca /]# tiup playground --host 0.0.0.0
tiup is checking updates for component playground ...
Starting component playground: /data/components/playground/v1.13.1/tiup-playground --host 0.0.0.0
Using the version v7.4.0 for version constraint "".

If you'd like to use a TiDB version other than v7.4.0, cancel and retry with the following arguments:
    Specify version manually:   tiup playground <version>
    Specify version range:      tiup playground ^5
    The nightly version:        tiup playground nightly

Start pd instance: v7.4.0
Start tikv instance: v7.4.0
Start tidb instance: v7.4.0
Waiting for tidb instances ready
172.16.60.94:4000 ... Done
Start tiflash instance: v7.4.0
tiflash quit: signal: segmentation fault
Waiting for tiflash instances ready
172.16.60.94:3930 ... Error

🎉 TiDB Playground Cluster is started, enjoy!

Connect TiDB:    mysql --comments --host 172.16.60.94 --port 4000 -u root
TiDB Dashboard:  http://172.16.60.94:2379/dashboard
Grafana:         http://0.0.0.0:3000

4. Open a new session to access the TiDB database. Connect with the TiUP client:

tiup client

[root@dmdca bin]# tiup client
Please check for root manifest file, you may download one from the repository mirror, or try `tiup mirror set` to force reset it.
Error: initial repository from mirror(https://tiup-mirrors.pingcap.com/) failed: error loading manifest root.json: open /root/.tiup/bin/root.json: no such file or directory
[root@dmdca bin]# cd
[root@dmdca ~]# echo $TIUP_HOME

[root@dmdca ~]# pwd
/root
[root@dmdca ~]# vi .bash_profile

Add the environment variable export TIUP_HOME=/data; after the change the file looks like this:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

export TIUP_HOME=/data
PATH=$PATH:$HOME/bin

export PATH
export PATH=/data/bin:$PATH

[root@dmdca ~]# source .bash_profile
[root@dmdca ~]# tiup client
tiup is checking updates for component client ...
A new version of client is available:
   The latest version:         v1.13.1
   Local installed version:
   Update current component:   tiup update client
   Update all components:      tiup update --all

The component client version is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/client-v1.13.1-linux-amd64.tar.gz 4.81 MiB / 4.81 MiB 100.00% 19.43 MiB/s
Starting component client: /data/components/client/v1.13.1/tiup-client
Connected with driver mysql (8.0.11-TiDB-v7.4.0)
Type "help" for help.

my:root@172.16.60.94:4000=> show databases;
      Database
--------------------
 INFORMATION_SCHEMA
 METRICS_SCHEMA
 PERFORMANCE_SCHEMA
 mysql
 test
(6 rows)

my:root@172.16.60.94:4000=> use test;
USE
my:root@172.16.60.94:4000=> show tables;
 Tables_in_test
----------------
 t_test
(1 row)

my:root@172.16.60.94:4000=> select * from t_test;
 f_id |     f_name
------+-----------------
    1 | 测试修改内容
    2 | 测试修改内容
    4 | test中文测试123
    5 | test中文测试123
    6 | test中文测试123
(5 rows)

my:root@172.16.60.94:4000=> quit

You can also connect to TiDB with a plain MySQL client:

mysql --host 127.0.0.1 --port 4000 -u root

[root@dmdca ~]# mysql --host 127.0.0.1 --port 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 845152332
Server version: 8.0.11-TiDB-v7.4.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
6 rows in set (0.024 sec)

MySQL [(none)]> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [test]> show tables;
+------------------+
| Tables_in_hdtydb |
+------------------+
| t_test           |
+------------------+
1 row in set (0.001 sec)

MySQL [test]> select * from t_test;
+------+---------------------+
| f_id | f_name              |
+------+---------------------+
|    1 | 测试修改内容        |
|    2 | 测试修改内容        |
|    4 | test中文测试123     |
|    5 | test中文测试123     |
|    6 | test中文测试123     |
+------+---------------------+
5 rows in set (0.003 sec)

MySQL [hdtydb]> quit
Bye
[root@dmdca ~]#

5. Access TiDB's Prometheus management UI at http://127.0.0.1:9090. (You are prompted for the operating-system user and password here.)

6. Access the TiDB Dashboard at http://127.0.0.1:2379/dashboard; the default username is root with an empty password.

7. Access TiDB's Grafana UI at http://127.0.0.1:3000; the default username and password are both admin.

8. (Optional) Load data into TiFlash for analytics.

9. After testing, clean up the cluster: press Ctrl+C to stop the TiDB services started above, wait for the shutdown to finish, then run:

tiup clean --all

4. Simulating a Production Cluster Deployment on a Single Machine

1. Scenario: you want to experience the smallest complete TiDB topology on a single Linux server, while walking through the production deployment procedure. This section shows how to deploy a TiDB cluster from TiUP's minimal-topology YAML file.

2. Prepare the environment. Before deploying, prepare one deployment host whose software meets the requirements:

- CentOS 7.3 or later is recommended
- The environment has Internet access, for downloading TiDB and the related packages

The minimal TiDB cluster topology contains the following instances:

Instance  Count  IP            Configuration
TiKV      3      172.16.60.94  Avoid port and directory conflicts (three instances on the same IP)
TiDB      1      172.16.60.94  Default ports, global directory configuration
PD        1      172.16.60.94  Default ports, global directory configuration
TiFlash   1      172.16.60.94  Default ports, global directory configuration
Monitor   1      172.16.60.94  Default ports, global directory configuration

3. The deployment host's software and environment requirements:

- Deployment uses the host's root user and password
- The host's firewall is disabled, or the ports the TiDB cluster nodes need are open
- TiUP Cluster currently supports deploying TiDB on x86_64 (AMD64) and ARM
- On AMD64, CentOS 7.3 or later is recommended
- On ARM, CentOS 7.6 1810 is recommended

4. Carry out the deployment. Note: you can log in to the host as any normal user or as root; the steps below use root.

Download and install TiUP:

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

Declare the global environment variables. Note: after installation, TiUP prints the absolute path of the corresponding shell profile file. Replace ${your_shell_profile} with that actual path before running:

source ${your_shell_profile}

Install TiUP's cluster component:

tiup cluster

If TiUP cluster is already installed on the machine, update it:

tiup update --self && tiup update cluster

Because this simulates a multi-machine deployment, raise sshd's connection limit as root: edit /etc/ssh/sshd_config and set MaxSessions to 20, then restart sshd:

service sshd restart

Create and start the cluster. Following the template below, edit a configuration file named topo.yaml, where:

- user: "tidb": the cluster is managed internally via the tidb system user (created automatically during deployment), which by default logs in to the target machine over SSH on port 22
- replication.enable-placement-rules: this PD parameter is set to ensure TiFlash runs properly
- host: the IP of this deployment host
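The MaxSessions step above is easy to script. A minimal sketch, assuming GNU sed and the stock commented-out `#MaxSessions 10` line; it rehearses the edit on a scratch file, and you would point `cfg` at /etc/ssh/sshd_config to apply it for real:

```shell
# Rehearse the MaxSessions rewrite on a scratch copy of sshd_config.
cfg=$(mktemp)
printf '#MaxSessions 10\n' > "$cfg"

# Uncomment the directive if present and raise the limit to 20 (GNU sed).
sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' "$cfg"

grep '^MaxSessions' "$cfg"   # MaxSessions 20
```

Against the real /etc/ssh/sshd_config, follow the edit with `service sshd restart` as in the steps above.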

The configuration template is as follows:

# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data/tidb-deploy"
  data_dir: "/data/tidb-data"

# Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    instance.tidb_slow_log_threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 172.16.60.94

tidb_servers:
  - host: 172.16.60.94

tikv_servers:
  - host: 172.16.60.94
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 172.16.60.94
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 172.16.60.94
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 172.16.60.94

monitoring_servers:
  - host: 172.16.60.94

grafana_servers:
  - host: 172.16.60.94

Run the cluster deploy command:

tiup cluster deploy <cluster-name> <version> ./topo.yaml --user root -p

- <cluster-name>: sets the cluster name
- <version>: sets the cluster version, for example v7.1.1; run tiup list tidb to see the TiDB versions currently available for deployment
- -p: log in to the target machine with a password

Note: if the host uses key-based SSH authentication, point -i at the key file instead; -i and -p cannot be used together.
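With all instances sharing one host, a duplicated `port:` in topo.yaml is the easiest mistake to make before deploying. A naive pre-flight check (pure text processing, assuming `port:` lines formatted as in the template above):

```shell
# Print any port number that appears more than once in topo.yaml-style input.
dup_ports() {
    grep -E '^[[:space:]]*port:' | awk '{print $2}' | sort | uniq -d
}

# The three TiKV ports from the template are distinct, so this prints nothing:
printf '    port: 20160\n    port: 20161\n    port: 20162\n' | dup_ports

# A collision is reported:
printf '    port: 20160\n    port: 20160\n' | dup_ports   # 20160
```

On the real file: `dup_ports < topo.yaml`. The deploy confirmation screen also asks you to verify there are no port/directory conflicts on the same host; this is just a quicker local check.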

Follow the prompts, entering "y" and the root password, to complete the deployment:

Do you want to continue? [y/N]: y
Input SSH password:

[root@dmdca /]# pwd
/
[root@dmdca /]# vi topo.yaml
[root@dmdca /]# tiup cluster deploy tidb-cluster v7.4.0 topo.yaml --user root -p
tiup is checking updates for component cluster ...
Starting component cluster: /data/components/cluster/v1.13.1/tiup-cluster deploy tidb-cluster v7.4.0 topo.yaml --user root -p
Input SSH password:

+ Detect CPU Arch Name
  - Detecting node 172.16.60.94 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 172.16.60.94 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-cluster
Cluster version: v7.4.0
Role        Host          Ports                            OS/Arch       Directories
----        ----          -----                            -------       -----------
pd          172.16.60.94  2379/2380                        linux/x86_64  /data/tidb-deploy/pd-2379,/data/tidb-data/pd-2379
tikv        172.16.60.94  20160/20180                      linux/x86_64  /data/tidb-deploy/tikv-20160,/data/tidb-data/tikv-20160
tikv        172.16.60.94  20161/20181                      linux/x86_64  /data/tidb-deploy/tikv-20161,/data/tidb-data/tikv-20161
tikv        172.16.60.94  20162/20182                      linux/x86_64  /data/tidb-deploy/tikv-20162,/data/tidb-data/tikv-20162
tidb        172.16.60.94  4000/10080                       linux/x86_64  /data/tidb-deploy/tidb-4000
tiflash     172.16.60.94  9000/8123/3930/20170/20292/8234  linux/x86_64  /data/tidb-deploy/tiflash-9000,/data/tidb-data/tiflash-9000
prometheus  172.16.60.94  9090/12020                       linux/x86_64  /data/tidb-deploy/prometheus-9090,/data/tidb-data/prometheus-9090
grafana     172.16.60.94  3000                             linux/x86_64  /data/tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y

  • Generate SSH keys ... Done
  • Download TiDB components
    • Download pd:v7.4.0 (linux/amd64) ... Done
    • Download tikv:v7.4.0 (linux/amd64) ... Done
    • Download tidb:v7.4.0 (linux/amd64) ... Done
    • Download tiflash:v7.4.0 (linux/amd64) ... Done
    • Download prometheus:v7.4.0 (linux/amd64) ... Done
    • Download grafana:v7.4.0 (linux/amd64) ... Done
    • Download node_exporter: (linux/amd64) ... Done
    • Download blackbox_exporter: (linux/amd64) ... Done
  • Initialize target host environments
    • Prepare 172.16.60.94:22 ... Done
  • Deploy TiDB instance
    • Copy pd -> 172.16.60.94 ... Done
    • Copy tikv -> 172.16.60.94 ... Done
    • Copy tikv -> 172.16.60.94 ... Done
    • Copy tikv -> 172.16.60.94 ... Done
    • Copy tidb -> 172.16.60.94 ... Done
    • Copy tiflash -> 172.16.60.94 ... Done
    • Copy prometheus -> 172.16.60.94 ... Done
    • Copy grafana -> 172.16.60.94 ... Done
    • Deploy node_exporter -> 172.16.60.94 ... Done
    • Deploy blackbox_exporter -> 172.16.60.94 ... Done
  • Copy certificate to remote host
  • Init instance configs
    • Generate config pd -> 172.16.60.94:2379 ... Done
    • Generate config tikv -> 172.16.60.94:20160 ... Done
    • Generate config tikv -> 172.16.60.94:20161 ... Done
    • Generate config tikv -> 172.16.60.94:20162 ... Done
    • Generate config tidb -> 172.16.60.94:4000 ... Done
    • Generate config tiflash -> 172.16.60.94:9000 ... Done
    • Generate config prometheus -> 172.16.60.94:9090 ... Done
    • Generate config grafana -> 172.16.60.94:3000 ... Done
  • Init monitor configs
    • Generate config node_exporter -> 172.16.60.94 ... Done
    • Generate config blackbox_exporter -> 172.16.60.94 ... Done Enabling component pd Enabling instance 172.16.60.94:2379 Enable instance 172.16.60.94:2379 success Enabling component tikv Enabling instance 172.16.60.94:20162 Enabling instance 172.16.60.94:20160 Enabling instance 172.16.60.94:20161 Enable instance 172.16.60.94:20160 success Enable instance 172.16.60.94:20161 success Enable instance 172.16.60.94:20162 success Enabling component tidb Enabling instance 172.16.60.94:4000 Enable instance 172.16.60.94:4000 success Enabling component tiflash Enabling instance 172.16.60.94:9000 Enable instance 172.16.60.94:9000 success Enabling component prometheus Enabling instance 172.16.60.94:9090 Enable instance 172.16.60.94:9090 success Enabling component grafana Enabling instance 172.16.60.94:3000 Enable instance 172.16.60.94:3000 success Enabling component node_exporter Enabling instance 172.16.60.94 Enable 172.16.60.94 success Enabling component blackbox_exporter Enabling instance 172.16.60.94 Enable 172.16.60.94 success Cluster tidb-cluster deployed successfully, you can start it with command: tiup cluster start tidb-cluster --init

Start the cluster:

tiup cluster start <cluster-name>

[root@dmdca log]# tiup cluster start tidb-cluster
tiup is checking updates for component cluster ...
Starting component cluster: /data/components/cluster/v1.13.1/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...

  • [ Serial ] - SSHKeySet: privateKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 172.16.60.94:2379
        Start instance 172.16.60.94:2379 success
Starting component tikv
        Starting instance 172.16.60.94:20160
        Starting instance 172.16.60.94:20162
        Starting instance 172.16.60.94:20161
        Start instance 172.16.60.94:20160 success
        Start instance 172.16.60.94:20161 success
        Start instance 172.16.60.94:20162 success
Starting component tidb
        Starting instance 172.16.60.94:4000
        Start instance 172.16.60.94:4000 success
Starting component tiflash
        Starting instance 172.16.60.94:9000
        Start instance 172.16.60.94:9000 success
Starting component prometheus
        Starting instance 172.16.60.94:9090
        Start instance 172.16.60.94:9090 success
Starting component grafana
        Starting instance 172.16.60.94:3000
        Start instance 172.16.60.94:3000 success
Starting component node_exporter
        Starting instance 172.16.60.94
        Start 172.16.60.94 success
Starting component blackbox_exporter
        Starting instance 172.16.60.94
        Start 172.16.60.94 success
+ [ Serial ] - UpdateTopology: cluster=tidb-cluster
Started cluster tidb-cluster successfully
[root@dmdca log]#

Start-up produced the errors below; retrying a few times eventually succeeded. The likely cause is that memory is too small: 4 GB in total, with over 6 GB of swap already in use and only around 600 MB of memory actually usable.

[root@dmdca log]# free -m
              total        used        free      shared  buff/cache   available
Mem:           4675        3904         125          50         645         437
Swap:          8191        6571        1620
[root@dmdca log]# free -g
              total        used        free      shared  buff/cache   available
Mem:              4           3           0           0           0           0
Swap:             7           6           1
[root@dmdca log]#
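Since the failures track memory pressure, it can help to warn before each retry when the box is short on memory. A sketch; the 1 GiB threshold is an arbitrary choice for illustration, not a TiDB requirement:

```shell
# low_mem KB — succeed when the given available-memory figure (in kB)
# is below an arbitrary 1 GiB threshold.
low_mem() {
    [ "$1" -lt 1048576 ]
}

avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
if low_mem "$avail_kb"; then
    echo "only ${avail_kb} kB available; cluster start may time out"
fi
```

Run the check before `tiup cluster start <cluster-name>` to know in advance whether a timeout is plausible.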

[root@dmdca /]# tiup cluster start tidb-cluster
tiup is checking updates for component cluster ...
Starting component cluster: /data/components/cluster/v1.13.1/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...

  • [ Serial ] - SSHKeySet: privateKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 172.16.60.94:2379
        Start instance 172.16.60.94:2379 success
Starting component tikv
        Starting instance 172.16.60.94:20162
        Starting instance 172.16.60.94:20160
        Starting instance 172.16.60.94:20161
        Start instance 172.16.60.94:20160 success
        Start instance 172.16.60.94:20162 success

Error: failed to start tikv: failed to start: 172.16.60.94 tikv-20161.service, please check the instance's log(/data/tidb-deploy/tikv-20161/log) for more detail.: timed out waiting for port 20161 to be started after 2m0s

Verbose debug logs has been written to /data/logs/tiup-cluster-debug-2023-10-24-12-28-12.log.

[root@dmdca log]# tiup cluster start tidb-cluster
tiup is checking updates for component cluster ...
Starting component cluster: /data/components/cluster/v1.13.1/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...

  • [ Serial ] - SSHKeySet: privateKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
  • [Parallel] - UserSSH: user=tidb, host=172.16.60.94
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 172.16.60.94:2379
        Start instance 172.16.60.94:2379 success
Starting component tikv
        Starting instance 172.16.60.94:20162
        Starting instance 172.16.60.94:20160
        Starting instance 172.16.60.94:20161
        Start instance 172.16.60.94:20160 success
        Start instance 172.16.60.94:20162 success
        Start instance 172.16.60.94:20161 success
Starting component tidb
        Starting instance 172.16.60.94:4000
        Start instance 172.16.60.94:4000 success
Starting component tiflash
        Starting instance 172.16.60.94:9000
        Start instance 172.16.60.94:9000 success
Starting component prometheus
        Starting instance 172.16.60.94:9090
        Start instance 172.16.60.94:9090 success
Starting component grafana
        Starting instance 172.16.60.94:3000

Error: failed to start grafana: failed to start: 172.16.60.94 grafana-3000.service, please check the instance's log(/data/tidb-deploy/grafana-3000/log) for more detail.: executor.ssh.execute_failed: Failed to execute command over SSH for 'tidb@172.16.60.94:22' {ssh_stderr: , ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin /usr/bin/sudo -H bash -c "systemctl daemon-reload && systemctl start grafana-3000.service"}, cause: Run Command Timeout

Verbose debug logs has been written to /data/logs/tiup-cluster-debug-2023-10-24-14-14-31.log.

Access the cluster:

Install a MySQL client (skip this step if one is already installed):

yum -y install mysql

Access the TiDB database; the password is empty:

mysql -h 172.16.60.94 -P 4000 -u root

[root@dmdca ~]# mysql -h 172.16.60.94 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2401239046
Server version: 8.0.11-TiDB-v7.4.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.001 sec)

MySQL [(none)]>

Access TiDB's Grafana monitoring: open http://{grafana-ip}:3000; the default username and password are both admin. Here: http://172.16.60.94:3000/login


Access the TiDB Dashboard: open http://{pd-ip}:2379/dashboard; the default username is root with an empty password. Here: http://172.16.60.94:2379/dashboard/#/signin


Access TiDB's Prometheus management UI at http://172.16.60.94:9090. (You are prompted for the operating-system user and password here.)


Run the following command to list the clusters deployed so far:

tiup cluster list

[root@dmdca ~]# tiup cluster list
tiup is checking updates for component cluster ...
Starting component cluster: /data/components/cluster/v1.13.1/tiup-cluster list
Name          User  Version  Path                                         PrivateKey
----          ----  -------  ----                                         ----------
tidb-cluster  tidb  v7.4.0   /data/storage/cluster/clusters/tidb-cluster  /data/storage/cluster/clusters/tidb-cluster/ssh/id_rsa
[root@dmdca ~]#

Run the following command to view the cluster's topology and status:

tiup cluster display <cluster-name>

[root@dmdca ~]# tiup cluster display tidb-cluster
tiup is checking updates for component cluster ...
Starting component cluster: /data/components/cluster/v1.13.1/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v7.4.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.60.94:2379/dashboard
Grafana URL:        http://172.16.60.94:3000
ID                  Role        Host          Ports                            OS/Arch       Status   Data Dir                         Deploy Dir
--                  ----        ----          -----                            -------       ------   --------                         ----------
172.16.60.94:3000   grafana     172.16.60.94  3000                             linux/x86_64  Up       -                                /data/tidb-deploy/grafana-3000
172.16.60.94:2379   pd          172.16.60.94  2379/2380                        linux/x86_64  Up|L|UI  /data/tidb-data/pd-2379          /data/tidb-deploy/pd-2379
172.16.60.94:9090   prometheus  172.16.60.94  9090/12020                       linux/x86_64  Down     /data/tidb-data/prometheus-9090  /data/tidb-deploy/prometheus-9090
172.16.60.94:4000   tidb        172.16.60.94  4000/10080                       linux/x86_64  Up       -                                /data/tidb-deploy/tidb-4000
172.16.60.94:9000   tiflash     172.16.60.94  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /data/tidb-data/tiflash-9000     /data/tidb-deploy/tiflash-9000
172.16.60.94:20160  tikv        172.16.60.94  20160/20180                      linux/x86_64  Up       /data/tidb-data/tikv-20160       /data/tidb-deploy/tikv-20160
172.16.60.94:20161  tikv        172.16.60.94  20161/20181                      linux/x86_64  Up       /data/tidb-data/tikv-20161       /data/tidb-deploy/tikv-20161
172.16.60.94:20162  tikv        172.16.60.94  20162/20182                      linux/x86_64  Up       /data/tidb-data/tikv-20162       /data/tidb-deploy/tikv-20162
Total nodes: 8
[root@dmdca ~]#

Prometheus shows as Down. None of the following attempts brought it up:

[root@dmdca log]# tiup cluster start tidb-cluster
[root@dmdca log]# tiup cluster start tidb-cluster -R prometheus
[root@dmdca log]# tiup cluster start tidb-cluster -R pd
[root@dmdca log]# tiup cluster restart tidb-cluster

[root@dmdca log]# pwd
/data/tidb-deploy/prometheus-9090/log
[root@dmdca log]# ll
total 38364
-rw-r--r-- 1 tidb tidb     3885 Oct 25 08:53 docdb.log
-rw-r--r-- 1 tidb tidb  1191767 Oct 25 11:02 ng.log
-rw-r--r-- 1 tidb tidb  5467040 Oct 25 11:03 prometheus.log
-rw-r--r-- 1 tidb tidb        0 Oct 24 14:09 service.log
-rw-r--r-- 1 tidb tidb 32573530 Oct 25 11:02 tsdb.log
[root@dmdca log]# tail -f prometheus.log
level=info ts=2023-10-25T03:02:46.353Z caller=web.go:540 component=web msg="Start listening for connections" address=:9090
level=error ts=2023-10-25T03:02:46.353Z caller=main.go:632 msg="Unable to start web listener" err="listen tcp :9090: bind: address already in use"
level=warn ts=2023-10-25T03:03:01.586Z caller=main.go:377 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2023-10-25T03:03:01.586Z caller=main.go:426 msg="Starting Prometheus" version="(version=2.27.1, branch=HEAD, revision=db7f0bcec27bd8aeebad6b08ac849516efa9ae02)"
level=info ts=2023-10-25T03:03:01.586Z caller=main.go:431 build_context="(go=go1.16.4, user=root@fd804fbd4f25, date=20210518-14:17:54)"
level=info ts=2023-10-25T03:03:01.586Z caller=main.go:432 host_details="(Linux 4.19.90-24.4.v2101.ky10.x86_64 #1 SMP Mon May 24 12:14:55 CST 2021 x86_64 dmdca (none))"
level=info ts=2023-10-25T03:03:01.586Z caller=main.go:433 fd_limits="(soft=1000000, hard=1000000)"
level=info ts=2023-10-25T03:03:01.587Z caller=main.go:434 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2023-10-25T03:03:01.589Z caller=web.go:540 component=web msg="Start listening for connections" address=:9090
level=error ts=2023-10-25T03:03:01.590Z caller=main.go:632 msg="Unable to start web listener" err="listen tcp :9090: bind: address already in use"
level=warn ts=2023-10-25T03:03:16.835Z caller=main.go:377 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2023-10-25T03:03:16.835Z caller=main.go:426 msg="Starting Prometheus" version="(version=2.27.1, branch=HEAD, revision=db7f0bcec27bd8aeebad6b08ac849516efa9ae02)"
level=info ts=2023-10-25T03:03:16.835Z caller=main.go:431 build_context="(go=go1.16.4, user=root@fd804fbd4f25, date=20210518-14:17:54)"
level=info ts=2023-10-25T03:03:16.835Z caller=main.go:432 host_details="(Linux 4.19.90-24.4.v2101.ky10.x86_64 #1 SMP Mon May 24 12:14:55 CST 2021 x86_64 dmdca (none))"
level=info ts=2023-10-25T03:03:16.835Z caller=main.go:433 fd_limits="(soft=1000000, hard=1000000)"
level=info ts=2023-10-25T03:03:16.835Z caller=main.go:434 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2023-10-25T03:03:16.838Z caller=web.go:540 component=web msg="Start listening for connections" address=:9090
level=error ts=2023-10-25T03:03:16.838Z caller=main.go:632 msg="Unable to start web listener" err="listen tcp :9090: bind: address already in use"

Prometheus keeps failing with "bind: address already in use" on port 9090.

Check port usage:

[root@dmdca log]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:46667         0.0.0.0:*               LISTEN      7030/cockpit-bridge
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      966/rpcbind
tcp        0      0 0.0.0.0:20180           0.0.0.0:*               LISTEN      38741/bin/tikv-serv
tcp        0      0 0.0.0.0:20181           0.0.0.0:*               LISTEN      38743/bin/tikv-serv
tcp        0      0 0.0.0.0:20182           0.0.0.0:*               LISTEN      38744/bin/tikv-serv
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1475/sshd: /usr/sbi
tcp        0      0 127.0.0.1:34555         0.0.0.0:*               LISTEN      37997/bin/pd-server
tcp        0      0 127.0.0.1:40091         0.0.0.0:*               LISTEN      37997/bin/pd-server
tcp        0      0 0.0.0.0:20292           0.0.0.0:*               LISTEN      43577/bin/tiflash/t
tcp        0      0 127.0.0.1:39945         0.0.0.0:*               LISTEN      6966/cockpit-bridge
tcp        0      0 0.0.0.0:8234            0.0.0.0:*               LISTEN      43577/bin/tiflash/t
tcp6       0      0 :::2379                 :::*                    LISTEN      37997/bin/pd-server
tcp6       0      0 :::9100                 :::*                    LISTEN      46643/bin/node_expo
tcp6       0      0 :::2380                 :::*                    LISTEN      37997/bin/pd-server
tcp6       0      0 :::4236                 :::*                    LISTEN      1643/dmap
tcp6       0      0 :::111                  :::*                    LISTEN      966/rpcbind
tcp6       0      0 :::5236                 :::*                    LISTEN      1642/dmserver
tcp6       0      0 :::5237                 :::*                    LISTEN      1646/dmserver
tcp6       0      0 :::22                   :::*                    LISTEN      1475/sshd: /usr/sbi
tcp6       0      0 :::3000                 :::*                    LISTEN      45787/bin/bin/grafa
tcp6       0      0 172.16.60.94:3930       :::*                    LISTEN      43577/bin/tiflash/t
tcp6       0      0 :::9115                 :::*                    LISTEN      47105/bin/blackbox_
tcp6       0      0 :::10080                :::*                    LISTEN      42543/bin/tidb-serv
tcp6       0      0 :::4000                 :::*                    LISTEN      42543/bin/tidb-serv
tcp6       0      0 :::20160                :::*                    LISTEN      38741/bin/tikv-serv
tcp6       0      0 :::20161                :::*                    LISTEN      38743/bin/tikv-serv
tcp6       0      0 :::9090                 :::*                    LISTEN      1/systemd
tcp6       0      0 :::20162                :::*                    LISTEN      38744/bin/tikv-serv
tcp6       0      0 :::20170                :::*                    LISTEN      43577/bin/tiflash/t

Note that port 9090 is held by systemd (PID 1), i.e. a socket-activated unit, not by Prometheus.

[root@dmdca log]# ss -tlnp
State   Recv-Q  Send-Q  Local Address:Port           Peer Address:Port  Process
LISTEN  0       64      127.0.0.1:46667              0.0.0.0:*          users:(("cockpit-bridge",pid=7030,fd=12))
LISTEN  0       128     0.0.0.0:111                  0.0.0.0:*          users:(("rpcbind",pid=966,fd=7))
LISTEN  0       128     0.0.0.0:20180                0.0.0.0:*          users:(("tikv-server",pid=38741,fd=164))
LISTEN  0       128     0.0.0.0:20181                0.0.0.0:*          users:(("tikv-server",pid=38743,fd=161))
LISTEN  0       128     0.0.0.0:20182                0.0.0.0:*          users:(("tikv-server",pid=38744,fd=161))
LISTEN  0       128     0.0.0.0:22                   0.0.0.0:*          users:(("sshd",pid=1475,fd=5))
LISTEN  0       512     127.0.0.1:34555              0.0.0.0:*          users:(("pd-server",pid=37997,fd=41))
LISTEN  0       512     127.0.0.1:40091              0.0.0.0:*          users:(("pd-server",pid=37997,fd=40))
LISTEN  0       128     0.0.0.0:20292                0.0.0.0:*          users:(("TiFlashMain",pid=43577,fd=145))
LISTEN  0       64      127.0.0.1:39945              0.0.0.0:*          users:(("cockpit-bridge",pid=6966,fd=13))
LISTEN  0       200     0.0.0.0:8234                 0.0.0.0:*          users:(("TiFlashMain",pid=43577,fd=41))
LISTEN  0       512     *:2379                       *:*                users:(("pd-server",pid=37997,fd=9))
LISTEN  0       512     *:9100                       *:*                users:(("node_exporter",pid=46643,fd=3))
LISTEN  0       512     *:2380                       *:*                users:(("pd-server",pid=37997,fd=8))
LISTEN  0       128     *:4236                       *:*                users:(("dmap",pid=1643,fd=5))
LISTEN  0       128     [::]:111                     [::]:*             users:(("rpcbind",pid=966,fd=9))
LISTEN  0       128     *:5236                       *:*                users:(("dmserver",pid=1642,fd=5))
LISTEN  0       128     *:5237                       *:*                users:(("dmserver",pid=1646,fd=5))
LISTEN  0       128     [::]:22                      [::]:*             users:(("sshd",pid=1475,fd=6))
LISTEN  0       512     *:3000                       *:*                users:(("grafana-server",pid=45787,fd=8))
LISTEN  0       512     [::ffff:172.16.60.94]:3930   *:*                users:(("TiFlashMain",pid=43577,fd=37))
LISTEN  0       512     [::ffff:172.16.60.94]:3930   *:*                users:(("TiFlashMain",pid=43577,fd=38))
LISTEN  0       512     *:9115                       *:*                users:(("blackbox_export",pid=47105,fd=3))
LISTEN  0       512     *:10080                      *:*                users:(("tidb-server",pid=42543,fd=32))
LISTEN  0       512     *:4000                       *:*                users:(("tidb-server",pid=42543,fd=27))
LISTEN  0       512     *:20160                      *:*                users:(("tikv-server",pid=38741,fd=101))
LISTEN  0       512     *:20160                      *:*                users:(("tikv-server",pid=38741,fd=102))
LISTEN  0       512     *:20160                      *:*                users:(("tikv-server",pid=38741,fd=103))
LISTEN  0       512     *:20160                      *:*                users:(("tikv-server",pid=38741,fd=104))
LISTEN  0       512     *:20160                      *:*                users:(("tikv-server",pid=38741,fd=105))
LISTEN  0       512     *:20161                      *:*                users:(("tikv-server",pid=38743,fd=99))
LISTEN  0       512     *:20161                      *:*                users:(("tikv-server",pid=38743,fd=102))
LISTEN  0       512     *:20161                      *:*                users:(("tikv-server",pid=38743,fd=103))
LISTEN  0       512     *:20161                      *:*                users:(("tikv-server",pid=38743,fd=104))
LISTEN  0       512     *:20161                      *:*                users:(("tikv-server",pid=38743,fd=105))
LISTEN  0       128     *:9090                       *:*                users:(("cockpit-ws",pid=6886,fd=3),("systemd",pid=1,fd=160))
LISTEN  0       512     *:20162                      *:*                users:(("tikv-server",pid=38744,fd=99))
LISTEN  0       512     *:20162                      *:*                users:(("tikv-server",pid=38744,fd=100))
LISTEN  0       512     *:20162                      *:*                users:(("tikv-server",pid=38744,fd=101))
LISTEN  0       512     *:20162                      *:*                users:(("tikv-server",pid=38744,fd=102))
LISTEN  0       512     *:20162                      *:*                users:(("tikv-server",pid=38744,fd=103))
LISTEN  0       512     *:20170                      *:*                users:(("TiFlashMain",pid=43577,fd=92))
LISTEN  0       512     *:20170                      *:*                users:(("TiFlashMain",pid=43577,fd=93))
LISTEN  0       512     *:20170                      *:*                users:(("TiFlashMain",pid=43577,fd=94))
LISTEN  0       512     *:20170                      *:*                users:(("TiFlashMain",pid=43577,fd=95))
LISTEN  0       512     *:20170                      *:*                users:(("TiFlashMain",pid=43577,fd=96))
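The raw `ss` listing is long, so in practice it helps to boil it down to just the ports owned by the TiDB components. The helper below is a minimal sketch of my own (the name `tidb_ports` is illustrative, not part of any tool); the component names it matches are the ones visible in the output above.

```shell
# Filter an `ss -tlnp` listing (read from stdin) down to the listening
# ports owned by the TiDB components seen above.
tidb_ports() {
  grep -E 'tidb-server|tikv-server|pd-server|TiFlashMain' |
    awk '{print $4}' |       # the Local Address:Port column
    awk -F: '{print $NF}' |  # keep only the port number
    sort -nu                 # numeric order, duplicates removed
}

# Usage on the live host:
#   ss -tlnp | tidb_ports
```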

Stop the cockpit component that ships with the OS. This system is Kylin (银河麒麟) V10; stock CentOS 7.6 generally does not include it.

[root@dmdca log]# systemctl status cockpit
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor preset: disabled)
   Active: active (running) since Wed 2023-10-25 08:54:42 CST; 3h 10min ago
     Docs: man:cockpit-ws(8)
 Main PID: 6886 (cockpit-ws)
    Tasks: 9
   Memory: 7.8M
   CGroup: /system.slice/cockpit.service
           ├─6886 /usr/libexec/cockpit-ws
           └─6964 /usr/bin/ssh-agent

10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method
10月 25 12:05:05 dmdca cockpit-ws[6886]: received unsupported HTTP method

[root@dmdca log]# systemctl stop cockpit
Warning: Stopping cockpit.service, but it can still be activated by:
  cockpit.socket
[root@dmdca log]# systemctl status cockpit
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor preset: disabled)
   Active: active (running) since Wed 2023-10-25 12:05:17 CST; 3s ago
     Docs: man:cockpit-ws(8)
  Process: 57451 ExecStartPre=/usr/sbin/remotectl certificate --ensure --user=root --group=cockpit-ws --selinux-type=etc_t (code=exited, status=0/SUCCESS)
 Main PID: 57454 (cockpit-ws)
    Tasks: 3
   Memory: 2.6M
   CGroup: /system.slice/cockpit.service
           └─57454 /usr/libexec/cockpit-ws

10月 25 12:05:18 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:19 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:19 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:19 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:19 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:19 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:19 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:19 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:20 dmdca cockpit-ws[57454]: received unsupported HTTP method
10月 25 12:05:20 dmdca cockpit-ws[57454]: received unsupported HTTP method

[root@dmdca log]# systemctl stop cockpit
Warning: Stopping cockpit.service, but it can still be activated by:
  cockpit.socket
[root@dmdca log]# systemctl stop cockpit.socket
[root@dmdca log]# systemctl status cockpit
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor preset: disabled)
   Active: inactive (dead) since Wed 2023-10-25 12:05:36 CST; 3s ago
     Docs: man:cockpit-ws(8)
  Process: 57626 ExecStartPre=/usr/sbin/remotectl certificate --ensure --user=root --group=cockpit-ws --selinux-type=etc_t (code=exited, status=0/SUCCESS)
  Process: 57628 ExecStart=/usr/libexec/cockpit-ws (code=killed, signal=TERM)
 Main PID: 57628 (code=killed, signal=TERM)

10月 25 12:05:35 dmdca cockpit-ws[57628]: received unsupported HTTP method
10月 25 12:05:36 dmdca cockpit-ws[57628]: received unsupported HTTP method
10月 25 12:05:36 dmdca cockpit-ws[57628]: received unsupported HTTP method
10月 25 12:05:36 dmdca cockpit-ws[57628]: received unsupported HTTP method
10月 25 12:05:36 dmdca cockpit-ws[57628]: received unsupported HTTP method
10月 25 12:05:36 dmdca cockpit-ws[57628]: received unsupported HTTP method
10月 25 12:05:36 dmdca cockpit-ws[57628]: received unsupported HTTP method
10月 25 12:05:36 dmdca systemd[1]: Stopping Cockpit Web Service...
10月 25 12:05:36 dmdca systemd[1]: cockpit.service: Succeeded.
10月 25 12:05:36 dmdca systemd[1]: Stopped Cockpit Web Service.
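As the transcript shows, `systemctl stop cockpit` alone is not enough: cockpit.service is socket-activated, so the next incoming connection on cockpit.socket spawns it right back up. A sketch of the full shutdown (run as root):

```shell
# Stop both the socket and the service; stopping only the service lets
# cockpit.socket re-spawn it on the next connection.
systemctl stop cockpit.socket cockpit.service

# Optionally keep it from coming back after a reboot as well:
systemctl disable cockpit.socket
```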

[root@dmdca log]# tiup cluster display tidb-cluster
tiup is checking updates for component cluster ...
Starting component cluster: /data/components/cluster/v1.13.1/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v7.4.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.60.94:2379/dashboard
Grafana URL:        http://172.16.60.94:3000
ID                  Role        Host          Ports                            OS/Arch       Status   Data Dir                         Deploy Dir
172.16.60.94:3000   grafana     172.16.60.94  3000                             linux/x86_64  Up       -                                /data/tidb-deploy/grafana-3000
172.16.60.94:2379   pd          172.16.60.94  2379/2380                        linux/x86_64  Up|L|UI  /data/tidb-data/pd-2379          /data/tidb-deploy/pd-2379
172.16.60.94:9090   prometheus  172.16.60.94  9090/12020                       linux/x86_64  Up       /data/tidb-data/prometheus-9090  /data/tidb-deploy/prometheus-9090
172.16.60.94:4000   tidb        172.16.60.94  4000/10080                       linux/x86_64  Up       -                                /data/tidb-deploy/tidb-4000
172.16.60.94:9000   tiflash     172.16.60.94  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /data/tidb-data/tiflash-9000     /data/tidb-deploy/tiflash-9000
172.16.60.94:20160  tikv        172.16.60.94  20160/20180                      linux/x86_64  Up       /data/tidb-data/tikv-20160       /data/tidb-deploy/tikv-20160
172.16.60.94:20161  tikv        172.16.60.94  20161/20181                      linux/x86_64  Up       /data/tidb-data/tikv-20161       /data/tidb-deploy/tikv-20161
172.16.60.94:20162  tikv        172.16.60.94  20162/20182                      linux/x86_64  Up       /data/tidb-data/tikv-20162       /data/tidb-deploy/tikv-20162
Total nodes: 8

In the Status column, `L` marks the current PD leader and `UI` marks the PD node serving TiDB Dashboard.

At this point, the single-machine cluster deployment is complete.
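With the cluster up, a quick end-to-end check is to connect over the MySQL protocol on port 4000, the standard TiDB client port. This sketch assumes a `mysql` client is installed on the host and that the root password is still empty, as on a fresh deployment (change it before any real use):

```shell
# Connect to the TiDB server from the topology above and ask for its
# version string; a successful reply confirms the SQL layer is serving.
mysql -h 172.16.60.94 -P 4000 -u root -e 'SELECT tidb_version()\G'
```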

5. Summary

1. For a quick trial, running TiDB in playground mode is all you need.

2. Single-machine cluster deployment is also very convenient, thanks largely to the powerful TiUP tool.

3. The monitoring and operations platform is powerful and intuitive; the only gripe is that Grafana has no Chinese-language interface. Perhaps the TiDB team could offer a plugin for that.


Copyright notice: This article is an original post by a TiDB community user, licensed under CC BY-NC-SA 4.0. When republishing, please include a link to the original article and this notice.
