
Building a TiDB v5.4 Lab Environment with Vagrant + VirtualBox VMs

junyangma · published 2022-06-17

Environment Configuration

Hardware: Intel i7 (8 cores) + 16 GB RAM + 1 TB SSD

Software: Oracle VM VirtualBox 6.1.26 + Vagrant 2.2.16

ISO: CentOS-7.9-x86_64-DVD-2009

TiDB version: TiDB v5.4

Number of VMs: 5

Per-VM configuration: CPU: 1 core, memory: 2 GB, disk: 50 GB

VM node details:

Component     VM name       Hostname      IP address      Count
---------     -------       --------      ----------      -----
pd            tidb-pd       tidb-pd       192.168.56.160  1
alertmanager  tidb-pd       tidb-pd       192.168.56.160  1
prometheus    tidb-pd       tidb-pd       192.168.56.160  1
grafana       tidb-pd       tidb-pd       192.168.56.160  1
tidb-server   tidb-server   tidb-tidb     192.168.56.161  1
tikv1         tidb-tikv1    tidb-tikv1    192.168.56.162  1
tikv2         tidb-tikv2    tidb-tikv2    192.168.56.163  1
tiflash       tidb-tiflash  tidb-tiflash  192.168.56.164  1

Component Network Port Requirements

Component          Default port  Description
---------          ------------  -----------
TiDB               4000          Communication port for applications and DBA tools
TiDB               10080         TiDB status reporting port
TiKV               20160         TiKV communication port
TiKV               20180         TiKV status reporting port
PD                 2379          Communication port between TiDB and PD
PD                 2380          Inter-node communication port within the PD cluster
TiFlash            9000          TiFlash TCP service port
TiFlash            8123          TiFlash HTTP service port
TiFlash            3930          TiFlash RAFT and Coprocessor service port
TiFlash            20170         TiFlash Proxy service port
TiFlash            20292         Port for Prometheus to pull TiFlash Proxy metrics
TiFlash            8234          Port for Prometheus to pull TiFlash metrics
Pump               8250          Pump communication port
Drainer            8249          Drainer communication port
CDC                8300          CDC communication port
Prometheus         9090          Prometheus service communication port
Node_exporter      9100          System information reporting port on every TiDB cluster node
Blackbox_exporter  9115          Blackbox_exporter communication port, used for port monitoring across the TiDB cluster
Grafana            3000          Web monitoring service and client (browser) access port
Alertmanager       9093          Alert web service port
Alertmanager       9094          Alert communication port
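
The port table doubles as a checklist. Once the cluster has been deployed (later in this article), a quick reachability sketch over the key endpoints can confirm the services are listening; this uses bash's built-in /dev/tcp so no extra tools are required:

#!/bin/bash
# Spot-check the main service ports from the table above (a sketch).
for endpoint in 192.168.56.160:2379 192.168.56.160:3000 192.168.56.161:4000 \
                192.168.56.162:20160 192.168.56.163:20160 192.168.56.164:9000; do
    host=${endpoint%:*} port=${endpoint#*:}
    if timeout 1 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${endpoint} open"
    else
        echo "${endpoint} closed"
    fi
done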

Installing VirtualBox and Vagrant on Windows 10

Download links

Oracle VM VirtualBox download: https://www.virtualbox.org/wiki/Downloads

Vagrant download: https://www.vagrantup.com/downloads

Vagrant box search: https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=centos7

Installing Oracle VM VirtualBox

  • Download the installer

VirtualBox is open-source virtualization software, in the same category as VMware, used to create virtual machines on your current computer.

VirtualBox 6.1.34 download: https://download.virtualbox.org/virtualbox/6.1.34/VirtualBox-6.1.34a-150636-Win.exe

Oracle VM VirtualBox Extension Pack 6.1.34 download: https://download.virtualbox.org/virtualbox/6.1.34/Oracle_VM_VirtualBox_Extension_Pack-6.1.34.vbox-extpack

  • Install VirtualBox

  • Double-click the downloaded VirtualBox-6.1.34a-150636-Win.exe file to start the installation.

  • Click "Next".

  • Set the installation location, then click "Next".

  • Click "Next".

  • Click "Yes".

  • Click "Install".

  • Click "Finish".

The installation is simple; just click through the prompts to finish installing VirtualBox.

  • Install the VirtualBox Extension Pack

  • Double-click the downloaded "Oracle_VM_VirtualBox_Extension_Pack-6.1.34.vbox-extpack" file and follow the prompts to install it (or use the command line, as sketched below).
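
As an alternative to double-clicking, the extension pack can also be installed with VBoxManage; this is a sketch assuming the downloaded file sits in the current directory:

VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-6.1.34.vbox-extpack
VBoxManage list extpacks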

Adjusting the VirtualBox Configuration

  • Change the default VM storage path

Open the menu "File" --> "Preferences" and change the value of "Default Machine Folder:" to g:\ovm_machine.
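
The same default can also be set from the command line with VBoxManage (a sketch using the path chosen above):

VBoxManage setproperty machinefolder g:\ovm_machine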

  • Add a host-only network adapter

  • Open the menu "File" --> "Host Network Manager" and click "Create".
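
This step also has a VBoxManage equivalent; a sketch follows (the adapter name shown is the one VirtualBox typically assigns on Windows and may differ on your machine):

VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1 --netmask 255.255.255.0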

Installing Vagrant

Vagrant 2.2.19 for Windows download: https://releases.hashicorp.com/vagrant/2.2.19/vagrant_2.2.19_x86_64.msi

Double-click "vagrant_2.2.19_x86_64.msi" to start the installation.

Click "Next".

Check the license-acceptance box, then click "Next".

Set the installation path, then click "Next".

Click "Install".

Click "Finish" to complete the installation.

Setting the Path Environment Variable for Vagrant

Right-click "This PC", choose "Properties", then click "Advanced system settings".

In the dialog that opens, switch to the "Advanced" tab and click "Environment Variables".

Select the system variable "Path", click "Edit", and add "G:\HashiCorp\Vagrant\bin" as a new value.

Checking the installed Vagrant version

Open a cmd window and run vagrant -v.

Using Vagrant

Creating a VM with Vagrant

Finding a box image

Search for the box you need on the official site: https://app.vagrantup.com/boxes/search, e.g. search for a centos7 VM box.

Installing the box online

PS G:\HashiCorp\vagrant_vbox_data\centos7_test> pwd

Path
----
G:\HashiCorp\vagrant_vbox_data\centos7_test

--- Initialize the Vagrantfile
PS G:\HashiCorp\vagrant_vbox_data\centos7_test> vagrant init generic/centos7
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
PS G:\HashiCorp\vagrant_vbox_data\centos7_test> dir


    Directory: G:\HashiCorp\vagrant_vbox_data\centos7_test


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----        2022/06/04     15:16           3091 Vagrantfile


PS G:\HashiCorp\vagrant_vbox_data\centos7_test> vagrant up

Note: after Vagrant creates a VM, a vagrant user exists by default with password vagrant. The root password is also vagrant.
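
With those defaults, logging in and switching to root looks like this (a quick sketch):

vagrant ssh
# then, inside the guest, switch to root (password: vagrant)
su -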

Vagrant commands

Description                                                          Command
-----------                                                          -------
Start the VM in an initialized folder                                vagrant up
Check the VM's running state                                         vagrant status
SSH into the running VM                                              vagrant ssh
Suspend the running VM                                               vagrant suspend
Wake up the suspended VM                                             vagrant resume
Restart the VM                                                       vagrant reload
Shut down the VM                                                     vagrant halt
Delete the current VM                                                vagrant destroy
Package the environment from the terminal                            vagrant package
Restart after changing the Vagrantfile (equivalent to halt, then up) vagrant reload

Box management commands

Description                           Command
-----------                           -------
List local boxes                      vagrant box list
Add a box to the list                 vagrant box add name url
Remove a box from the list            vagrant box remove name
Print the ssh connection information  vagrant ssh-config
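
For example, a typical box workflow with the CentOS 7 box used throughout this article (a sketch):

vagrant box add generic/centos7 --provider virtualbox
vagrant box list
vagrant box remove generic/centos7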

Shell Files Used During the TiDB Installation

File locations

Where the Vagrantfile and the shell files are stored:

    Directory: G:\HashiCorp\vagrant-master\TiDB-5.4


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        2022/06/16     17:24                .vagrant
d-----        2022/06/16     17:12                shared_scripts
-a----        2022/06/16     17:29           1938 Vagrantfile

PS G:\HashiCorp\vagrant-master\TiDB-5.4> tree /F
Folder PATH listing for volume SSD
Volume serial number is E22C-4CB0
G:.
│ Vagrantfile
│
└─shared_scripts
        root_setup.sh
        setup.sh
        shell_init_os.sh
        tiup_deploy.sh

Notes:

  • The shared_scripts directory holds the system-configuration scripts run when the VMs are initialized.

    1. setup.sh: the shell file the Vagrantfile calls for system configuration; it simply runs root_setup.sh.
    2. root_setup.sh: sets the hostname and the sshd configuration, then calls the shell_init_os.sh script.
    3. shell_init_os.sh: configures the operating system before TiDB is installed.
    4. tiup_deploy.sh: installs the TiUP tooling.
  • The Vagrantfile is Vagrant's VM configuration file.

Contents of setup.sh

#!/bin/bash
sudo bash -c 'sh /vagrant_scripts/root_setup.sh'

Contents of root_setup.sh

#!/bin/bash
if [ -f /vagrant_config/install.env ]; then
	. /vagrant_config/install.env
fi

# Set the HTTP proxy
echo "******************************************************************************"
echo "set http proxy." `date`
echo "******************************************************************************"
if [ "$HTTP_PROXY" != "" ]; then
    echo "http_proxy=http://${HTTP_PROXY}" >> /etc/profile
    echo "https_proxy=http://${HTTP_PROXY}" >> /etc/profile
    echo "export http_proxy https_proxy" >> /etc/profile
    source /etc/profile
fi

# Install packages
yum install -y wget net-tools sshpass

# Set the PS1 prompt
export LS_COLORS='no=00:fi=00:di=01;33;40:ln=01;36;40:'
export PS1="\[\033[01;35m\][\[\033[00m\]\[\033[01;32m\]\u@\h\[\033[00m\] \[\033[01;34m\]\w\[\033[00m\]\[\033[01;35m\]]\[\033[00m\]\$ "
echo "alias l='ls -lrtha'" >>/root/.bashrc
#echo "alias vi=vim" >>/root/.bashrc
source /root/.bashrc

# Change the root password
if [ "$ROOT_PASSWORD" == "" ]; then
	ROOT_PASSWORD="rootpasswd"
fi

echo "******************************************************************************"
echo "Set root password and change ownership." `date`
echo "******************************************************************************"
echo -e "${ROOT_PASSWORD}\n${ROOT_PASSWORD}" | passwd

# Set the time zone
timedatectl set-timezone Asia/Shanghai

# Stop firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
sed -i "s?SELINUX=enforcing?SELINUX=disabled?" /etc/selinux/config
setenforce  0

# Configure sshd_config
echo "******************************************************************************"
echo "Set sshd service and disable firewalld service." `date`
echo "******************************************************************************"
sed -i "s?^#PermitRootLogin yes?PermitRootLogin yes?" /etc/ssh/sshd_config
sed -i "s?^#PasswordAuthentication yes?PasswordAuthentication yes?" /etc/ssh/sshd_config
sed -i "s?^PasswordAuthentication no?#PasswordAuthentication no?" /etc/ssh/sshd_config
sed -i '/StrictHostKeyChecking/s/^#//; /StrictHostKeyChecking/s/ask/no/' /etc/ssh/ssh_config
systemctl restart sshd.service

# Set the hostname
if [ "$PUBLIC_SUBNET" != "" ]; then
	IP_NET=`echo $PUBLIC_SUBNET |cut -d"." -f1,2,3`
	IPADDR=`ip addr |grep $IP_NET |awk -F"/" '{print $1}'|awk -F" " '{print $2}'`
	PRIF=`grep $IPADDR /vagrant_config/install.env |awk -F"_" '{print $1}'`
	if [ "$PRIF" != "" ]; then
		HOSTNAME=`grep $PRIF"_HOSTNAME" /vagrant_config/install.env |awk -F"=" '{print $2}'`
		hostnamectl set-hostname $HOSTNAME

		# Update /etc/hosts
		CNT=`grep $IPADDR /etc/hosts|wc -l `
		if [ "$CNT" == "0" ]; then
			echo "$IPADDR   $HOSTNAME">> /etc/hosts
		fi
	fi
fi

# Initialize the system configuration
if [ -f /vagrant_scripts/shell_init_os.sh ]; then
	sh /vagrant_scripts/shell_init_os.sh
fi


Contents of shell_init_os.sh

#!/bin/bash
# 1. Check and disable system swap
echo "vm.swappiness = 0">> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p

# 2. Check and disable the firewall on the target machines
# Stop firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
sed -i "s?SELINUX=enforcing?SELINUX=disabled?" /etc/selinux/config
setenforce  0

# 3. Check and install the NTP service
yum -y install numactl 
yum -y install ntp ntpdate 

# Configure NTP
systemctl status ntpd.service
systemctl start ntpd.service 
systemctl enable ntpd.service
ntpstat

# 4. Check and configure OS tuning parameters
# Disable THP and NUMA on the kernel command line
RESULT=`grep "GRUB_CMDLINE_LINUX" /etc/default/grub |grep "transparent_hugepage"`
if [ "$RESULT" == "" ]; then
    \cp /etc/default/grub /etc/default/grub.bak
    sed -i 's#quiet#quiet transparent_hugepage=never numa=off#g' /etc/default/grub
    grub2-mkconfig -o /boot/grub2/grub.cfg
    if [ -f /boot/efi/EFI/redhat/grub.cfg ]; then
        grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
    fi
fi 

# Disable transparent huge pages at runtime
if [ -d /sys/kernel/mm/transparent_hugepage ]; then
    thp_path=/sys/kernel/mm/transparent_hugepage
elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
    thp_path=/sys/kernel/mm/redhat_transparent_hugepage
fi
echo "echo 'never' > ${thp_path}/enabled" >>  /etc/rc.d/rc.local
echo "echo 'never' > ${thp_path}/defrag"  >>  /etc/rc.d/rc.local    
echo 'never' > ${thp_path}/enabled
echo 'never' > ${thp_path}/defrag   
chmod +x /etc/rc.d/rc.local

# Create the CPU power-policy configuration service.
 
# Start the irqbalance service
systemctl start irqbalance 
systemctl enable irqbalance

# Run the following commands to adjust sysctl parameters
echo "fs.file-max = 1000000">> /etc/sysctl.conf
echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf
echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
sysctl -p

# Run the following command to configure the user's limits.conf file
cat << EOF >>/etc/security/limits.conf
tidb           soft    nofile          1000000
tidb           hard    nofile          1000000
tidb           soft    stack          32768
tidb           hard    stack          32768
EOF

# Create the tidb user
if [ "$TIDB_PASSWORD" == "" ]; then
    TIDB_PASSWORD="tidbpasswd"
fi
TIDB_PWD=`echo "$TIDB_PASSWORD" |openssl passwd -stdin`
useradd tidb -p "$TIDB_PWD" -m

# Grant tidb passwordless sudo
echo "tidb ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

Contents of tiup_deploy.sh

#!/bin/bash

if [ -f /home/vagrant/Vagrantfile ]; then
	for siteip in  `cat /home/vagrant/Vagrantfile |grep ":eth1 =>" |awk -F"\"" '{print $2}'`; do     
		ping -c1 -W1 ${siteip} &> /dev/null
		if [ "$?" == "0" ]; then
			echo "$siteip is UP"
		else
			echo "$siteip is DOWN"
			exit 1
		fi
		
		if [ -f /root/.ssh/known_hosts ]; then
			sed -i "/${siteip}/d" /root/.ssh/known_hosts
		fi		
	done
fi

# Set up passwordless SSH
if [ "$ROOT_PASSWORD" == "" ]; then
	ROOT_PASSWORD="rootpasswd"
fi

rm -f ~/.ssh/id_rsa && ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa <<<y >/dev/null 2>&1
 
for ipaddr in `cat /home/vagrant/Vagrantfile |grep ":eth1 =>" |awk -F"\"" '{print $2}'`; do
    sshpass -p $ROOT_PASSWORD ssh-copy-id $ipaddr
done

# Download the TiUP installer
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh 

# Load the tiup environment variables
source ~/.bash_profile

# Install the TiUP cluster component
tiup cluster

# Update TiUP cluster to the latest version
tiup update --self && tiup update cluster 

# Show the TiUP cluster version
echo "view tiup cluster version"
tiup --binary cluster


# Generate the TiDB topology file
cat > ~/topology.yaml<<EOF
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115


pd_servers:
  - host: 192.168.56.160
 

tidb_servers:
  - host: 192.168.56.161

tikv_servers:
  - host: 192.168.56.162
  - host: 192.168.56.163
  
tiflash_servers:
  - host: 192.168.56.164

monitoring_servers:
  - host: 192.168.56.160

grafana_servers:
  - host: 192.168.56.160

alertmanager_servers:
  - host: 192.168.56.160
EOF

Creating the Vagrantfile

Create a new Vagrantfile. The boxes array configures each VM's IP address, hostname, memory, and CPU.

boxes = [
    {
        :name => "tidb-pd",
        :eth1 => "192.168.56.160",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22230
    },
    {
        :name => "tidb-server",
        :eth1 => "192.168.56.161",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22231
    },
    {
        :name => "tidb-tikv1",
        :eth1 => "192.168.56.162",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22232
    },
    {
        :name => "tidb-tikv2",
        :eth1 => "192.168.56.163",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22233
    },
    {
        :name => "tidb-tiflash",
        :eth1 => "192.168.56.164",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22234
    }
]
Vagrant.configure(2) do |config|
    config.vm.box = "generic/centos7"
    Encoding.default_external = 'UTF-8'
    config.vm.synced_folder ".", "/home/vagrant"
    #config.vm.synced_folder "./config", "/vagrant_config"
    config.vm.synced_folder "./shared_scripts", "/vagrant_scripts"
   
    
    boxes.each do |opts|
        config.vm.define opts[:name] do |config|
            config.vm.hostname = opts[:name]
            config.vm.network "private_network", ip: opts[:eth1]
            config.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: "true"
            config.vm.network "forwarded_port", guest: 22, host: opts[:sshport]
            #config.ssh.username = "root"
            #config.ssh.password = "root"
            #config.ssh.port=opts[:sshport]
            #config.ssh.insert_key = false
            #config.vm.synced_folder ".", "/vagrant", type: "rsync" 
            config.vm.provider "vmware_fusion" do |v|
                v.vmx["memsize"] = opts[:mem]
                v.vmx["numvcpus"] = opts[:cpu]
            end
            config.vm.provider "virtualbox" do |v|
                v.memory = opts[:mem];
                v.cpus = opts[:cpu];
                v.name = opts[:name];
                v.customize ['storageattach', :id, '--storagectl', "IDE Controller", '--port', '1', '--device', '0','--type', 'dvddrive', '--medium', 'G:\HashiCorp\repo_vbox\CentOS7\CentOS-7.9-x86_64-DVD-2009.iso']
            end
        end
    end
    
    
    config.vm.provision "shell", inline: <<-SHELL
        sh /vagrant_scripts/setup.sh
    SHELL
  
end
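
With this Vagrantfile in place, the five VMs can be brought up all at once or one at a time (a quick sketch):

vagrant up              # bring up all five VMs
vagrant up tidb-pd      # or bring up a single VM by name
vagrant status          # confirm all machines are running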

Creating the VMs with vagrant up

Run vagrant up in a PowerShell or cmd window to create the VMs. Below is the output recorded while creating one of them:

G:\HashiCorp\vagrant_vbox_data\TiDB-5.4>vagrant up
==> tidb-tiflash: Importing base box 'generic/centos7'...
==> tidb-tiflash: Matching MAC address for NAT networking...
==> tidb-tiflash: Checking if box 'generic/centos7' version '3.6.10' is up to date...
==> tidb-tiflash: Setting the name of the VM: tidb-tiflash
==> tidb-tiflash: Clearing any previously set network interfaces...
==> tidb-tiflash: Preparing network interfaces based on configuration...
    tidb-tiflash: Adapter 1: nat
    tidb-tiflash: Adapter 2: hostonly
==> tidb-tiflash: Forwarding ports...
    tidb-tiflash: 22 (guest) => 22234 (host) (adapter 1)
==> tidb-tiflash: Running 'pre-boot' VM customizations...
==> tidb-tiflash: Booting VM...
==> tidb-tiflash: Waiting for machine to boot. This may take a few minutes...
    tidb-tiflash: SSH address: 127.0.0.1:22234
    tidb-tiflash: SSH username: vagrant
    tidb-tiflash: SSH auth method: private key
    tidb-tiflash:
    tidb-tiflash: Vagrant insecure key detected. Vagrant will automatically replace
    tidb-tiflash: this with a newly generated keypair for better security.
    tidb-tiflash:
    tidb-tiflash: Inserting generated public key within guest...
    tidb-tiflash: Removing insecure key from the guest if it's present...
    tidb-tiflash: Key inserted! Disconnecting and reconnecting using new SSH key...
==> tidb-tiflash: Machine booted and ready!
==> tidb-tiflash: Checking for guest additions in VM...
    tidb-tiflash: The guest additions on this VM do not match the installed version of
    tidb-tiflash: VirtualBox! In most cases this is fine, but in rare cases it can
    tidb-tiflash: prevent things such as shared folders from working properly. If you see
    tidb-tiflash: shared folder errors, please make sure the guest additions within the
    tidb-tiflash: virtual machine match the version of VirtualBox you have installed on
    tidb-tiflash: your host and reload your VM.
    tidb-tiflash:
    tidb-tiflash: Guest Additions Version: 5.2.44
    tidb-tiflash: VirtualBox Version: 6.1
==> tidb-tiflash: Setting hostname...
==> tidb-tiflash: Configuring and enabling network interfaces...
==> tidb-tiflash: Mounting shared folders...
    tidb-tiflash: /home/vagrant => G:/HashiCorp/vagrant_vbox_data/TiDB-5.4
    tidb-tiflash: /vagrant_scripts => G:/HashiCorp/vagrant_vbox_data/TiDB-5.4/shared_scripts
==> tidb-tiflash: Running provisioner: shell...
    tidb-tiflash: Running: inline script
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: set http proxy. Thu Jun 16 09:48:05 UTC 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Determining fastest mirrors
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Package wget-1.14-18.el7_6.1.x86_64 already installed and latest version
    tidb-tiflash: Package net-tools-2.0-0.25.20131004git.el7.x86_64 already installed and latest version
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package sshpass.x86_64 0:1.06-2.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package           Arch             Version              Repository        Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  sshpass           x86_64           1.06-2.el7           extras            21 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  1 Package
    tidb-tiflash:
    tidb-tiflash: Total download size: 21 k
    tidb-tiflash: Installed size: 38 k
    tidb-tiflash: Downloading packages:
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : sshpass-1.06-2.el7.x86_64                                    1/1
    tidb-tiflash:   Verifying  : sshpass-1.06-2.el7.x86_64                                    1/1
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   sshpass.x86_64 0:1.06-2.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Set root password and change ownership. Thu Jun 16 09:49:49 UTC 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: New password: BAD PASSWORD: The password contains the user name in some form
    tidb-tiflash: Changing password for user root.
    tidb-tiflash: passwd: all authentication tokens updated successfully.
    tidb-tiflash: Retype new password: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    tidb-tiflash: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Set sshd service and disable firewalld service. Thu Jun 16 17:49:50 CST 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: net.ipv6.conf.all.disable_ipv6 = 1
    tidb-tiflash: vm.swappiness = 0
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Loading mirror speeds from cached hostfile
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package numactl.x86_64 0:2.0.12-5.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package           Arch             Version                Repository      Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  numactl           x86_64           2.0.12-5.el7           base            66 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  1 Package
    tidb-tiflash:
    tidb-tiflash: Total download size: 66 k
    tidb-tiflash: Installed size: 141 k
    tidb-tiflash: Downloading packages:
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : numactl-2.0.12-5.el7.x86_64                                  1/1
    tidb-tiflash:   Verifying  : numactl-2.0.12-5.el7.x86_64                                  1/1
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   numactl.x86_64 0:2.0.12-5.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Loading mirror speeds from cached hostfile
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
    tidb-tiflash: --> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-29.el7.centos.2.x86_64
    tidb-tiflash: ---> Package ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package              Arch        Version                       Repository
    tidb-tiflash:                                                                            Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  ntp                  x86_64      4.2.6p5-29.el7.centos.2       base      549 k
    tidb-tiflash:  ntpdate              x86_64      4.2.6p5-29.el7.centos.2       base       87 k
    tidb-tiflash: Installing for dependencies:
    tidb-tiflash:  autogen-libopts      x86_64      5.18-5.el7                    base       66 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  2 Packages (+1 Dependent package)
    tidb-tiflash:
    tidb-tiflash: Total download size: 701 k
    tidb-tiflash: Installed size: 1.6 M
    tidb-tiflash: Downloading packages:
    tidb-tiflash: --------------------------------------------------------------------------------
    tidb-tiflash: Total                                              309 kB/s | 701 kB  00:02
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : autogen-libopts-5.18-5.el7.x86_64                            1/3
    tidb-tiflash:   Installing : ntpdate-4.2.6p5-29.el7.centos.2.x86_64                       2/3
    tidb-tiflash:   Installing : ntp-4.2.6p5-29.el7.centos.2.x86_64                           3/3
    tidb-tiflash:   Verifying  : ntpdate-4.2.6p5-29.el7.centos.2.x86_64                       1/3
    tidb-tiflash:   Verifying  : ntp-4.2.6p5-29.el7.centos.2.x86_64                           2/3
    tidb-tiflash:   Verifying  : autogen-libopts-5.18-5.el7.x86_64                            3/3
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   ntp.x86_64 0:4.2.6p5-29.el7.centos.2 ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2
    tidb-tiflash:
    tidb-tiflash: Dependency Installed:
    tidb-tiflash:   autogen-libopts.x86_64 0:5.18-5.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: ● ntpd.service - Network Time Service
    tidb-tiflash:    Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
    tidb-tiflash:    Active: inactive (dead)
    tidb-tiflash: Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
    tidb-tiflash: unsynchronised
    tidb-tiflash:   time server re-starting
    tidb-tiflash:    polling server every 8 s
    tidb-tiflash: Generating grub configuration file ...
    tidb-tiflash: Found linux image: /boot/vmlinuz-3.10.0-1160.59.1.el7.x86_64
    tidb-tiflash: Found initrd image: /boot/initramfs-3.10.0-1160.59.1.el7.x86_64.img
    tidb-tiflash: Found linux image: /boot/vmlinuz-0-rescue-319af63f75e64c3395b38885010692bf
    tidb-tiflash: Found initrd image: /boot/initramfs-0-rescue-319af63f75e64c3395b38885010692bf.img
    tidb-tiflash: done
    tidb-tiflash: net.ipv6.conf.all.disable_ipv6 = 1
    tidb-tiflash: vm.swappiness = 0
    tidb-tiflash: fs.file-max = 1000000
    tidb-tiflash: net.core.somaxconn = 32768
    tidb-tiflash: net.ipv4.tcp_tw_recycle = 0
    tidb-tiflash: net.ipv4.tcp_syncookies = 0
    tidb-tiflash: vm.overcommit_memory = 1

Log in to the tidb-pd VM and Install TiUP

Log in as root and run the tiup_deploy.sh script to install the TiUP tooling:

[root@tidb-pd shared_scripts]$ sh tiup_deploy.sh
192.168.56.160 is UP
192.168.56.161 is UP
192.168.56.162 is UP
192.168.56.163 is UP
192.168.56.164 is UP
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.160'"
and check to make sure that only the key(s) you wanted were added.

/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.161'"
and check to make sure that only the key(s) you wanted were added.

/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.162'"
and check to make sure that only the key(s) you wanted were added.

/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.163'"
and check to make sure that only the key(s) you wanted were added.

/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.164'"
and check to make sure that only the key(s) you wanted were added.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6968k  100 6968k    0     0  1514k      0  0:00:04  0:00:04 --:--:-- 1514k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
tiup is checking updates for component cluster ...timeout!
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.10.2-linux-amd64.tar.gz 8.28 MiB / 8.28 MiB 100.00% 2.48 MiB/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
download https://tiup-mirrors.pingcap.com/tiup-v1.10.2-linux-amd64.tar.gz 6.81 MiB / 6.81 MiB 100.00% 3.53 MiB/s
Updated successfully!
component cluster version v1.10.2 is already installed
Updated successfully!
/root/.tiup/components/cluster/v1.10.2/tiup-cluster

Initializing the Cluster Topology File

After running the tiup_deploy.sh script, the cluster topology file /home/tidb/topology.yaml has been generated:

[tidb@tidb-pd ~]$ cat topology.yaml
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115


pd_servers:
  - host: 192.168.56.160


tidb_servers:
  - host: 192.168.56.161

tikv_servers:
  - host: 192.168.56.162
  - host: 192.168.56.163

tiflash_servers:
  - host: 192.168.56.164

monitoring_servers:
  - host: 192.168.56.160

grafana_servers:
  - host: 192.168.56.160

alertmanager_servers:
  - host: 192.168.56.160

Running the Deployment Commands

  • Check the cluster for potential risks:

    # tiup cluster check ./topology.yaml
    tiup is checking updates for component cluster ...
    Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster check ./topology.yaml
    + Detect CPU Arch Name
      - Detecting node 192.168.56.160 Arch info ... Done
      - Detecting node 192.168.56.162 Arch info ... Done
      - Detecting node 192.168.56.163 Arch info ... Done
      - Detecting node 192.168.56.161 Arch info ... Done
      - Detecting node 192.168.56.164 Arch info ... Done
    + Detect CPU OS Name
      - Detecting node 192.168.56.160 OS info ... Done
      - Detecting node 192.168.56.162 OS info ... Done
      - Detecting node 192.168.56.163 OS info ... Done
      - Detecting node 192.168.56.161 OS info ... Done
      - Detecting node 192.168.56.164 OS info ... Done
    + Download necessary tools
      - Downloading check tools for linux/amd64 ... Done
    + Collect basic system information
      - Getting system info of 192.168.56.164:22 ... Done
      - Getting system info of 192.168.56.160:22 ... Done
      - Getting system info of 192.168.56.162:22 ... Done
      - Getting system info of 192.168.56.163:22 ... Done
      - Getting system info of 192.168.56.161:22 ... Done
    + Check time zone
      - Checking node 192.168.56.164 ... Done
      - Checking node 192.168.56.160 ... Done
      - Checking node 192.168.56.162 ... Done
      - Checking node 192.168.56.163 ... Done
      - Checking node 192.168.56.161 ... Done
    + Check system requirements
      - Checking node 192.168.56.164 ... Done
      - Checking node 192.168.56.160 ... Done
      - Checking node 192.168.56.162 ... Done
      - Checking node 192.168.56.163 ... Done
      - Checking node 192.168.56.161 ... Done
      - Checking node 192.168.56.160 ... Done
      - Checking node 192.168.56.160 ... Done
      - Checking node 192.168.56.160 ... Done
    + Cleanup check files
      - Cleanup check files on 192.168.56.164:22 ... Done
      - Cleanup check files on 192.168.56.160:22 ... Done
      - Cleanup check files on 192.168.56.162:22 ... Done
      - Cleanup check files on 192.168.56.163:22 ... Done
      - Cleanup check files on 192.168.56.161:22 ... Done
      - Cleanup check files on 192.168.56.160:22 ... Done
      - Cleanup check files on 192.168.56.160:22 ... Done
      - Cleanup check files on 192.168.56.160:22 ... Done
    Node            Check         Result  Message
    ----            -----         ------  -------
    192.168.56.161  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
    192.168.56.161  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
    192.168.56.161  cpu-cores     Pass    number of CPU cores / threads: 1
    192.168.56.161  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
    192.168.56.161  swap          Warn    swap is enabled, please disable it for best performance
    192.168.56.161  network       Pass    network speed of eth0 is 1000MB
    192.168.56.161  network       Pass    network speed of eth1 is 1000MB
    192.168.56.161  thp           Pass    THP is disabled
    192.168.56.161  command       Pass    numactl: policy: default
    192.168.56.161  memory        Pass    memory size is 0MB
    192.168.56.161  selinux       Pass    SELinux is disabled
    192.168.56.161  service       Fail    service irqbalance is not running
    192.168.56.164  memory        Pass    memory size is 0MB
    192.168.56.164  network       Pass    network speed of eth0 is 1000MB
    192.168.56.164  network       Pass    network speed of eth1 is 1000MB
    192.168.56.164  disk          Warn    mount point / does not have 'noatime' option set
    192.168.56.164  service       Fail    service irqbalance is not running
    192.168.56.164  command       Pass    numactl: policy: default
    192.168.56.164  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
    192.168.56.164  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
    192.168.56.164  swap          Warn    swap is enabled, please disable it for best performance
    192.168.56.164  selinux       Pass    SELinux is disabled
    192.168.56.164  thp           Pass    THP is disabled
    192.168.56.164  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
    192.168.56.164  cpu-cores     Pass    number of CPU cores / threads: 1
    192.168.56.160  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
    192.168.56.160  memory        Pass    memory size is 0MB
    192.168.56.160  network       Pass    network speed of eth0 is 1000MB
    192.168.56.160  network       Pass    network speed of eth1 is 1000MB
    192.168.56.160  selinux       Pass    SELinux is disabled
    192.168.56.160  thp           Pass    THP is disabled
    192.168.56.160  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
    192.168.56.160  cpu-cores     Pass    number of CPU cores / threads: 1
    192.168.56.160  service       Fail    service irqbalance is not running
    192.168.56.160  command       Pass    numactl: policy: default
    192.168.56.160  swap          Warn    swap is enabled, please disable it for best performance
    192.168.56.160  disk          Warn    mount point / does not have 'noatime' option set
    192.168.56.162  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
    192.168.56.162  swap          Warn    swap is enabled, please disable it for best performance
    192.168.56.162  network       Pass    network speed of eth0 is 1000MB
    192.168.56.162  network       Pass    network speed of eth1 is 1000MB
    192.168.56.162  disk          Warn    mount point / does not have 'noatime' option set
    192.168.56.162  service       Fail    service irqbalance is not running
    192.168.56.162  command       Pass    numactl: policy: default
    192.168.56.162  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
    192.168.56.162  cpu-cores     Pass    number of CPU cores / threads: 1
    192.168.56.162  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
    192.168.56.162  memory        Pass    memory size is 0MB
    192.168.56.162  selinux       Pass    SELinux is disabled
    192.168.56.162  thp           Pass    THP is disabled
    192.168.56.163  selinux       Pass    SELinux is disabled
    192.168.56.163  thp           Pass    THP is disabled
    192.168.56.163  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
    192.168.56.163  disk          Warn    mount point / does not have 'noatime' option set
    192.168.56.163  cpu-cores     Pass    number of CPU cores / threads: 1
    192.168.56.163  swap          Warn    swap is enabled, please disable it for best performance
    192.168.56.163  memory        Pass    memory size is 0MB
    192.168.56.163  network       Pass    network speed of eth0 is 1000MB
    192.168.56.163  network       Pass    network speed of eth1 is 1000MB
    192.168.56.163  service       Fail    service irqbalance is not running
    192.168.56.163  command       Pass    numactl: policy: default
    192.168.56.163  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
    192.168.56.163  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
    
  • Automatically fix the potential risks found in the cluster:

    # tiup cluster check ~/topology.yaml --apply --user root
    
  • Deploy the TiDB cluster:

    # tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root
    

    In the deployment example above:

    • tidb-test is the name of the deployed cluster.
    • v5.4.1 is the version of the cluster being deployed; run tiup list tidb to see the latest versions TiUP supports.
    • The initialization configuration file is topology.yaml.
    • --user root means the deployment logs in to the target hosts as the root user to complete the cluster deployment; that user needs SSH access to the target machines and sudo privileges on them. Any other user with SSH and sudo privileges can be used instead.
    • [-i] and [-p] are optional. If passwordless login to the target machines is already configured, neither is needed; otherwise pick one of the two. [-i] gives the private key of the root user (or the user specified by --user) that can log in to the target machines; [-p] prompts interactively for that user's password (see the sketch after this list).
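
    For example, a sketch of the same deploy command with each option (the key path is illustrative):

    # deploy authenticating with a private key file
    tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -i /root/.ssh/id_rsa
    # or deploy and be prompted interactively for the password
    tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -p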

    The log is expected to end with the keywords Deployed cluster `tidb-test` successfully, indicating a successful deployment.

    View the clusters managed by TiUP

    # tiup cluster list
    

    TiUP can manage multiple TiDB clusters. This command lists all clusters currently managed through TiUP cluster, including each cluster's name, deploy user, version, and key information.

    Check the deployed TiDB cluster

    # tiup cluster display tidb-test
    

    Start the cluster

    Secure start is a startup mode introduced in TiUP cluster v1.9.0; starting the database this way improves its security, and it is the recommended mode.

    After a secure start, TiUP automatically generates a password for the TiDB root user and returns it on the command line.

    Note:

    • After starting with the secure method, you can no longer log in to the database as a passwordless root user; record the password returned on the command line for subsequent operations.
    • The auto-generated password is returned only once. If you did not record it or have forgotten it, change it by following the "forgot root password" procedure.

    Option 1: secure start

    # tiup cluster start tidb-test --init
    tiup is checking updates for component cluster ...
    Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster start tidb-test --init
    Starting cluster tidb-test...
    + [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
    + [Parallel] - UserSSH: user=tidb, host=192.168.56.164
    + [Parallel] - UserSSH: user=tidb, host=192.168.56.160
    + [Parallel] - UserSSH: user=tidb, host=192.168.56.160
    + [Parallel] - UserSSH: user=tidb, host=192.168.56.160
    + [Parallel] - UserSSH: user=tidb, host=192.168.56.160
    + [Parallel] - UserSSH: user=tidb, host=192.168.56.162
    + [Parallel] - UserSSH: user=tidb, host=192.168.56.163
    + [Parallel] - UserSSH: user=tidb, host=192.168.56.161
    + [ Serial ] - StartCluster
    Starting component pd
            Starting instance 192.168.56.160:2379
            Start instance 192.168.56.160:2379 success
    Starting component tikv
            Starting instance 192.168.56.163:20160
            Starting instance 192.168.56.162:20160
            Start instance 192.168.56.163:20160 success
            Start instance 192.168.56.162:20160 success
    Starting component tidb
            Starting instance 192.168.56.161:4000
            Start instance 192.168.56.161:4000 success
    Starting component tiflash
            Starting instance 192.168.56.164:9000
            Start instance 192.168.56.164:9000 success
    Starting component prometheus
            Starting instance 192.168.56.160:9090
            Start instance 192.168.56.160:9090 success
    Starting component grafana
            Starting instance 192.168.56.160:3000
            Start instance 192.168.56.160:3000 success
    Starting component alertmanager
            Starting instance 192.168.56.160:9093
            Start instance 192.168.56.160:9093 success
    Starting component node_exporter
            Starting instance 192.168.56.163
            Starting instance 192.168.56.161
            Starting instance 192.168.56.164
            Starting instance 192.168.56.160
            Starting instance 192.168.56.162
            Start 192.168.56.161 success
            Start 192.168.56.162 success
            Start 192.168.56.163 success
            Start 192.168.56.160 success
            Start 192.168.56.164 success
    Starting component blackbox_exporter
            Starting instance 192.168.56.163
            Starting instance 192.168.56.161
            Starting instance 192.168.56.164
            Starting instance 192.168.56.160
            Starting instance 192.168.56.162
            Start 192.168.56.163 success
            Start 192.168.56.162 success
            Start 192.168.56.161 success
            Start 192.168.56.164 success
            Start 192.168.56.160 success
    + [ Serial ] - UpdateTopology: cluster=tidb-test
    Started cluster `tidb-test` successfully
    The root password of TiDB database has been changed.
    The new password is: '45s6W&_w9!1KcB^aH8'.
    Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
    The generated password can NOT be get and shown again.
    

    Output like the following indicates a successful start:

    Started cluster `tidb-test` successfully.
    The root password of TiDB database has been changed.
    The new password is: 'y_+3Hwp=*AWz8971s6'.
    Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
    The generated password can NOT be got again in future.
    

    Option 2: normal start

    # tiup cluster start tidb-test
    

    The expected output Started cluster `tidb-test` successfully indicates a successful start. After a normal start, you can log in to the database as the passwordless root user.

    Verify the cluster's running state

    # tiup cluster display tidb-test
    tiup is checking updates for component cluster ...
    Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster display tidb-test
    Cluster type:       tidb
    Cluster name:       tidb-test
    Cluster version:    v5.4.1
    Deploy user:        tidb
    SSH type:           builtin
    Dashboard URL:      http://192.168.56.160:2379/dashboard
    Grafana URL:        http://192.168.56.160:3000
    ID                    Role          Host            Ports                            OS/Arch       Status   Data Dir                      Deploy Dir
    --                    ----          ----            -----                            -------       ------   --------                      ----------
    192.168.56.160:9093   alertmanager  192.168.56.160  9093/9094                        linux/x86_64  Up       /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
    192.168.56.160:3000   grafana       192.168.56.160  3000                             linux/x86_64  Up       -                             /tidb-deploy/grafana-3000
    192.168.56.160:2379   pd            192.168.56.160  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379            /tidb-deploy/pd-2379
    192.168.56.160:9090   prometheus    192.168.56.160  9090/12020                       linux/x86_64  Up       /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
    192.168.56.161:4000   tidb          192.168.56.161  4000/10080                       linux/x86_64  Up       -                             /tidb-deploy/tidb-4000
    192.168.56.164:9000   tiflash       192.168.56.164  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000       /tidb-deploy/tiflash-9000
    192.168.56.162:20160  tikv          192.168.56.162  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
    192.168.56.163:20160  tikv          192.168.56.163  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
    Total nodes: 8
    
    

Expected result: every node showing Up in the Status column means the cluster is healthy.
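
As a final smoke test, you can connect to the cluster with any MySQL client through the tidb-server node on port 4000 (a sketch; after a secure start, use the root password TiUP printed):

mysql -h 192.168.56.161 -P 4000 -u root -p
# inside the session, confirm the server version:
# mysql> SELECT tidb_version();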

Reference: https://docs.pingcap.com/zh/tidb/stable/check-before-deployment

Copyright notice: this is an original article by a TiDB community user, licensed under CC BY-NC-SA 4.0. When republishing, please include a link to the original article and this notice.
