
Steps for Modifying TiDB Configuration Parameters and System Variables

dba_gc · Posted on 2021-12-08

I. Modifying TiDB Configuration Parameters

Note 1: tidb-test is the cluster name.
Note 2: Back up the .tiup directory before and after changing parameters (a backup sketch follows this list).
Note 3: Modify configuration parameters with tiup cluster edit-config.
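
A minimal backup sketch (assuming the default ~/.tiup location; the archive name is arbitrary):

tar -czf tiup-backup-$(date +%F).tar.gz -C ~ .tiup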

1. Edit the configuration parameters

tiup cluster edit-config tidb-test
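
Saving your edits only updates the cluster metadata; to push the new configuration out to the nodes and rolling-restart the affected components, run reload (standard tiup cluster usage; -R limits the restart to a single role):

tiup cluster reload tidb-test
tiup cluster reload tidb-test -R tidb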

2. Metadata storage location

cat ~/.tiup/storage/cluster/clusters/tidb-test/meta.yaml
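
Note that meta.yaml is TiUP's internal copy of the topology. It is useful for inspection (and for the backups mentioned above), but changes should go through tiup cluster edit-config rather than direct edits to this file.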

3. Configuration file structure

Configuration items are divided by scope into global and local:

  • global: applies to all nodes (lower priority)
  • local: applies only to the current node (higher priority)

If the same configuration item exists both in the global configuration and on an individual instance, the local value takes precedence.
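
A minimal sketch of this precedence rule (the host address is taken from the template in part III; log-level is set cluster-wide in server_configs but overridden on one TiKV node through its instance-level config block):

  server_configs:
    tikv:
      log-level: info
  tikv_servers:
    - host: 172.16.188.140
      config:
        log-level: debug    # the local value wins on this node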

Global configuration:

global: cluster-wide configuration, which can also be refined per component
monitored: monitoring service configuration, i.e. blackbox_exporter and node_exporter
server_configs: global configuration for the individual components, settable per component

Local configuration:

pd_servers: PD instance configuration; specifies which machines the PD component is deployed to
tidb_servers: TiDB instance configuration; specifies which machines the TiDB component is deployed to
tikv_servers: TiKV instance configuration; specifies which machines the TiKV component is deployed to
pump_servers: Pump instance configuration; specifies which machines the Pump component is deployed to
drainer_servers: Drainer instance configuration; specifies which machines the Drainer component is deployed to
cdc_servers: TiCDC instance configuration; specifies which machines the TiCDC component is deployed to
monitoring_servers: specifies which machines Prometheus and NGMonitoring are deployed to; TiUP supports deploying multiple Prometheus instances, but only the first one is actually used
grafana_servers: Grafana instance configuration; specifies which machine Grafana is deployed to
alertmanager_servers: Alertmanager instance configuration; specifies which machines Alertmanager is deployed to

4. Resource limits

If multiple instances are co-deployed on one machine, or a component shares a host with other services, you can cap the component's maximum resource usage. This is configured on the individual host:

  - host: 172.16.188.123
    resource_control:
      memory_limit: 100G
      cpu_quota: 1000%
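
TiUP translates these limits into the generated systemd unit files (as MemoryLimit and CPUQuota), so cpu_quota: 1000% caps the component at roughly 10 CPU cores.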

5. Component startup order

Startup order:
PD -> TiKV -> Pump -> TiDB -> TiFlash -> Drainer
The upgrade order is the same; the shutdown order is the reverse.

If the cluster includes a TiCDC component, the startup order becomes:
PD -> TiKV -> Pump -> TiDB -> TiCDC

If a cluster contains all three of TiCDC, TiFlash, and Drainer, what is the order then? (See the check below.)
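
One empirical way to find out: start the cluster and watch the order in which TiUP logs each component as it comes up:

tiup cluster start tidb-test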

II. Modifying System Variables

This part is straightforward, since the syntax is MySQL-compatible.
Changes take effect for new sessions; no rolling restart of any component is needed.

1. View variables

SHOW VARIABLES LIKE 'tidb%';

2. Modify variables

SET GLOBAL param=value;
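
A concrete sketch, using tidb_retry_limit purely as an arbitrary example variable: set it globally, then open a new connection and confirm the new value is visible there.

SET GLOBAL tidb_retry_limit = 20;
-- then, in a NEW session:
SHOW VARIABLES LIKE 'tidb_retry_limit';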

After the change, the value is persisted in the mysql.GLOBAL_VARIABLES system table, which is stored in TiKV, so there is no need to worry about the modification being lost after a restart.
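
To inspect the persisted value directly (a sketch; mysql.GLOBAL_VARIABLES is the system table TiDB uses to store global variables):

SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM mysql.GLOBAL_VARIABLES
WHERE VARIABLE_NAME = 'tidb_retry_limit';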

III. Configuration File Template

user: tidb
tidb_version: v4.0.15
last_ops_ver: |-
  v1.2.3 tiup
  Go Version: go1.13
  Git Branch: release-1.2
  GitHash: df7e28a
topology:
  global:
    user: tidb
    ssh_port: 22
    ssh_type: builtin
    deploy_dir: /data/tdb
    data_dir: /data/tdb/data
    os: linux
    arch: amd64
  monitored:
    node_exporter_port: 9100
    blackbox_exporter_port: 9115
    deploy_dir: /data/tdb/monitor-9100
    data_dir: /data/tdb/data/monitor-9100
    log_dir: /data/tdb/monitor-9100/log
  server_configs:
    tidb:
      alter-primary-key: true
      binlog.enable: true
      binlog.ignore-error: false
      binlog.write-timeout: 15s
      compatible-kill-query: false
      enable-streaming: true
      host: 0.0.0.0
      lease: 45s
      log.enable-timestamp: true
      log.expensive-threshold: 10000
      log.file.max-days: 8
      log.format: text
      log.level: info
      log.query-log-max-len: 4096
      log.slow-threshold: 500
      lower-case-table-names: 2
      oom-action: log
      performance.committer-concurrency: 256
      performance.stmt-count-limit: 500000
      performance.tcp-keep-alive: true
      performance.txn-total-size-limit: 104857600
      prepared-plan-cache.enabled: true
      proxy-protocol.networks: 172.16.188.123
      run-ddl: true
      split-table: true
      store: tikv
      tikv-client.grpc-connection-count: 32
      tikv-client.max-batch-size: 256
      token-limit: 1500
    tikv:
      coprocessor.region-max-size: 384MB
      coprocessor.region-split-size: 256MB
      server.grpc-concurrency: 8
      log-level: info
      raftdb.max-background-jobs: 8
      raftstore.apply-max-batch-size: 16384
      raftstore.apply-pool-size: 8
      raftstore.hibernate-regions: true
      raftstore.raft-max-inflight-msgs: 20480
      raftstore.raft-max-size-per-msg: 2MB
      raftstore.region-split-check-diff: 32MB
      raftstore.store-max-batch-size: 16384
      raftstore.store-pool-size: 8
      raftstore.sync-log: false
      readpool.coprocessor.max-tasks-per-worker-normal: 8000
      readpool.unified.max-thread-count: 32
      rocksdb.bytes-per-sync: 512MB
      rocksdb.compaction-readahead-size: 2MB
      rocksdb.defaultcf.level0-slowdown-writes-trigger: 32
      rocksdb.defaultcf.level0-stop-writes-trigger: 64
      rocksdb.defaultcf.max-write-buffer-number: 24
      rocksdb.defaultcf.write-buffer-size: 256MB
      rocksdb.lockcf.level0-slowdown-writes-trigger: 32
      rocksdb.lockcf.level0-stop-writes-trigger: 64
      rocksdb.max-background-flushes: 4
      rocksdb.max-background-jobs: 8
      rocksdb.max-sub-compactions: 4
      rocksdb.use-direct-io-for-flush-and-compaction: true
      rocksdb.wal-bytes-per-sync: 256MB
      rocksdb.writecf.level0-slowdown-writes-trigger: 32
      rocksdb.writecf.level0-stop-writes-trigger: 64
      rocksdb.writecf.max-write-buffer-number: 24
      rocksdb.writecf.write-buffer-size: 256MB
      storage.block-cache.capacity: 8GB
      storage.scheduler-concurrency: 4096000
      storage.scheduler-worker-pool-size: 8
    pd:
      auto-compaction-mode: periodic
      auto-compaction-retention: 10m
      quota-backend-bytes: 17179869184
    tiflash: {}
    tiflash-learner: {}
    pump: {}
    drainer: {}
    cdc: {}
  tidb_servers:
  - host: 172.16.188.123
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data/tdb/tidb-4000
    arch: amd64
    os: linux
  - host: 172.16.188.143
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data/tdb/tidb-4000
    arch: amd64
    os: linux
  tikv_servers:
  - host: 172.16.188.140
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data/tdb/tikv-20160
    data_dir: /data/tdb/data/tikv-20160
    arch: amd64
    os: linux
  - host: 172.16.188.143
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data/tdb/tikv-20160
    data_dir: /data/tdb/data/tikv-20160
    arch: amd64
    os: linux
  - host: 172.16.188.113
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data/tdb/tikv-20160
    data_dir: /data/tdb/data/tikv-20160
    arch: amd64
    os: linux
  tiflash_servers: []
  pd_servers:
  - host: 172.16.188.120
    ssh_port: 22
    name: pd-172.16.188.120-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data/tdb/pd-2379
    data_dir: /data/tdb/data/pd-2379
    arch: amd64
    os: linux
  - host: 172.16.188.118
    ssh_port: 22
    name: pd-172.16.188.118-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data/tdb/pd-2379
    data_dir: /data/tdb/data/pd-2379
    arch: amd64
    os: linux
  - host: 172.16.188.125
    ssh_port: 22
    name: pd-172.16.188.125-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data/tdb/pd-2379
    data_dir: /data/tdb/data/pd-2379
    arch: amd64
    os: linux
  pump_servers:
  - host: 172.16.188.143
    ssh_port: 22
    port: 8250
    deploy_dir: /data/pump
    data_dir: /data/pump/data
    log_dir: /data/pump/log
    config:
      gc: 5
    arch: amd64
    os: linux
  cdc_servers:
  - host: 172.16.188.120
    ssh_port: 22
    port: 8300
    deploy_dir: /data/tdb/cdc-8300
    arch: amd64
    os: linux
  monitoring_servers:
  - host: 172.16.188.113
    ssh_port: 22
    port: 9090
    deploy_dir: /data/tdb/prometheus-9090
    data_dir: /data/tdb/data/prometheus-9090
    arch: amd64
    os: linux
  grafana_servers:
  - host: 172.16.188.123
    ssh_port: 22
    port: 3000
    deploy_dir: /data/tdb/grafana-3000
    arch: amd64
    os: linux
  alertmanager_servers:
  - host: 172.16.188.123
    ssh_port: 22
    web_port: 9093
    cluster_port: 9094
    deploy_dir: /data/tdb/alertmanager-9093
    data_dir: /data/tdb/data/alertmanager-9093
    arch: amd64
    os: linux
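
After applying any changes with tiup cluster reload, a quick sanity check is to list every node with its role, status, and deploy directory:

tiup cluster display tidb-test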

References

https://github.com/pingcap/docs-cn/tree/master/config-templates
https://docs.pingcap.com/zh/tidb/stable/tiup-cluster-topology-reference

Copyright notice: This article is an original work by a TiDB community user and is licensed under CC BY-NC-SA 4.0. When republishing, please include a link to the original article and this notice.
