Pages

Saturday, 5 December 2015

TiDB: A Distributed SQL Database Built with Go

(TiDB is an open-source, cloud-native, distributed, MySQL-Compatible database for elastic scale and real-time analytics. Try free: https://tidbcloud.com/signup
https://pingcap.com/ )
 

TiDB is a distributed SQL database. Inspired by the design of Google F1, TiDB supports the best features of both traditional RDBMS and NoSQL.
  • Horizontal scalability
    Grow TiDB as your business grows. You can increase the capacity simply by adding more machines.
  • Asynchronous schema changes
    Evolve TiDB schemas as your requirements evolve. You can add new columns and indices without stopping or affecting ongoing operations.
  • Consistent distributed transactions
    Think of TiDB as a single-machine RDBMS. You can start a transaction that spans multiple machines without worrying about consistency. TiDB makes your application code simple and robust.
  • Compatible with MySQL protocol
    Use TiDB as MySQL. You can replace MySQL with TiDB to power your application without changing a single line of code in most cases.
  • Written in Go
    Enjoy TiDB as much as we love Go. We believe Go code is both easy and enjoyable to work with. Go lets us improve TiDB quickly and makes it easy to dive into the codebase.
  • NewSQL over HBase
    Turn HBase into a NewSQL database.
  • Multiple storage engine support
    Power TiDB with your favorite engines. TiDB supports many popular storage engines in single-machine mode. You can choose from goleveldb, LevelDB, RocksDB, LMDB, BoltDB, with more to come.

Status

TiDB is in its early stages and under heavy development. Although all of the features mentioned above are implemented, please do not use it in production.

Roadmap

Read the Roadmap.

Quick start

Read the Quick Start

Architecture

(Architecture diagram: see the TiDB repository README.)

FROM https://github.com/pingcap/tidb

--------

TiDB/TiKV/PD documentation in Chinese.  

https://docs.pingcap.com/zh

TiDB Documentation

Welcome to the TiDB documentation repository!

This repository hosts the source files of the Chinese TiDB documentation on the PingCAP website. The source files of the English documentation live in pingcap/docs.

If you find or run into any TiDB documentation issue, feel free to file an issue, or submit a pull request with a fix directly.

How the TiDB documentation is maintained and versioned

Currently, the TiDB documentation is maintained in the following branches, each corresponding to a different documentation version on the official website:

Repository branch    TiDB documentation version
master               dev (latest development version)
release-6.0          6.0 (Development Milestone Release)
release-5.4          5.4 (stable)
release-5.3          5.3 (stable)
release-5.2          5.2 (stable)
release-5.1          5.1 (stable)
release-5.0          5.0 (stable)
release-4.0          4.0 (stable)
release-3.1          3.1 (stable)
release-3.0          3.0 (stable)
release-2.1          2.1 (stable)

from https://github.com/pingcap/docs-cn

------

tidb-tools is a collection of useful tools for TiDB.

    tidb-tools

    tidb-tools is a collection of useful tools for TiDB.

    How to build

    make build # build all tools
    
    make importer # build importer
    
    make sync_diff_inspector # build sync_diff_inspector
    
    make ddl_checker  # build ddl_checker
    

    When tidb-tools are built successfully, you can find the binary in the bin directory.

    Tool list

  • importer

    A tool for generating and inserting data into any database that is compatible with the MySQL protocol, like MySQL and TiDB.

  • sync_diff_inspector

    A tool for comparing two databases' data and outputting a brief report about the differences.

  • ddl_checker

    A tool for checking whether DDL SQL can be successfully executed by TiDB.

    from https://github.com/pingcap/tidb-tools

    ----------------------------------------------------- 

    What is TiProxy?

    TiProxy is a database proxy that is based on TiDB. It keeps client connections alive while the TiDB server upgrades, restarts, scales in, and scales out.

    TiProxy is forked from Weir.

    Features

    Connection Management

    When a TiDB instance restarts or shuts down, TiProxy migrates backend connections on this instance to other instances. In this way, the clients won't be disconnected.

    For more details, please refer to the blogs Achieving Zero-Downtime Upgrades with TiDB and Maintaining Database Connectivity in Serverless Infrastructure with TiProxy.

    Load Balance

    TiProxy routes new connections to backends based on their scores to keep the load balanced. The score is derived mainly from the number of connections on each backend.

    Besides, when the clients create or close connections, TiProxy also migrates backend connections to keep the backends balanced.

    Service Discovery

    When a new TiDB instance starts, TiProxy detects it and migrates backend connections to it.

    TiProxy also performs health checks on TiDB instances, and migrates backend connections to other instances if any instance goes down.

    Architecture

    For more details, see Design Doc.

    Future Plans

    TiProxy's role as a versatile database proxy is continuously evolving to meet the diverse needs of self-hosting users. Here are some of the key expectations that TiProxy is poised to fulfill:

    Tenant Isolation

    In a multi-tenant database environment that supports database consolidation, TiProxy offers the ability to route connections based on usernames or client addresses. This ensures the effective isolation of TiDB resources, safeguarding data and performance for different tenants.

    Traffic Management

    Sudden traffic spikes can catch any system off guard. TiProxy steps in with features like rate limiting and query refusal in extreme cases, enabling you to better manage and control incoming traffic to TiDB.

    Post-Upgrade Validation

    Ensuring the smooth operation of TiDB after an upgrade is crucial. TiProxy can play a vital role in this process by replicating traffic and replaying it on a new TiDB cluster. This comprehensive testing helps verify that the upgraded system works as expected.

    Build

    Build the binary locally:

    $ make

    Build a docker image:

    $ make docker

    Deployment

    Deploy with TiUP

    Refer to https://docs.pingcap.com/tidb/dev/tiproxy-overview#installation-and-usage.

    Deploy with TiDB-Operator

    Refer to https://docs.pingcap.com/tidb-in-kubernetes/stable/deploy-tiproxy.

    Deploy locally

  1. Generate a self-signed certificate, which is used for the token-based authentication between TiDB and TiProxy.

For example, if you use openssl:

openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout key.pem -out cert.pem -subj "/CN=example.com"

Copy the certificate and key to all the TiDB servers. Make sure all the TiDB instances use the same certificate.

  2. Update the config.toml of the TiDB instances:
security.auto-tls=true
security.session-token-signing-cert={path/to/cert.pem}
security.session-token-signing-key={path/to/key.pem}
graceful-wait-before-shutdown=10

Where the session-token-signing-cert and session-token-signing-key are the paths to the certificates generated in the first step.

And then start the TiDB cluster with the config.toml.

  3. Update the proxy.toml of TiProxy:
[proxy]
    pd-addrs = "127.0.0.1:2379"

Where the pd-addrs contains the addresses of all PD instances.

And then start TiProxy:

bin/tiproxy --config=conf/proxy.toml

  4. Connect to TiProxy with your client. The default port is 6000:
mysql -h127.0.0.1 -uroot -P6000
from https://github.com/pingcap/tiproxy
--------------------------------------------- 

TiDB Data Migration (DM):

https://github.com/pingcap/dm

https://github.com/pingcap/docs-dm