SphereEx-Boot

Product Instructions

What is SphereEx-Boot?

SphereEx-Boot is a Python-based command line tool that simplifies the management of ShardingSphere-Proxy clusters. Its main functions are to install, uninstall, start, stop, and view the running status of ShardingSphere-Proxy, among other operations.

Keywords

  • Manager node: the physical machine on which the SphereEx-Boot tool is installed.
  • Worker node: the physical machine on which ShardingSphere-Proxy and ZooKeeper are installed.

Advantages

Quick & easy implementation: Get started with ShardingSphere-Proxy quickly. With the SphereEx-Boot tool, you can run any ShardingSphere-Proxy cluster component with a single command.

Simple operation and maintenance: SphereEx-Boot can quickly install, deploy, and manage ShardingSphere-Proxy clusters, reducing operation and maintenance costs.

Easy to expand: The standardized horizontal scaling function lets you dynamically expand the cluster at any time by adding data servers.

Architecture Overview

[Figure boot-architecture.png: SphereEx-Boot architecture overview]

Recommended server configuration:

Name                        CPU (cores)         Memory (GB)   Disk Capacity (GB)
SphereEx-Boot tool server   4 cores (minimum)   8             50
ShardingSphere-Proxy        8 cores (minimum)   16            200
ZooKeeper                   8 cores (minimum)   16            200

Port description:

Server                 Default port   Port description
SphereEx-Boot          22             SSH communication port
ShardingSphere-Proxy   3307           ShardingSphere-Proxy boot port
ZooKeeper              2181           ZooKeeper boot port
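Before installing, it can be useful to confirm that these default ports are reachable from the manager node. The following Python sketch is purely illustrative (it is not part of SphereEx-Boot) and probes the ports on the local machine:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Probe the default ports from the table above on the local machine.
for name, port in [("SSH", 22), ("ShardingSphere-Proxy", 3307), ("ZooKeeper", 2181)]:
    state = "open" if port_open("127.0.0.1", port) else "closed"
    print(f"{name:<22} {port:<5} {state}")
```

A closed ShardingSphere-Proxy or ZooKeeper port before installation is expected; an unreachable SSH port, however, will block deployment.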

Quick Start

This section describes how to install and use the SphereEx-Boot tool.

Installation Preparation

Operating System

The manager node where the SphereEx-Boot tool runs and the worker nodes where ShardingSphere-Proxy runs currently support mainstream Linux distributions (such as CentOS 7.x, Ubuntu 16+, etc.).

Note: You can run the command cat /proc/version to view the current operating system version information.

Manager Node

Ensure that the following software is installed on the manager node:

  • sshpass 1.0.0+
  • Python 2.7 or Python 3.5+
  • pip 20.0.0+
  • JDK 1.8+

Worker Node

Ensure that the following software is installed on the worker node:

  • sshpass 1.0.0+
  • Python 2.7 or Python 3.5+
  • JDK 1.8+
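A quick way to check that the required tools are present on a node's PATH is a sketch like the following. Note it only checks presence, not the minimum versions listed above (those can be confirmed separately, e.g. with sshpass -V, as shown in the FAQ):

```python
import shutil

def missing_tools(tools):
    """Return the tools from the list that are not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# 'java' stands in for the JDK requirement; adjust names for your system.
print(missing_tools(["sshpass", "python", "java"]))
```

An empty list means all the named tools were found.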

Install SphereEx-Boot

Installing SphereEx-Boot Online

Run the following command to install SphereEx-Boot.

Bash
$ curl -sSL https://download.sphere-ex.com/boot/install.sh | bash
############################################# 100.0%
Processing ./spex-0.1.0.tar.gz
  Preparing metadata (setup.py) ... done
……
Successfully install spex.....

Installing SphereEx-Boot Offline

Download the SphereEx-Boot at the following link. After the download is completed, run the following command to install.

Bash
$ pip install spex-0.1.0.tar.gz

If pip is not installed, you can unzip the spex-0.1.0.tar.gz installation package, enter the decompression directory, and run the following command to install.

Bash
$ python setup.py install

Confirm SphereEx-Boot Successful Install

Run the spex --help command to confirm whether the installation was successful. The following output confirms a successful install.

Bash
$ spex --help
Usage: spex [OPTIONS] COMMAND [ARGS]...

  Spex is a command line management tool for managing ShardingSphere-Proxy
  clusters

Options:
  --version  Version of spex
  --help     Show this message and exit.

Commands:
  cluster  Cluster management, such as install, start, stop and uninstall
  config   Cluster configuration management

Quickly Build a Sample Cluster

This section guides users on how to quickly set up a local sample cluster using SphereEx-Boot.

Server Preparation

  • Worker node IP: 127.0.0.1
  • Login account: root
  • Password: root

Note: Replace the above with your own IP address, login account, and password.

Prerequisite

The manager node must be able to log in to the worker node over SSH with an account and password (via sshpass) for mutual trust authorization.

Verify whether the manager node can log in to the worker node with an account and password.

Bash
$ ssh root@127.0.0.1
Last login: Tue Dec 21 15:33:32 2021 from 127.0.0.1

Operation

  1. Create a cluster named demo.
Bash
$ mkdir demo
$ cd demo
$ spex cluster init --name demo --download all
$ ls -l
total 126672
-rw-r--r--  1 spex-demo   48M 12  9 14:46 apache-shardingsphere-5.0.0-shardingsphere-proxy-bin.tar.gz
-rw-r--r--  1 spex-demo   12M 12  9 14:46 apache-zookeeper-3.6.3-bin.tar.gz
-rw-r--r--  1 spex-demo  741B 12  9 14:47 cluster-config.yaml
drwxr-xr-x  9 spex-demo  288B 12  9 14:54 conf
-rw-r--r--  1 spex-demo  984K 12  9 14:47 mysql-connector-java-5.1.47.jar
-rw-r--r--  1 spex-demo  1.1K 12  9 14:47 zoo.cfg
  2. Add the cluster configuration file to the SphereEx-Boot tool management environment.
Bash
$ spex config add -f cluster-config.yaml
  3. Install the demo cluster.
Bash
$ spex cluster install --name demo
Operation ShardingSphere-Proxy
check proxy install dir exist!
Completed.......
Operation ShardingSphere-Proxy
create install directory
127.0.0.1 : 3307 => success
install proxy
127.0.0.1 : 3307 => success
copying shell file
127.0.0.1 : 3307 => success
copying config file
127.0.0.1 : 3307 => success
copying agent config file
skipped host:127.0.0.1 item : None
copying depend file
127.0.0.1 : 3307 => success
Completed......
  4. Start the demo cluster.
Bash
$ spex cluster start --name demo
Operation ShardingSphere-Proxy
start proxy
127.0.0.1 : 3307 => success
The port is 3307
The classpath is /root/shardingsphere-proxy/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/conf:.:/root/sharding-proxy/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/lib/*:/root/sharding-proxy/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/ext-lib/*
Please check the STDOUT file: /root/sharding-proxy/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/logs/stdout.log
Completed......
  5. View the demo cluster running status.
Bash
$ spex cluster status --name demo
Operation ShardingSphere-Proxy
proxy status
127.0.0.1 : 3307 => success
PID:6355 PORT:3307 %CPU:20.6 %MEM:10.1 START:00:33 TIME:0:03
Results summary
+--------------+------+------+------+------+-------+------+
|     HOST     | PORT | PID  | %CPU | %MEM | START | TIME |
+--------------+------+------+------+------+-------+------+
|  127.0.0.1   | 3307 | 6355 | 20.6 | 10.1 | 00:33 | 0:03 |
+--------------+------+------+------+------+-------+------+
Completed......
Operation ZooKeeper
zookeeper status
127.0.0.1 : 2181 => success
/usr/bin/java
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: standalone
ZooKeeper JMX enabled by default
Using config: /root/zookeeper/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Results summary
+--------------+------+------------------+
|     HOST     | PORT |      STATUS      |
+--------------+------+------------------+
|  127.0.0.1   | 2181 | Mode: standalone |
+--------------+------+------------------+
Completed......
  6. Uninstall the demo cluster.

Run the command spex cluster uninstall --name demo to uninstall the installed demo cluster.

Bash
$ spex cluster uninstall --name demo
Are you sure to uninstall demo cluster ? [y/N]: y
Operation ShardingSphere-Proxy
stop proxy
127.0.0.1 : 3307 => success
ShardingSphere-Proxy does not started!
remove install directory
127.0.0.1 : 3307 => success
Completed......
Operation ZooKeeper
stop zookeeper
127.0.0.1 : 2181 => success
/usr/bin/java
Stopping zookeeper ... no zookeeper to stop (could not find file /root/zookeeper/data/zookeeper_server.pid)
ZooKeeper JMX enabled by default
Using config: /root/zookeeper/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
remove zookeeper data directory
127.0.0.1 : 2181 => success
remove zookeeper install directory
127.0.0.1 : 2181 => success
Completed......

Uninstall SphereEx-Boot

To delete or uninstall an existing SphereEx-Boot, refer to the following steps.

Run the command pip uninstall spex to uninstall SphereEx-Boot.

Bash
$ pip uninstall spex
Found existing installation: spex 0.1.0
Uninstalling spex-0.1.0:
  Would remove:
    /usr/local/bin/spex
    /usr/local/lib/python3.6/site-packages/spex-0.1.0-py3.6.egg-info
    /usr/local/lib/python3.6/site-packages/src/*
Proceed (Y/n)? y
  Successfully uninstalled spex-0.1.0

User Guide

Overview

SphereEx-Boot is a Python-based command line tool whose main function is to install and deploy ShardingSphere-Proxy. With it, you can install, uninstall, start, stop, and view the running status of ShardingSphere-Proxy, among other operations. The physical machine on which SphereEx-Boot is installed is called the manager node, and the physical machine on which ShardingSphere-Proxy is installed is called the worker node. SphereEx-Boot currently supports mainstream Linux systems.

Environment Preparation

Manager node software configuration

  • sshpass 1.0+
  • Python 2.7 / Python 3.5+
  • pip 20.0.0+
  • JDK 1.8+

Worker node software configuration

  • sshpass 1.0+
  • Python 2.7 / Python 3.5+
  • JDK 1.8+

Install SphereEx-Boot

Installing SphereEx-Boot Online

  1. Run the command curl -sSL https://download.sphere-ex.com/boot/install.sh | bash to install SphereEx-Boot.
[root@centos71 ~]# curl -sSL https://download.sphere-ex.com/boot/install.sh | bash
############################################# 100.0%
Processing ./spex-0.1.0.tar.gz
  Preparing metadata (setup.py) ... done
……
Successfully install spex.....

Installing SphereEx-Boot Offline

  1. Download the SphereEx-Boot package at the following link: https://download.sphere-ex.com/boot/spex-0.1.0.tar.gz. After the download is completed, run the following command to install.

[root@centos71 ~]# pip install spex-0.1.0.tar.gz
  Preparing metadata (setup.py) ... done
Requirement already satisfied: ansible<=2.10.7,>=2.8.0 in /usr/local/lib/python3.6/site-packages (from spex==0.1.0) (2.10.7)
……
Installing collected packages: spex
  Attempting uninstall: spex
    Found existing installation: spex 0.1.0
    Uninstalling spex-0.1.0:
      Successfully uninstalled spex-0.1.0
  Running setup.py install for spex ... done
Successfully installed spex-0.1.0

  2. After the installation is completed, run the command pip --version to confirm which pip environment performed the install (spex --version shows the installed SphereEx-Boot version).

[root@centos71 ~]# pip --version
pip 21.3.1 from /usr/local/lib/python3.6/site-packages/pip (python 3.6)

View SphereEx-Boot Help

  1. You can use the --help parameter to view help information for SphereEx-Boot commands and subcommands.
  • Example: view SphereEx-Boot's help information.
[root@centos71 demo]# spex --help
Usage: spex [OPTIONS] COMMAND [ARGS]...

  Spex is a command line management tool for managing ShardingSphere-Proxy
  clusters

Options:
  --version  Version of spex
  --help     Show this message and exit.

Commands:
  cluster  Cluster management, such as install, start, stop and uninstall
  config   Cluster configuration management
  • Example: view SphereEx-Boot's cluster help information.
[root@centos71 demo]# spex cluster --help
Usage: spex cluster [OPTIONS] COMMAND [ARGS]...

  Cluster management, such as install, start, stop and uninstall

Options:
  --help  Show this message and exit.

Commands:
  download   Download ShardingSphere-Proxy, Zookeeper, Database driver...
  init       Quickly initialization a cluster configuration --proxy-host can...
  install    Install cluster of ShardingSphere-Proxy or zookeeper
  list       List already added clusters
  scale      Scale cluster of ShardingSphere-Proxy It can be scale out...
  start      Start cluster of ShardingSphere-Proxy or zookeeper
  status     Status cluster of ShardingSphere-Proxy or ZooKeeper
  stop       Stop cluster of ShardingSphere-Proxy or zookeeper
  uninstall  Uninstall cluster of ShardingSphere-Proxy or zookeeper
  • Example: view SphereEx-Boot config help information.
[root@centos71 demo]# spex config --help
Usage: spex config [OPTIONS] COMMAND [ARGS]...

  Cluster configuration management

Options:
  --help  Show this message and exit.

Commands:
  add       Add cluster environment.
  check     Check the cluster configuration file you can use --file or...
  delete    Delete cluster configuration
  info      Show cluster configuration content
  template  Show cluster configuration template.

Using SphereEx-Boot

Cluster Topology Profile Operation

Cluster Topology Profile Description

When deploying a cluster through SphereEx-Boot, you need to provide a cluster topology configuration file in YAML format. The configuration items are as follows:

  • cluster_name: the name of the cluster
  • install_user: the user name for logging in to the worker node
  • install_password: the password for logging in to the worker node
  • proxy: ShardingSphere-Proxy configuration
    • version: ShardingSphere-Proxy version identifier
    • file: path of the ShardingSphere-Proxy installation package on the manager node
    • conf_dir: directory of the ShardingSphere-Proxy service configuration files on the manager node
    • depend_files: paths of driver jar files on the manager node
    • install_dir: deployment directory of ShardingSphere-Proxy on the worker node
    • port: startup port of ShardingSphere-Proxy on the worker node
    • overwrite: whether to reinstall if the installation directory already exists on the worker node
    • servers: list of worker nodes
      • host: IP address of the worker node
      • port: startup port of ShardingSphere-Proxy on this worker node (optional; defaults to the value configured under proxy)
      • install_dir: installation directory of ShardingSphere-Proxy on this worker node (optional; defaults to the value configured under proxy)
      • agent_conf_file: path of the agent configuration file on the manager node (optional; defaults to the value configured under proxy)
      • overwrite: whether to reinstall if the installation directory already exists (optional; defaults to the value configured under proxy)
  • zookeeper: ZooKeeper configuration (can be omitted if ZooKeeper is not required)
    • version: ZooKeeper version identifier
    • file: path of the ZooKeeper installation package on the manager node
    • conf_file: path of the ZooKeeper zoo.cfg configuration file on the manager node
    • install_dir: installation directory of ZooKeeper on the worker node
    • data_dir: dataDir value in the zoo.cfg configuration file on the worker node
    • port: startup port of ZooKeeper on the worker node
    • overwrite: whether to reinstall if the installation directory already exists on the worker node
    • servers: list of ZooKeeper nodes
      • host: IP address of the worker node
      • myid: myid value of this node in the ZooKeeper cluster
      • port: startup port of ZooKeeper on this worker node (optional; defaults to the value configured under zookeeper)
      • install_dir: installation directory of ZooKeeper on this worker node (optional; defaults to the value configured under zookeeper)
      • conf_file: path of the zoo.cfg configuration file on the manager node (optional; defaults to the value configured under zookeeper)
      • data_dir: dataDir value in zoo.cfg on this worker node (optional; defaults to the value configured under zookeeper)
      • overwrite: whether to reinstall if the installation directory already exists (optional; defaults to the value configured under zookeeper)

Cluster topology configuration example:

cluster_name: demo
install_user: root
install_password: 'root'
proxy:
  version: '5.0.0'
  install_dir: /opt/shardingsphere-proxy
  conf_dir: /root/demo/conf
  file: /root/demo/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin.tar.gz
  depend_files:
    - /root/demo/mysql-connector-java-5.1.47.jar
  port: 3307
  overwrite: true
  servers:
    - host: 10.0.1.1
zookeeper:
  version: '3.6.3'
  install_dir: /opt/zookeeper
  data_dir: /tmp/zookeeper
  conf_file: /root/demo/zoo.cfg
  file: /root/demo/apache-zookeeper-3.6.3-bin.tar.gz
  port: 2181
  overwrite: true
  servers:
    - host: 10.0.1.1
      myid: 1
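The structure above can be sanity-checked programmatically before handing the file to spex config check (which performs its own validation). The following is a hypothetical Python sketch operating on the parsed form of the YAML, written inline as a dict literal so it needs no extra packages:

```python
# Parsed form of a minimal topology (e.g. what yaml.safe_load would return).
topology = {
    "cluster_name": "demo",
    "install_user": "root",
    "install_password": "root",
    "proxy": {
        "version": "5.0.0",
        "port": 3307,
        "servers": [{"host": "10.0.1.1"}],
    },
    "zookeeper": {
        "version": "3.6.3",
        "port": 2181,
        "servers": [{"host": "10.0.1.1", "myid": 1}],
    },
}

def validate(cfg):
    """Return a list of problems; an empty list means the topology looks sane."""
    errors = []
    for key in ("cluster_name", "install_user", "install_password", "proxy"):
        if key not in cfg:
            errors.append(f"missing top-level key: {key}")
    if not cfg.get("proxy", {}).get("servers"):
        errors.append("proxy.servers must list at least one worker node")
    for srv in cfg.get("proxy", {}).get("servers", []):
        if "host" not in srv:
            errors.append("each proxy server entry needs a host")
    zk = cfg.get("zookeeper")  # the zookeeper section is optional
    if zk:
        myids = [s.get("myid") for s in zk.get("servers", [])]
        if len(myids) != len(set(myids)):
            errors.append("zookeeper myid values must be unique")
    return errors

print(validate(topology))  # -> []
```

The checks mirror the rules in the description above: proxy is mandatory, zookeeper is optional, and each ZooKeeper node needs a distinct myid.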

Cluster Topology File Initialization

  1. Export cluster topology configuration file.
  • Run the following command to generate a cluster topology template named cluster-template.yaml in the current directory. Fill in your actual configuration data according to the cluster topology description above.
$ spex config template --type full --output ./
  • Run the command spex cluster init to initialize cluster configuration file and content.

Check the Cluster Topology File

  1. Run the command spex config check -f <cluster-file> to check the configuration of the specified cluster topology file.

Example: check the cluster topology file named cluster-config.yaml.

[root@centos71 demo]# spex config check -f cluster-config.yaml
Proxy are no errors
Zookeeper are no errors

Add Cluster Topology Information

  1. Run the command spex config add to add the cluster topology configuration file to SphereEx-Boot for management. After adding it, SphereEx-Boot can manage the cluster by its cluster name.
[root@centos71 demo]# spex config add -f cluster-config.yaml
Successfully add cluster

View the Added Cluster Topology

  1. Run the command spex cluster list to view the added cluster topologies. In this example there are two, demo and demo1.
[root@centos71 demo]# spex cluster list
+--------------+
| Cluster Name |
+--------------+
|     demo     |
|    demo1     |
+--------------+

Delete Cluster Topology

  1. Run spex config delete <cluster-name> to remove the specified cluster topology from SphereEx-Boot; this does not affect ShardingSphere-Proxy and ZooKeeper on the worker nodes.

Example: remove a cluster named demo.

[root@centos71 demo]# spex config delete demo
Operation ShardingSphere-Proxy
check proxy install dir exist!
10.0.1.1 : 3307
/demo/shardingsphere-proxy is existence!
Completed......
Operation ZooKeeper
check ZooKeeper install dir exist!
10.0.1.1 : 2181
/demo/zookeeper/ is existence!
Completed......
Are you sure to delete configuration of demo? [y/N]: y
Completed.......

View Cluster Topology Content

  1. Run the command spex config info --name <cluster-name> to view the specified cluster topology content.

Example: view the contents of the cluster topology named demo.

[root@centos71 demo]# spex config info --name demo
proxy
+--------------+-------------+------+----------------------------+
| install_user |     host    | port |         install_dir        |
+--------------+-------------+------+----------------------------+
|     root     |   10.0.1.1  | 3307 | /demo/shardingsphere-proxy |
+--------------+-------------+------+----------------------------+
zookeeper
+--------------+-------------+------+------+------------------+----------------------+
| install_user |     host    | port | myid |    install_dir   |       data_dir       |
+--------------+-------------+------+------+------------------+----------------------+
|     root     |   10.0.1.1  | 2181 |  1   | /demo/zookeeper/ | /demo/zookeeper/data |
+--------------+-------------+------+------+------------------+----------------------+

  2. Run the command spex config info --name <cluster-name> --detail to view the detailed configuration of the specified cluster topology.

[root@centos71 demo]# spex config info --name demo --detail
cluster_name: demo
install_user: root
install_password: root
proxy:
  version: '1.0'
  file: /root/demo/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin.tar.gz
  conf_dir: /root/demo/conf
  agent_conf_file:
  depend_files:
  - /root/demo/mysql-connector-java-5.1.47.jar
  install_dir: /demo/shardingsphere-proxy
  port: 3307
  overwrite: false
  servers:
  - host: 10.0.1.1
zookeeper:
  version: '1.0'
  file: /root/demo/apache-zookeeper-3.6.3-bin.tar.gz
  conf_file: /root/demo/zoo.cfg
  install_dir: /demo/zookeeper/
  data_dir: /demo/zookeeper/data
  port: 2181
  overwrite: false
  servers:
  - host: 10.0.1.1
    myid: 1

Install Cluster

Requirements

The manager node must be able to log in to the worker nodes over SSH with an account and password (via sshpass) for mutual trust authorization.

Environment Preparation

Run the command spex cluster download to download all installation packages, including:

  • ZooKeeper
  • ShardingSphere-Proxy
  • MySQL Driver

Operation

  1. Cluster topology file initialization. Please refer to Cluster Topology File Initialization for details.
  2. Check the cluster topology file. Please refer to Check the Cluster Topology File for details.
  3. Add a cluster topology file. Please refer to Add Cluster Topology Information for details.
  4. Run the command spex cluster install --name <cluster-name> to install the cluster.

Example: install the added cluster named demo.

[root@centos71 demo]# spex cluster install --name demo
Operation ShardingSphere-Proxy
check proxy install dir exist!
Completed......
Operation ShardingSphere-Proxy
create install directory
10.0.1.1 : 3307 => success
install proxy
10.0.1.1 : 3307 => success
copying shell file
10.0.1.1 : 3307 => success
copying config file
10.0.1.1 : 3307 => success
copying agent config file
skipped host : 10.0.1.1 item : None
copying depend file
10.0.1.1 : 3307 => success
Completed......
Operation ZooKeeper
check ZooKeeper install dir exist!
Completed......
Operation ZooKeeper
create ZooKeeper install directory
10.0.1.1 : 2181 => success
create ZooKeeper data directory
10.0.1.1 : 2181 => success
install ZooKeeper
10.0.1.1 : 2181 => success
copy ZooKeeper config file
10.0.1.1 : 2181 => success
create myid
10.0.1.1 : 2181 => success
Completed......

Note: After updating the cluster configuration file, you must run spex config add again to update the cluster topology information in SphereEx-Boot before continuing the installation.

Start Cluster

  1. Run the command spex cluster start --name <cluster-name> to start the cluster.

Example: start the cluster named demo.

[root@centos71 demo]# spex cluster start --name demo
Operation ZooKeeper
start ZooKeeper
10.0.1.1 : 2181 => success
/usr/bin/java
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: /demo/zookeeper/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Completed......
Operation ShardingSphere-Proxy
start proxy
10.0.1.1 : 3307 => success
The port is 3307
The classpath is /demo/shardingsphere-proxy/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/conf:.:/demo1/shardingsphere-proxy/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/lib/*:/demo1/shardingsphere-proxy/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/ext-lib/*
Please check the STDOUT file: /demo/shardingsphere-proxy/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/logs/stdout.log
Completed......

View Cluster Status

  1. Run the command spex cluster status --name <cluster-name> to view the cluster running status.

Example: view the running status of a cluster named demo.

[root@centos71 demo]# spex cluster status --name demo
Operation ShardingSphere-Proxy
proxy status
10.0.1.1 : 3307 => success
PID:14014 PORT:3307 %CPU:16.9 %MEM:0.2 START:12:05 TIME:0:01
Results summary
+-------------+------+-------+------+------+-------+------+
|    HOST     | PORT |  PID  | %CPU | %MEM | START | TIME |
+-------------+------+-------+------+------+-------+------+
|  10.0.1.1   | 3307 | 14014 | 16.9 | 0.2  | 12:05 | 0:01 |
+-------------+------+-------+------+------+-------+------+
Completed......
Operation ZooKeeper
ZooKeeper status
10.0.1.1 : 2181 => success
/usr/bin/java
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: standalone
ZooKeeper JMX enabled by default
Using config: /demo/zookeeper/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Results summary
+-------------+------+------------------+
|    HOST     | PORT |      STATUS      |
+-------------+------+------------------+
|  10.0.1.1   | 2181 | Mode: standalone |
+-------------+------+------------------+
Completed......

Stop Cluster

  1. Run the command spex cluster stop --name <cluster-name> to stop a cluster.

Example: stop the cluster named demo.

[root@centos71 demo]# spex cluster stop --name demo
Operation ShardingSphere-Proxy
stop proxy
10.0.1.1 : 3307 => success
Stopping the ShardingSphere-Proxy
STOPED PID:9157 PORT:3307
Completed......
Operation ZooKeeper
stop ZooKeeper
10.0.1.1 : 2181 => success
/usr/bin/java
Stopping zookeeper ... STOPPED
ZooKeeper JMX enabled by default
Using config: /demo/zookeeper/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Completed......

Uninstall Cluster

  1. Run the command spex cluster uninstall --name <cluster-name> to uninstall a cluster; this deletes the deployment directory on the worker node.

Example: uninstall a cluster named demo.

[root@centos71 demo]# spex cluster uninstall --name demo
Are you sure to uninstall demo cluster ? [y/N]: y
Operation ShardingSphere-Proxy
stop proxy
10.0.1.1 : 3307 => success
Stopping the ShardingSphere-Proxy
STOPED PID:14014 PORT:3307
remove install directory
10.0.1.1 : 3307 => success
Completed......
Operation ZooKeeper
stop ZooKeeper
10.0.1.1 : 2181 => success
/usr/bin/java
Stopping zookeeper ... STOPPED
ZooKeeper JMX enabled by default
Using config: /demo/zookeeper/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
remove ZooKeeper data directory
10.0.1.1 : 2181 => success
remove ZooKeeper install directory
10.0.1.1 : 2181 => success
Completed......

Scale out Cluster

  1. Run the command spex cluster scale --name <cluster-name> --host <host-ip> to scale out a cluster.

Example: in the demo cluster, scale out by adding a ShardingSphere-Proxy whose IP address is 10.0.1.2.

$ spex cluster scale --name demo --host 10.0.1.2

  2. Run the command spex cluster install -n demo --type proxy --host 10.0.1.2 to install the new node.

[root@community /]# spex cluster install -n demo --type proxy --host 10.0.1.2
ShardingSphere-Proxy
check proxy install dir exist!
Completed.......
ShardingSphere-Proxy
create install directory
10.0.1.2:3388 =>success
install proxy
10.0.1.2:3388 =>success
copying shell file
10.0.1.2:3388 =>success
copying config file
10.0.1.2:3388 =>success
copying agent config file
skipped host: 10.0.1.2 item:None
copying depend file
10.0.1.2:3388 =>success
Completed.......

  3. Run the command spex cluster start --name demo --type proxy --host 10.0.1.2 to start the new node.

[root@community /]# spex cluster start --name demo --type proxy --host 10.0.1.2
ShardingSphere-Proxy
start proxy
10.0.1.2:3388 =>success
Starting the ShardingSphere-Proxy ...
The port is 3388
The classpath is /root/demo/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/conf:.:/root/demo/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/lib/*:/root/demo/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/ext-lib/*
Please check the STDOUT file: /root/demo/apache-shardingsphere-5.0.0-shardingsphere-proxy-bin/logs/stdout.log
Completed.......

Scale in Cluster

  1. Run the command spex cluster stop --name <cluster-name> --host <host-ip> to stop the specified node.

Example: in demo cluster, scale in the ShardingSphere-Proxy whose IP address is 10.0.1.2.

[root@centos71 ~]# spex cluster stop --name demo --type proxy --host 10.0.1.2
Operation ShardingSphere-Proxy
stop proxy
10.0.1.2 : 3388 => success
Stopping the ShardingSphere-Proxy
STOPED PID:29550 PORT:3388
Completed......

FAQ

How to configure Python environment variables?

When installing Python, add Python's bin directory to the PATH environment variable. This allows you to install the SphereEx-Boot tool using the pip of that Python installation.

For example, append export PATH=/usr/local/python3/bin:$PATH to the end of the ~/.bashrc file, then run source ~/.bashrc to apply the environment variable.
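To confirm the change took effect, you can inspect the current PATH entries; this small Python check is purely illustrative:

```python
import os

def on_path(directory: str) -> bool:
    """Return True if directory is one of the entries in $PATH."""
    return directory in os.environ.get("PATH", "").split(os.pathsep)

# Directory from the example above; adjust to your Python install location.
print(on_path("/usr/local/python3/bin"))
```

Equivalently, `echo $PATH` in the shell shows the same information.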

How to view dependent software versions?

  • Run the command sshpass -V to view the sshpass version.
Bash
$ sshpass -V
sshpass 1.06
  • Run the command python -V to view the Python version.
Bash
$ python -V
Python 3.6.8
  • Run the command pip --version to view the pip version.
Bash
$ pip --version
pip 21.3.1 from /usr/local/lib/python3.6/site-packages/pip (python 3.6)
  • If the pip version is outdated, you can use the command pip install --upgrade pip to upgrade pip. If the upgrade is unsuccessful, please reinstall pip.

How to install pip package?

  • Install pip in a Python 3 environment:
Bash
$ wget https://bootstrap.pypa.io/get-pip.py
$ python get-pip.py   # or python3 get-pip.py

  • Install pip in a Python 2 environment:
Bash
$ wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
$ python get-pip.py

How to configure SSH mutual trust?

Set up key-based passwordless SSH login (if you have already configured passwordless login, you can skip the following steps).

  1. Generate a key pair (if id_rsa.pub already exists in the ~/.ssh/ directory, you can skip this step).
Bash
$ ssh-keygen -t rsa
  2. Authorize the key on the worker node using the command ssh-copy-id -i ~/.ssh/id_rsa.pub <user>@<host-ip>.
Bash
$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@127.0.0.1
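After the key is copied, you can verify that login no longer prompts for a password by probing with SSH's BatchMode, which fails instead of asking interactively. The helper below (hypothetical, not part of SphereEx-Boot) just builds such a probe command:

```python
import shlex

def batch_ssh_cmd(user: str, host: str) -> list:
    """Build a non-interactive SSH probe. BatchMode=yes forbids password
    prompts, so an exit status of 0 means key-based login works."""
    return ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
            f"{user}@{host}", "true"]

print(shlex.join(batch_ssh_cmd("root", "127.0.0.1")))
# -> ssh -o BatchMode=yes -o ConnectTimeout=5 root@127.0.0.1 true
```

Run the result with subprocess.run(batch_ssh_cmd(...)) (or paste it into a shell) and check that the exit status is 0.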

© 2022 SphereEx. All Rights Reserved.