Posted to notifications@shardingsphere.apache.org by GitBox <gi...@apache.org> on 2021/11/26 05:30:59 UTC

[GitHub] [shardingsphere] wsm12138 opened a new issue #13811: java.lang.IndexOutOfBoundsException: readerIndex(1) + length(4) exceeds writerIndex(1): PooledSlicedByteBuf(ridx: 1, widx: 1, cap: 1/1, unwrapped: PooledUnsafeDirectByteBuf(ridx: 11, widx: 19, cap: 2048))

wsm12138 opened a new issue #13811:
URL: https://github.com/apache/shardingsphere/issues/13811


   ## Bug Report
   
   **For English only**, other languages will not be accepted.
   
   Before reporting a bug, make sure you have:
   
   - Searched open and closed [GitHub issues](https://github.com/apache/shardingsphere/issues).
   - Read documentation: [ShardingSphere Doc](https://shardingsphere.apache.org/document/current/en/overview).
   
   Please pay attention to the issues you submit, because we may need more details.
   If there is no further response and we cannot reproduce the issue with the current information, we will **close it**.
   
   Please answer these questions before submitting your issue. Thanks!
   
   ### Which version of ShardingSphere did you use?
   5.0.1-SNAPSHOT
   master  
   commit 654c876aff05c8d51261e9ffb8d46f884fb685c7
   
   ### Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?
   ShardingSphere-Proxy
   ### Expected behavior
   without error
   ### Actual behavior
   ```
   [ERROR] 2021-11-25 21:46:41.736 [epollEventLoopGroup-3-5] o.a.s.p.f.n.FrontendChannelInboundHandler - Exception occur:
   java.lang.IndexOutOfBoundsException: readerIndex(1) + length(4) exceeds writerIndex(4): PooledSlicedByteBuf(ridx: 1, widx: 4, cap: 4/4, unwrapped: PooledUnsafeDirectByteBuf(ridx: 7, widx: 19, cap: 2048))
   	at io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1442)
   	at io.netty.buffer.AbstractByteBuf.readIntLE(AbstractByteBuf.java:817)
   	at org.apache.shardingsphere.db.protocol.mysql.payload.MySQLPacketPayload.readInt4(MySQLPacketPayload.java:115)
   	at org.apache.shardingsphere.db.protocol.mysql.packet.handshake.MySQLHandshakeResponse41Packet.<init>(MySQLHandshakeResponse41Packet.java:56)
   	at org.apache.shardingsphere.proxy.frontend.mysql.authentication.MySQLAuthenticationEngine.authPhaseFastPath(MySQLAuthenticationEngine.java:88)
   	at org.apache.shardingsphere.proxy.frontend.mysql.authentication.MySQLAuthenticationEngine.authenticate(MySQLAuthenticationEngine.java:75)
   	at org.apache.shardingsphere.proxy.frontend.netty.FrontendChannelInboundHandler.authenticate(FrontendChannelInboundHandler.java:80)
   	at org.apache.shardingsphere.proxy.frontend.netty.FrontendChannelInboundHandler.channelRead(FrontendChannelInboundHandler.java:72)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
   	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
   	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
   	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311)
   	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432)
   	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
   	at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
   	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
   	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
   	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
   	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
   	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
   	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
   	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
   	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
   	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
   	at java.lang.Thread.run(Thread.java:748)
   [ERROR] 2021-11-25 21:46:41.745 [epollEventLoopGroup-3-5] o.a.s.p.f.n.FrontendChannelInboundHandler - Exception occur:
   java.lang.IndexOutOfBoundsException: readerIndex(1) + length(4) exceeds writerIndex(1): PooledSlicedByteBuf(ridx: 1, widx: 1, cap: 1/1, unwrapped: PooledUnsafeDirectByteBuf(ridx: 11, widx: 19, cap: 2048))
   	at io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1442)
   	at io.netty.buffer.AbstractByteBuf.readIntLE(AbstractByteBuf.java:817)
   	at org.apache.shardingsphere.db.protocol.mysql.payload.MySQLPacketPayload.readInt4(MySQLPacketPayload.java:115)
   	at org.apache.shardingsphere.db.protocol.mysql.packet.handshake.MySQLHandshakeResponse41Packet.<init>(MySQLHandshakeResponse41Packet.java:56)
   	at org.apache.shardingsphere.proxy.frontend.mysql.authentication.MySQLAuthenticationEngine.authPhaseFastPath(MySQLAuthenticationEngine.java:88)
   	at org.apache.shardingsphere.proxy.frontend.mysql.authentication.MySQLAuthenticationEngine.authenticate(MySQLAuthenticationEngine.java:75)
   	at org.apache.shardingsphere.proxy.frontend.netty.FrontendChannelInboundHandler.authenticate(FrontendChannelInboundHandler.java:80)
   	at org.apache.shardingsphere.proxy.frontend.netty.FrontendChannelInboundHandler.channelRead(FrontendChannelInboundHandler.java:72)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
   	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
   	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
   	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311)
   	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432)
   	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
   	at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
   	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
   	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
   	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
   	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
   	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
   	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
   	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
   	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
   	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
   	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
   	at java.lang.Thread.run(Thread.java:748)
   ```
   ### Reason analyze (If you can)
   
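   The stack traces above show `MySQLHandshakeResponse41Packet` calling `MySQLPacketPayload.readInt4`, which needs 4 readable bytes, while the sliced payload only has 1 readable byte left (`ridx: 1, widx: 1`). A minimal plain-Java sketch of the readable-bytes check Netty performs before `readIntLE` (an illustrative re-implementation, not Netty code):

   ```java
   // Illustrative re-implementation of the bounds check that
   // io.netty.buffer.AbstractByteBuf#checkReadableBytes0 performs
   // before readIntLE; names and structure are simplified.
   public final class ReadableBytesCheck {

       static void checkReadable(int readerIndex, int length, int writerIndex) {
           if (readerIndex + length > writerIndex) {
               throw new IndexOutOfBoundsException(String.format(
                       "readerIndex(%d) + length(%d) exceeds writerIndex(%d)",
                       readerIndex, length, writerIndex));
           }
       }

       public static void main(String[] args) {
           // Values from the second trace: a 1-byte slice, readIntLE wants 4 bytes.
           try {
               checkReadable(1, 4, 1);
           } catch (IndexOutOfBoundsException e) {
               System.out.println(e.getMessage());
           }
       }
   }
   ```

   In other words, the handshake-response packet arriving from the client is shorter than the 4-byte capability-flags field the parser expects, which points at a truncated or mis-framed packet rather than at Netty itself.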
   ### Steps to reproduce the behavior, such as: SQL to execute, sharding rule configuration, when the exception occurs, etc.
   - https://hub.fastgit.org/pingcap/benchmarksql.git
   - 5.0-mysql-support-opt-2.1
   ### Example codes for reproducing this issue (such as a GitHub link).
   # BenchmarkSQL with ShardingSphere-Proxy 
   
   ## BenchmarkSQL
   https://hub.fastgit.org/pingcap/benchmarksql.git
   5.0-mysql-support-opt-2.1
   
   ## MySQL
   Version: 5.7.26, Port: 3306
   
   ## ShardingSphere-Proxy
   https://github.com.cnpmjs.org/apache/shardingsphere.git
   master 
   commit 654c876aff05c8d51261e9ffb8d46f884fb685c7
   
   ## About ShardingSphere-Proxy 
   
   ### server.yaml
   ```
   rules:
     - !AUTHORITY
       users:
         - root@%:root
         - sharding@:sharding
       provider:
         type: NATIVE
   
   
   props:
     #max-connections-size-per-query: 1
     #executor-size: 16  # Infinite by default.
     #proxy-frontend-flush-threshold: 128  # The default value is 128.
     #proxy-opentracing-enabled: false
     #proxy-hint-enabled: false
     #sql-show: false
     #check-table-metadata-enabled: false
     #lock-wait-timeout-milliseconds: 50000 # The maximum time to wait for a lock
       # Proxy backend query fetch size. A larger value may increase the memory usage of ShardingSphere Proxy.
       # The default value is -1, which means set the minimum value for different JDBC drivers.
     #proxy-backend-query-fetch-size: 200
     #check-duplicate-table-enabled: false
     proxy-frontend-executor-size: 200 # Proxy frontend executor size. The default value is 0, which means let Netty decide.
        # Available options of proxy backend executor suitable: OLAP (default), OLTP. The OLTP option may reduce the time cost of writing packets to the client, but it may increase the latency of SQL execution
        # if there are more client connections than proxy-frontend-netty-executor-size, especially when executing slow SQL.
     proxy-backend-executor-suitable: OLTP
   ```
   
   ### config-sharding.yaml
   ```
   schemaName: proxy_tpcc
   dataSources:
     ds_0:
       url: jdbc:mysql://IP.10.21:3306/tpcc0
       username: root
       password: Passwd
       connectionTimeoutMilliseconds: 3000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 200
       minPoolSize: 0
     ds_1:
       url: jdbc:mysql://IP.10.21:3306/tpcc1
       username: root
       password: Passwd
       connectionTimeoutMilliseconds: 3000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 200
       minPoolSize: 0
     ds_2:
       url: jdbc:mysql://IP.10.21:3306/tpcc2
       username: root
       password: Passwd
       connectionTimeoutMilliseconds: 3000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 200
       minPoolSize: 0
     ds_3:
       url: jdbc:mysql://IP.10.21:3306/tpcc3
       username: root
       password: Passwd
       connectionTimeoutMilliseconds: 3000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 200
       minPoolSize: 0
     ds_4:
       url: jdbc:mysql://IP.10.21:3306/tpcc4
       username: root
       password: Passwd
       connectionTimeoutMilliseconds: 3000
       idleTimeoutMilliseconds: 60000
       maxLifetimeMilliseconds: 1800000
       maxPoolSize: 200
       minPoolSize: 0
   
   rules:
     - !SHARDING
       bindingTables:
       #  - bmsql_warehouse, bmsql_customer
       #  - bmsql_stock, bmsql_district, bmsql_order_line
         -  bmsql_district, bmsql_order_line
       defaultDatabaseStrategy:
         none: null
       defaultTableStrategy:
         none: null
       keyGenerators:
         snowflake:
           props:
             worker-id: 123
           type: SNOWFLAKE
       shardingAlgorithms:
         ds_bmsql_customer_inline:
           props:
             algorithm-expression: ds_${c_id % 5}
           type: INLINE
         ds_bmsql_district_inline:
           props:
             algorithm-expression: ds_${d_w_id % 5}
           type: INLINE
         ds_bmsql_history_inline:
           props:
             algorithm-expression: ds_${h_w_id % 5}
           type: INLINE
         ds_bmsql_item_inline:
           props:
             algorithm-expression: ds_${i_id % 5}
           type: INLINE
         ds_bmsql_new_order_inline:
           props:
             algorithm-expression: ds_${no_w_id % 5}
           type: INLINE
         ds_bmsql_oorder_inline:
           props:
             algorithm-expression: ds_${o_w_id % 5}
           type: INLINE
         ds_bmsql_order_line_inline:
           props:
             algorithm-expression: ds_${ol_w_id % 5}
           type: INLINE
         ds_bmsql_stock_inline:
           props:
             algorithm-expression: ds_${s_w_id % 5}
           type: INLINE
         ds_bmsql_warehouse_inline:
           props:
             algorithm-expression: ds_${w_id % 5}
           type: INLINE
   
         t_bmsql_item:
           type: INLINE
           props:
             algorithm-expression: bmsql_item_${i_im_id % 2}
         t_bmsql_order_line:
           type: INLINE
           props:
             algorithm-expression: bmsql_order_line_${ol_number % 6}
   
   
       tables:
         bmsql_config:
           actualDataNodes: ds_0.bmsql_config
         bmsql_customer:
           actualDataNodes: ds_${0..4}.bmsql_customer
           # tableStrategy:
           #   standard:
           #     shardingColumn: c_id
           #     shardingAlgorithmName: t_bmsql_customer
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_customer_inline
               shardingColumn: c_id
         bmsql_district:
           actualDataNodes: ds_${0..4}.bmsql_district
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_district_inline
               shardingColumn: d_w_id
         bmsql_history:
           actualDataNodes: ds_${0..4}.bmsql_history
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_history_inline
               shardingColumn: h_w_id
         bmsql_item:
           actualDataNodes: ds_${0..4}.bmsql_item_${0..1}
           tableStrategy:
             standard:
               shardingColumn: i_im_id
               shardingAlgorithmName: t_bmsql_item
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_item_inline
               shardingColumn: i_id
         bmsql_new_order:
           actualDataNodes: ds_${0..4}.bmsql_new_order
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_new_order_inline
               shardingColumn: no_w_id
         bmsql_oorder:
           actualDataNodes: ds_${0..4}.bmsql_oorder
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_oorder_inline
               shardingColumn: o_w_id
         bmsql_order_line:
           actualDataNodes: ds_${0..4}.bmsql_order_line_${0..5}
           tableStrategy:
             standard:
               shardingColumn: ol_number
               shardingAlgorithmName: t_bmsql_order_line
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_order_line_inline
               shardingColumn: ol_w_id
         bmsql_stock:
           actualDataNodes: ds_${0..4}.bmsql_stock
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_stock_inline
               shardingColumn: s_w_id
         bmsql_warehouse:
           actualDataNodes: ds_${0..4}.bmsql_warehouse
           databaseStrategy:
             standard:
               shardingAlgorithmName: ds_bmsql_warehouse_inline
               shardingColumn: w_id
   ```
   ### bin/start.sh
   ```
   SERVER_NAME=ShardingSphere-Proxy
   
   cd `dirname $0`
   cd ..
   DEPLOY_DIR=`pwd`
   
   LOGS_DIR=${DEPLOY_DIR}/logs
   if [ ! -d ${LOGS_DIR} ]; then
       mkdir ${LOGS_DIR}
   fi
   
   STDOUT_FILE=${LOGS_DIR}/stdout.log
   EXT_LIB=${DEPLOY_DIR}/ext-lib
   
   CLASS_PATH=.:${DEPLOY_DIR}/lib/*:${EXT_LIB}/*
   
   JAVA_OPTS=" -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true "
   
   JAVA_MEM_OPTS=" -server -Xmx16g -Xms16g -Xmn8g -Xss1m -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 "
   
   MAIN_CLASS=org.apache.shardingsphere.proxy.Bootstrap
   
   print_usage() {
       echo "usage: start.sh [port] [config_dir]"
       echo "  port: proxy listen port, default is 3307"
       echo "  config_dir: proxy config directory, default is conf"
       exit 0
   }
   
   if [ "$1" == "-h" ] || [ "$1" == "--help" ] ; then
       print_usage
   fi
   
   echo "Starting the $SERVER_NAME ..."
   
   if [ $# == 0 ]; then
       CLASS_PATH=${DEPLOY_DIR}/conf:${CLASS_PATH}
   fi
   
   if [ $# == 1 ]; then
       MAIN_CLASS=${MAIN_CLASS}" "$1
       echo "The port is $1"
       CLASS_PATH=${DEPLOY_DIR}/conf:${CLASS_PATH}
   fi
   
   if [ $# == 2 ]; then
       MAIN_CLASS=${MAIN_CLASS}" "$1" "$2
       echo "The port is $1"
       echo "The configuration path is $DEPLOY_DIR/$2"
       CLASS_PATH=${DEPLOY_DIR}/$2:${CLASS_PATH}
   fi
   
   echo "The classpath is ${CLASS_PATH}"
   
   nohup java ${JAVA_OPTS} ${JAVA_MEM_OPTS} -classpath ${CLASS_PATH} ${MAIN_CLASS} >> ${STDOUT_FILE} 2>&1 &
   sleep 1
   echo "Please check the STDOUT file: $STDOUT_FILE"
   ```
   
   ## About BenchmarkSQL 
   ### runDatabaseBuild.sh
   ```
   #!/bin/sh
   
   echo $(date "+%Y-%m-%d %H:%M:%S")
   
   if [ $# -lt 1 ] ; then
       echo "usage: $(basename $0) PROPS [OPT VAL [...]]" >&2
       exit 2
   fi
   
   PROPS="$1"
   shift
   if [ ! -f "${PROPS}" ] ; then
       echo "${PROPS}: no such file or directory" >&2
       exit 1
   fi
   DB="$(grep '^db=' $PROPS | sed -e 's/^db=//')"
   
   BEFORE_LOAD="tableCreates"
   #AFTER_LOAD="indexCreates foreignKeys extraHistID buildFinish"
   AFTER_LOAD="indexCreates buildFinish"
   for step in ${BEFORE_LOAD} ; do
       ./runSQL.sh "${PROPS}" $step
   done
   
   ./runLoader.sh "${PROPS}" $*
   
   for step in ${AFTER_LOAD} ; do
       ./runSQL.sh "${PROPS}" $step
   done
   
   echo $(date "+%Y-%m-%d %H:%M:%S")
   
    ```
    ### props.proxy_mysql
    ```
   db=mysql
   driver=com.mysql.jdbc.Driver
   conn=jdbc:mysql://IP.10.25:3307/proxy_tpcc?serverTimezone=UTC&useSSL=false&cachePrepStmts=true&prepStmtCacheSize=8000
   user=root
   password=root
   
   warehouses=200
   loadWorkers=200
   
   terminals=200
   //To run specified transactions per terminal- runMins must equal zero
   runTxnsPerTerminal=0
   //To run for specified minutes- runTxnsPerTerminal must equal zero
   runMins=10
   //Number of total transactions per minute
   limitTxnsPerMin=0
   
   //Set to true to run in 4.x compatible mode. Set to false to use the
   //entire configured database evenly.
   terminalWarehouseFixed=true
   
   //The following five values must add up to 100
   //The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
   newOrderWeight=45
   paymentWeight=43
   orderStatusWeight=4
   deliveryWeight=4
   stockLevelWeight=4
   
   // Directory name to create for collecting detailed result data.
   // Comment this out to suppress.
   resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
   osCollectorScript=./misc/os_collector_linux.py
   osCollectorInterval=1
   //osCollectorSSHAddr=user@dbhost
   osCollectorDevices=net_eth0 blk_sda
   ```
   
   ## About MySQL 
   ### my.cnf
   ```
   [mysqld]
   server_id=13306
   port =3306
   basedir=/usr/local/mysql5.7
   datadir=/data/mysql/mysql3306/data
   log-error=/data/mysql/mysql3306/data/mysql.err
   log_bin=/data/mysql/mysql3306/data/mysql-bin
   gtid-mode=on
   enforce-gtid-consistency=true
   log-slave-updates=1
   character_set_server = utf8mb4
   pid-file=/data/mysql/mysql3306/data/mysql.pid
   socket=/tmp/mysql3306.sock
   max_connections=50000
   expire_logs_days=1
   
   innodb_buffer_pool_size=8000000000
   ###
   innodb-log-file-size=2000000000
   innodb-log-files-in-group=3
   innodb-flush-log-at-trx-commit=0
   innodb-change-buffer-max-size=40
   back_log=900
   #innodb_io_capacity
   #innodb_io_capacity_max
   innodb_max_dirty_pages_pct=75
   innodb_open_files=20480
   innodb_buffer_pool_instances=8
   innodb_page_cleaners=8
   innodb_purge_threads=2
   innodb_read_io_threads=8
   innodb_write_io_threads=8
   table_open_cache=102400
   #binlog_expire_logs_seconds=43200
   binlog_format=mixed
   log_timestamps=system
   thread_cache_size=16384
   ```
   
   
   ```
    ### ds preparation (create the sharding databases)
   create database tpcc0 ;
   create database tpcc1 ;
   create database tpcc2 ;
   create database tpcc3 ;
   create database tpcc4 ;
   ```
   
    ## BenchmarkSQL commands
   ```
   ./runDatabaseDestroy.sh props.proxy_mysql
   time ./runDatabaseBuild.sh  props.proxy_mysql > 200test &
   ```
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@shardingsphere.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org