Posted to notifications@shardingsphere.apache.org by GitBox <gi...@apache.org> on 2020/05/14 08:22:37 UTC
[GitHub] [shardingsphere] henrypoter opened a new issue #5598: sharding-proxy process disappear without any error log
henrypoter opened a new issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598
## Question
I use sharding-scaling and sharding-proxy to sync data from a source database to a destination database.
Both are deployed on a CentOS host.
The proxy process disappears after a few hours, yet it writes no exception or error log when it shuts down. It appears to have shut down at about 00:11.
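When a JVM process vanishes without writing anything to its own logs, one common cause is the kernel OOM killer, which records the kill only in the kernel log. A minimal sketch for checking this on the CentOS host (feed it a capture of `dmesg -T` or `journalctl -k`):

```shell
# oom_check: scan a kernel-log capture for OOM-killer records.
# Sketch only; the log file path is whatever you captured dmesg into.
oom_check() {
  grep -iE 'killed process|out of memory' "$1" \
    || echo "no OOM-killer records found"
}
```

Usage on the host would be along the lines of `dmesg -T > /tmp/kern.log; oom_check /tmp/kern.log`. A line like `Out of memory: Killed process <pid> (java)` would confirm the kernel terminated the proxy.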
### Log written by the proxy:
```
[INFO ] 23:39:52.487 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[WARN ] 23:39:52.513 [epollEventLoopGroup-3-6] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[INFO ] 23:40:18.755 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x9b706394, L:/127.0.0.1:3307 - R:/127.0.0.1:33576]
[INFO ] 23:40:18.755 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 23:40:32.191 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x4b045b9d, L:/127.0.0.1:3307 - R:/127.0.0.1:33580]
[INFO ] 23:40:32.191 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 23:40:35.754 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xe8639430, L:/127.0.0.1:3307 - R:/127.0.0.1:33582]
[INFO ] 23:40:35.755 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 23:41:08.411 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x8f4c86b4, L:/127.0.0.1:3307 - R:/127.0.0.1:33590]
[INFO ] 23:41:08.412 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[WARN ] 23:41:08.436 [epollEventLoopGroup-3-4] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[INFO ] 23:41:19.858 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xa17ac133, L:/127.0.0.1:3307 - R:/127.0.0.1:33596]
[INFO ] 23:41:19.859 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[WARN ] 23:41:27.031 [epollEventLoopGroup-3-3] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[INFO ] 23:41:27.069 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x120c715c, L:/127.0.0.1:3307 - R:/127.0.0.1:33598]
[INFO ] 23:41:27.070 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 23:41:49.629 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xe53cc33e, L:/127.0.0.1:3307 - R:/127.0.0.1:33608]
[INFO ] 23:41:49.629 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[WARN ] 23:41:51.645 [epollEventLoopGroup-3-5] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[INFO ] 23:41:51.808 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xdb41a9fe, L:/127.0.0.1:3307 - R:/127.0.0.1:33610]
[INFO ] 23:41:51.825 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:08:31.111 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xf9c2d943, L:/127.0.0.1:3307 - R:/127.0.0.1:33664]
[INFO ] 00:08:31.236 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:09:25.133 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x2aadfd43, L:/127.0.0.1:3307 - R:/127.0.0.1:33670]
[INFO ] 00:09:25.133 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:09:36.643 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x062d6eec, L:/127.0.0.1:3307 - R:/127.0.0.1:33672]
[INFO ] 00:09:36.643 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:10:11.152 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xdca104a6, L:/127.0.0.1:3307 - R:/127.0.0.1:33678]
[INFO ] 00:10:11.153 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:10:32.341 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xcedbf185, L:/127.0.0.1:3307 - R:/127.0.0.1:33682]
[INFO ] 00:10:32.359 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:11:05.253 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x452fea96, L:/127.0.0.1:3307 - R:/127.0.0.1:33694]
[INFO ] 00:11:05.295 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:11:15.723 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x7fa56ceb, L:/127.0.0.1:3307 - R:/127.0.0.1:33696]
[INFO ] 00:11:15.724 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:11:18.771 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0x2f4cebbd, L:/127.0.0.1:3307 - R:/127.0.0.1:33700]
[INFO ] 00:11:18.771 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:11:23.773 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xc4e2b963, L:/127.0.0.1:3307 - R:/127.0.0.1:33704]
[INFO ] 00:11:23.804 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 00:11:24.216 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ: [id: 0xc52c7927, L:/127.0.0.1:3307 - R:/127.0.0.1:33706]
[INFO ] 00:11:24.216 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x1d62fc97, L:/0.0.0.0:3307] READ COMPLETE
```
### Log written by sharding-scaling:
```
[INFO ] 23:15:42.438 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 23:17:43.599 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ: [id: 0xf387f65d, L:/192.168.11.182:8888 - R:/192.168.11.123:56116]
[INFO ] 23:17:43.600 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 23:19:11.134 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ: [id: 0x26e88e39, L:/192.168.11.182:8888 - R:/192.168.11.123:56404]
[INFO ] 23:19:11.135 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 23:19:23.870 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ: [id: 0x8bf98155, L:/192.168.11.182:8888 - R:/192.168.11.123:56443]
[INFO ] 23:19:23.870 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 23:23:47.433 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ: [id: 0x7ddc9c86, L:/192.168.11.182:8888 - R:/192.168.11.123:57308]
[INFO ] 23:23:47.434 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0727ce75, L:/0.0.0.0:8888] READ COMPLETE
[WARN ] 00:12:32.837 [pool-1-thread-22] c.zaxxer.hikari.pool.ProxyConnection - HikariPool-8 - Connection com.mysql.jdbc.JDBC4Connection@ea4b023 marked as broken because of SQLSTATE(08S01), ErrorCode(0)
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 1,116 milliseconds ago. The last packet sent successfully to the server was 1,114 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3462)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3905)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.executeInsert(AbstractJDBCWriter.java:128)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.flush(AbstractJDBCWriter.java:103)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:81)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:101)
at com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:144)
at com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:174)
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3011)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3472)
... 18 common frames omitted
[WARN ] 00:12:32.830 [pool-1-thread-19] c.zaxxer.hikari.pool.ProxyConnection - HikariPool-8 - Connection com.mysql.jdbc.JDBC4Connection@5429e4a0 marked as broken because of SQLSTATE(08S01), ErrorCode(0)
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 849 milliseconds ago. The last packet sent successfully to the server was 849 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3462)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3905)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.executeInsert(AbstractJDBCWriter.java:128)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.flush(AbstractJDBCWriter.java:103)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:81)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3014)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3472)
... 18 common frames omitted
[WARN ] 00:12:32.830 [pool-1-thread-17] c.zaxxer.hikari.pool.ProxyConnection - HikariPool-8 - Connection com.mysql.jdbc.JDBC4Connection@697ae574 marked as broken because of SQLSTATE(08S01), ErrorCode(0)
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 823 milliseconds ago. The last packet sent successfully to the server was 823 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3462)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3905)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.executeInsert(AbstractJDBCWriter.java:128)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.flush(AbstractJDBCWriter.java:103)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:81)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3014)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3472)
... 18 common frames omitted
[ERROR] 00:12:33.341 [pool-1-thread-19] o.a.s.s.c.e.e.SyncTaskExecuteCallback - HistoryDataSyncTask history-biz_suite-t_scheduling_time#9 execute exception exit
org.apache.shardingsphere.shardingscaling.core.exception.SyncTaskExecuteException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 849 milliseconds ago. The last packet sent successfully to the server was 849 milliseconds ago.
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:90)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 849 milliseconds ago. The last packet sent successfully to the server was 849 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3462)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3905)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.executeInsert(AbstractJDBCWriter.java:128)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.flush(AbstractJDBCWriter.java:103)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:81)
... 6 common frames omitted
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3014)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3472)
... 18 common frames omitted
[ERROR] 00:12:33.341 [pool-1-thread-17] o.a.s.s.c.e.e.SyncTaskExecuteCallback - HistoryDataSyncTask history-biz_suite-t_scheduling_time#8 execute exception exit
org.apache.shardingsphere.shardingscaling.core.exception.SyncTaskExecuteException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 823 milliseconds ago. The last packet sent successfully to the server was 823 milliseconds ago.
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:90)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 823 milliseconds ago. The last packet sent successfully to the server was 823 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3462)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3905)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.executeInsert(AbstractJDBCWriter.java:128)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.flush(AbstractJDBCWriter.java:103)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:81)
... 6 common frames omitted
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3014)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3472)
... 18 common frames omitted
[ERROR] 00:12:33.342 [pool-1-thread-22] o.a.s.s.c.e.e.SyncTaskExecuteCallback - HistoryDataSyncTask history-biz_suite-t_scheduling_time#0 execute exception exit
org.apache.shardingsphere.shardingscaling.core.exception.SyncTaskExecuteException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 1,116 milliseconds ago. The last packet sent successfully to the server was 1,114 milliseconds ago.
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:90)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 1,116 milliseconds ago. The last packet sent successfully to the server was 1,114 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3462)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3905)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.executeInsert(AbstractJDBCWriter.java:128)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.flush(AbstractJDBCWriter.java:103)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:81)
... 6 common frames omitted
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:101)
at com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:144)
at com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:174)
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3011)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3472)
... 18 common frames omitted
[INFO ] 00:12:33.378 [pool-1-thread-19] o.a.s.s.c.c.task.SyncTaskController - history data migrate task historyGroup-biz_suite finished, execute result: EXCEPTION_EXIT
[INFO ] 00:12:33.379 [pool-1-thread-17] o.a.s.s.c.c.task.SyncTaskController - history data migrate task historyGroup-biz_suite finished, execute result: EXCEPTION_EXIT
[INFO ] 00:12:33.379 [pool-1-thread-22] o.a.s.s.c.c.task.SyncTaskController - history data migrate task historyGroup-biz_suite finished, execute result: EXCEPTION_EXIT
[INFO ] 00:12:33.457 [pool-1-thread-17] com.zaxxer.hikari.HikariDataSource - HikariPool-8 - Shutdown initiated...
[INFO ] 00:12:33.457 [pool-1-thread-22] com.zaxxer.hikari.HikariDataSource - HikariPool-7 - Shutdown initiated...
[INFO ] 00:12:33.457 [pool-1-thread-19] com.zaxxer.hikari.HikariDataSource - HikariPool-6 - Shutdown initiated...
[INFO ] 00:12:33.840 [pool-1-thread-19] com.zaxxer.hikari.HikariDataSource - HikariPool-6 - Shutdown completed.
[INFO ] 00:12:33.865 [pool-1-thread-22] com.zaxxer.hikari.HikariDataSource - HikariPool-7 - Shutdown completed.
[INFO ] 00:12:37.696 [pool-1-thread-17] com.zaxxer.hikari.HikariDataSource - HikariPool-8 - Shutdown completed.
```
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [shardingsphere] tuohai666 commented on issue #5598: sharding-proxy process disappear without any error log
tuohai666 commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629603631
This looks like a connection leak. Could you re-run the job and see whether the problem reproduces? I want to know whether it is reproducible; meanwhile, I will try to find the root cause.
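A connection leak would show up as a steadily growing count of established connections on the proxy's listen port. A sketch of how to watch this (assuming `ss` from iproute2 is available; the port 3307 matches the proxy log above):

```shell
# conn_count: count established connections from a captured `ss -tn`
# listing. Sketch; on the host you would capture the listing with:
#   ss -tn state established '( sport = :3307 )' > /tmp/ss.out
conn_count() {
  # the first line of ss output is a column header; count the rest
  tail -n +2 "$1" | grep -c .
}
```

Sampling `conn_count` every few minutes while the sync job runs would show whether connections accumulate without being released.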
[GitHub] [shardingsphere] henrypoter closed issue #5598: sharding-proxy process disappear without any error log
henrypoter closed issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598
[GitHub] [shardingsphere] henrypoter commented on issue #5598: sharding-proxy process disappear without any error log
henrypoter commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-630002492
I have completed the sync job.
The proxy no longer crashes after I changed the JVM -Xmx to 1g, so an OutOfMemoryError may have caused the problem, though I am not sure.
I will try it again later.
Thanks for your help.
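To turn "may be OOM, but not sure" into evidence, the JVM can be started with flags that make an OutOfMemoryError observable instead of silent. A hedged sketch of such options (the flags are standard HotSpot options; the `JAVA_OPTS` variable name and the dump path are assumptions about how start.sh passes options):

```shell
# Hypothetical JVM options for the proxy's start script, assuming it
# honors a JAVA_OPTS-style variable.
# -Xmx caps the heap; the -XX flags write a heap dump and exit cleanly
# (with a log message) on OutOfMemoryError instead of limping on.
JAVA_OPTS="-Xmx1g \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/proxy-heapdump.hprof \
  -XX:+ExitOnOutOfMemoryError"
```

If the process still dies with no `.hprof` file and no OOM message, the kernel OOM killer (checked via `dmesg`) becomes the more likely culprit than a Java-level OutOfMemoryError.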
[GitHub] [shardingsphere] tuohai666 edited a comment on issue #5598: sharding-proxy process disappear without any error log
tuohai666 edited a comment on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629059574
What are the JVM options for rocketmq? I want to know how much memory rocketmq occupies.
In any case, -Xmx2g (proxy) + -Xmx2g (scaling) already exceeds your host's memory, and that is before counting the OS and rocketmq. Memory is allocated beyond capacity, which may cause an OutOfMemoryError.
I suggest deploying the proxy and scaling on two separate hosts, or adjusting -Xmx to **1g or lower** for each of them.
After that, let's see whether your sync completes. If the problem still exists, I'll provide further instructions to find the root cause.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [shardingsphere] henrypoter commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
henrypoter commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629051357
> > sharding-proxy process disappear without any error log
>
> @henrypoter
> It's probably caused by an OutOfMemoryError.
> I want to exclude the environment factor first. Please provide the information below:
>
> 1. The topology of your deployment. How many machines? And where are the sharding-proxy/sharding-scaling/MySQL deployed?
> 2. The hardware and software description in detail. Include at least CPU, memory, and all software (including operating system) versions.
> 3. The JVM options of the sharding-proxy and sharding-scaling instances. At least include memory options.
> 4. Other instances on every machine. At least include instances which require large memory.
> 5. Configurations of sharding-proxy and sharding-scaling. (You can mask IP addresses and passwords if necessary.)
> 6. How much data do you want to sync in total? And how much was completed before the proxy crashed?
>
> Please answer these questions directly, with raw information and without any analysis. Thanks.
## 1
I deployed proxy and scaling on a single CentOS server and started them with the default start.sh script.
On the CentOS host I also deployed RocketMQ, but only a few messages are sent to it.
The MySQL server is deployed separately.
## 2
system info:
[rocketmq@rocketmq182 ~]$ free -hm
             total       used       free     shared    buffers     cached
Mem:          3.7G       3.6G       117M       156K        50M        61M
-/+ buffers/cache:       3.5G       229M
Swap:         2.0G       846M       1.1G
[rocketmq@rocketmq182 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Stepping: 4
CPU MHz: 2100.000
BogoMIPS: 4200.00
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-3
[rocketmq@rocketmq182 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_rocketmq179-LogVol01
28G 5.6G 21G 22% /
tmpfs 1.9G 344K 1.9G 1% /dev/shm
/dev/sda1 190M 75M 105M 42% /boot
## 3
`rocketmq 2110 1 4 May14 ? 00:59:29 java -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -server -Xmx2g -Xms2g -Xmn1g -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -classpath .:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-scaling-bin/conf:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-scaling-bin/lib/* org.apache.shardingsphere.shardingscaling.Bootstrap
rocketmq 2144 1 14 May14 ? 02:57:14 java -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -server -Xmx2g -Xms2g -Xmn1g -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -classpath .:..:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/lib/*:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/lib/*:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/ext-lib/* org.apache.shardingsphere.shardingproxy.Bootstrap
rocketmq 3438 3413 0 14:00 pts/1 00:00:00 grep sharding`
## 4
None
## 5
### proxy
[rocketmq@rocketmq182 conf]$ cat server.yaml
authentication:
  users:
    root:
      password: root
    sharding:
      password: sharding
      authorizedSchemas: hc_hiwork_suite_new,hc_hiwork_suite_hzm
props:
  max.connections.size.per.query: 16
  acceptor.size: 16 # The default value is available processors count * 2.
  executor.size: 16 # Infinite by default.
  sql.show: false
### proxy sharding rule
[rocketmq@rocketmq182 conf]$ cat config-sharding.yaml
schemaName: biz_suite_hzm
dataSources:
  ds_0:
    url: jdbc:mysql://192.168.1.2:3306/biz_suite_hzm?serverTimezone=UTC&useSSL=false
    username: ***
    password: ****
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 100
shardingRule:
  tables:
    t_hc_suite_scheduling_time:
      actualDataNodes: ds_0.t_scheduling_time_${0..15}
      tableStrategy:
        inline:
          shardingColumn: department_id
          algorithmExpression: t_scheduling_time_${department_id % 16}
  defaultDatabaseStrategy:
    none:
  defaultTableStrategy:
    none:
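For clarity, the inline expression in the rule above routes each row by `department_id % 16`. A minimal Python sketch of that routing (the function name is illustrative, not part of ShardingSphere):

```python
# Mirror of algorithmExpression: t_scheduling_time_${department_id % 16}
def route_table(department_id: int) -> str:
    return f"t_scheduling_time_{department_id % 16}"

print(route_table(35))  # rows with department_id 35 land in t_scheduling_time_3
print(route_table(16))  # t_scheduling_time_0
```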
### scaling
[rocketmq@rocketmq182 conf]$ cat server.yaml
port: 8888
blockQueueSize: 10000
pushTimeout: 1000
workerThread: 30
## 6
About 50 million (5000w) rows to sync in total.
More than 10 million (1000w+) rows were synced before the proxy shut down; the exact amount varies between runs.
Thanks for your help.
[GitHub] [shardingsphere] tuohai666 commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
tuohai666 commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629059574
What are the JVM options for RocketMQ? I want to know the memory occupied by RocketMQ.
Anyway, -Xmx2g (proxy) + -Xmx2g (scaling) already exceeds the memory of your host, and that is before counting the OS and RocketMQ memory. The memory is over-committed and may cause an OutOfMemoryError.
I suggest deploying the proxy and scaling separately on two hosts, or adjusting -Xmx to **1g or lower** for each of proxy and scaling.
After that, let's see whether your sync can be done. If the problem still exists, I'll provide further instructions to find the root cause.
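For reference, a back-of-the-envelope check of that over-commitment, using only the numbers quoted in this thread (a sketch of the heap budget, not a precise accounting of resident memory):

```python
# Heap budget sketch for the host in this thread: ~3.7G RAM (from free -hm),
# with sharding-proxy and sharding-scaling each started with -Xmx2g.
host_mb = 3788  # ~3.7G total RAM, approximate
heaps_mb = {
    "sharding-proxy": 2048,    # -Xmx2g
    "sharding-scaling": 2048,  # -Xmx2g
}
total_mb = sum(heaps_mb.values())
print(f"combined -Xmx: {total_mb}M of ~{host_mb}M host RAM")
print("over-committed" if total_mb > host_mb else "fits")
```

And this still ignores the OS, the RocketMQ name server and broker, and off-heap memory (metaspace, thread stacks, direct buffers) of each JVM.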
[GitHub] [shardingsphere] henrypoter commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
henrypoter commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629115449
> What are the JVM options for RocketMQ? I want to know the memory occupied by RocketMQ.
>
> Anyway, -Xmx2g (proxy) + -Xmx2g (scaling) already exceeds the memory of your host, and that is before counting the OS and RocketMQ memory. The memory is over-committed and may cause an OutOfMemoryError.
>
> I suggest deploying the proxy and scaling separately on two hosts, or adjusting -Xmx to **1g or lower** for each of proxy and scaling.
>
> After that, let's see whether your sync can be done. If the problem still exists, I'll provide further instructions to find out the root cause.
`[rocketmq@rocketmq182 bin]$ ps -ef | grep names
rocketmq 1655 1 0 May14 ? 00:00:00 sh /opt/rocketmq/bin/mqnamesrv
rocketmq 1683 1655 0 May14 ? 00:00:00 sh /opt/rocketmq/bin/runserver.sh org.apache.rocketmq.namesrv.NamesrvStartup
rocketmq 1691 1683 0 May14 ? 00:05:07 /usr/local/jdk1.8.0_151/bin/java -server -Xms1g -Xmx1g -Xmn512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:SurvivorRatio=8 -XX:-UseParNewGC -verbose:gc -Xloggc:/dev/shm/rmq_srv_gc_%p_%t.log -XX:+PrintGCDetails -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m -XX:-OmitStackTraceInFastThrow -XX:-UseLargePages -Djava.ext.dirs=/usr/local/jdk1.8.0_151/jre/lib/ext:/opt/rocketmq/bin/../lib -cp .:/opt/rocketmq/bin/../conf:.:/usr/local/jdk1.8.0_151/lib:/usr/local/jdk1.8.0_151/jre/lib: org.apache.rocketmq.namesrv.NamesrvStartup
rocketmq 3810 3491 0 16:48 pts/0 00:00:00 grep names
[rocketmq@rocketmq182 bin]$ ps -ef | grep broke
rocketmq 1676 1 0 May14 ? 00:00:00 sh /opt/rocketmq/bin/mqbroker -c /opt/rocketmq/conf/dledger-broker-a/broker-n2.conf
rocketmq 1685 1676 0 May14 ? 00:00:00 sh /opt/rocketmq/bin/runbroker.sh org.apache.rocketmq.broker.BrokerStartup -c /opt/rocketmq/conf/dledger-broker-a/broker-n2.conf
rocketmq 1693 1685 12 May14 ? 03:36:45 /usr/local/jdk1.8.0_151/bin/java -server -Xms1g -Xmx1g -Xmn1g -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:SoftRefLRUPolicyMSPerMB=0 -verbose:gc -Xloggc:/dev/shm/rmq_broker_gc_%p_%t.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPreTouch -XX:MaxDirectMemorySize=15g -XX:-UseLargePages -XX:-UseBiasedLocking -Djava.ext.dirs=/usr/local/jdk1.8.0_151/jre/lib/ext:/opt/rocketmq/bin/../lib -cp .:/opt/rocketmq/bin/../conf:.:/usr/local/jdk1.8.0_151/lib:/usr/local/jdk1.8.0_151/jre/lib: org.apache.rocketmq.broker.BrokerStartup -c /opt/rocketmq/conf/dledger-broker-a/broker-n2.conf
rocketmq 3812 3491 0 16:48 pts/0 00:00:00 grep broke
[rocketmq@rocketmq182 bin]$ `
I will try again.
[GitHub] [shardingsphere] henrypoter commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
henrypoter commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629598889
I ran only scaling and proxy on the CentOS host again. I changed the JVM options to -Xmx1024m -Xms1024m and added the HeapDumpOnOutOfMemoryError JVM parameter.
This time the sync job paused, but the proxy process is still running.
Below are the process info, memory info, and the final logs.
### process
ps -ef | grep sharding
rocketmq 7966 1 3 08:24 pts/4 00:13:57 java -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/rocketmq -server -Xmx1024m -Xms1024m -Xmn512m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -classpath .:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-scaling-bin/conf:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-scaling-bin/lib/* org.apache.shardingsphere.shardingscaling.Bootstrap
rocketmq 8034 1 11 08:25 pts/4 00:41:49 java -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/rocketmq -server -Xmx1024m -Xms1024m -Xmn512m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -classpath .:..:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/lib/*:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/lib/*:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/ext-lib/* org.apache.shardingsphere.shardingproxy.Bootstrap
### memory
> [rocketmq@rocketmq182 logs]$ free -mh
             total       used       free     shared    buffers     cached
Mem:          3.7G       2.0G       1.7G       264K       124M       224M
-/+ buffers/cache:       1.7G       2.1G
Swap:         2.0G        11M       1.9G
### proxy log
[INFO ] 10:26:17.723 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 10:26:36.378 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ: [id: 0xed086b27, L:/127.0.0.1:3307 - R:/127.0.0.1:45782]
[INFO ] 10:26:36.378 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 10:26:36.885 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ: [id: 0x0f4bd237, L:/127.0.0.1:3307 - R:/127.0.0.1:45784]
[INFO ] 10:26:36.886 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 10:26:37.810 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ: [id: 0x9efc3203, L:/127.0.0.1:3307 - R:/127.0.0.1:45786]
[INFO ] 10:26:37.811 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 10:26:47.156 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ: [id: 0x9c748244, L:/127.0.0.1:3307 - R:/127.0.0.1:45790]
[INFO ] 10:26:47.156 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[WARN ] 10:45:49.691 [epollEventLoopGroup-3-4] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
### scaling log
[INFO ] 08:54:46.889 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x149d857a, L:/0.0.0.0:8888] READ COMPLETE
[WARN ] 10:45:49.645 [Thread-1] c.zaxxer.hikari.pool.ProxyConnection - HikariPool-4 - Connection com.mysql.jdbc.JDBC4Connection@d7b6169 marked as broken because of SQLSTATE(08S01), ErrorCode(0)
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 7,875,080 milliseconds ago. The last packet sent successfully to the server was 7,875,068 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3462)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3905)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2011)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.synctask.history.HistoryDataSyncTask.getEstimatedRows(HistoryDataSyncTask.java:94)
at org.apache.shardingsphere.shardingscaling.core.synctask.history.HistoryDataSyncTask.start(HistoryDataSyncTask.java:82)
at org.apache.shardingsphere.shardingscaling.core.synctask.history.HistoryDataSyncTaskGroup.start(HistoryDataSyncTaskGroup.java:167)
at org.apache.shardingsphere.shardingscaling.core.controller.task.SyncTaskController.run(SyncTaskController.java:96)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection timed out (Read failed)
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:101)
at com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:144)
at com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:174)
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3011)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3472)
... 14 common frames omitted
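For context, the idle gap the MySQL driver reports in the exception above works out to roughly two hours, which may be long enough for a firewall or a low MySQL wait_timeout to silently drop the idle connection (a possible explanation, not confirmed in this thread). A quick conversion of the logged value:

```python
# Convert the idle gap reported by the MySQL driver into hours.
idle_ms = 7_875_080  # "last packet successfully received ... 7,875,080 milliseconds ago"
idle_hours = idle_ms / 1000 / 3600
print(f"connection was idle for {idle_hours:.2f} hours before it failed")
```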
[GitHub] [shardingsphere] henrypoter commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
henrypoter commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629599764
The scaling job start script:
> curl -X POST \
http://localhost:8888/shardingscaling/job/start \
-H 'content-type: application/json' \
-d '{
"ruleConfiguration": {
"sourceDatasource": "biz_suite: !!org.apache.shardingsphere.orchestration.core.configuration.YamlDataSourceConfiguration\n dataSourceClassName: com.zaxxer.hikari.HikariDataSource\n properties:\n jdbcUrl: jdbc:mysql://ip:3306/biz_suite?serverTimezone=UTC&useSSL=false\n username: user1\n password: '\''123456'\''\n connectionTimeout: 30000\n idleTimeout: 60000\n maxLifetime: 1800000\n maxPoolSize: 10\n minPoolSize: 1\n maintenanceIntervalMilliseconds: 30000\n readOnly: false\n",
"sourceRule": "tables:\n t_deposit_modify_log:\n actualDataNodes: biz_suite.t_deposit_modify_log\n t_scheduling_time:\n actualDataNodes: biz_suite.t_scheduling_time\n t_order_limit:\n actualDataNodes: biz_suite.t_order_limit\n t_course:\n actualDataNodes: biz_suite.t_course\n",
"destinationDataSources": {
"name": "biz_suite_hzm",
"password": "sharding",
"url": "jdbc:mysql://localhost:3307/biz_suite_hzm?autoReconnect=true&useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false",
"username": "sharding"
}
},
"jobConfiguration": {
"concurrency": 8
}
}'
[GitHub] [shardingsphere] tuohai666 commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
tuohai666 commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629008530
> sharding-proxy process disappear without any error log
It's probably caused by an OutOfMemoryError.
I want to exclude the environment factor first. Please provide the information below:
1. The topology of your deployment. How many machines? And where are the sharding-proxy/sharding-scaling/MySQL deployed?
2. The hardware and software description in detail. Include at least CPU, memory, and all software (including operating system) versions.
3. The JVM options of the sharding-proxy and sharding-scaling instances. At least include memory options.
4. Other instances on every machine. At least include instances which require large memory.
5. Configurations of sharding-proxy and sharding-scaling. (You can mask IP addresses and passwords if necessary.)
6. How much data do you want to sync in total? And how much was completed before the proxy crashed?
Please answer these questions directly, with raw information and without any analysis. Thanks.
[GitHub] [shardingsphere] henrypoter commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
henrypoter commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-628994981
### I tried again.
### proxy log
`[INFO ] 23:26:26.254 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ: [id: 0x9ab2eaed, L:/127.0.0.1:3307 - R:/127.0.0.1:44824]
[INFO ] 23:26:26.254 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 23:26:35.216 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ: [id: 0xc459e4a7, L:/127.0.0.1:3307 - R:/127.0.0.1:44828]
[INFO ] 23:26:35.217 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 23:26:40.130 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ: [id: 0x732b2d12, L:/127.0.0.1:3307 - R:/127.0.0.1:44830]
[INFO ] 23:26:40.130 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ COMPLETE
[WARN ] 23:26:40.169 [epollEventLoopGroup-3-6] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[INFO ] 23:26:42.615 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ: [id: 0x7f6810a3, L:/127.0.0.1:3307 - R:/127.0.0.1:44836]
[INFO ] 23:26:42.615 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ COMPLETE
[WARN ] 23:26:56.954 [epollEventLoopGroup-3-7] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
[INFO ] 23:26:57.077 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ: [id: 0xd5a1cdf5, L:/127.0.0.1:3307 - R:/127.0.0.1:44842]
[INFO ] 23:26:57.078 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 23:27:31.375 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ: [id: 0x33b6efbd, L:/127.0.0.1:3307 - R:/127.0.0.1:44848]
[INFO ] 23:27:31.376 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 23:35:45.624 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ: [id: 0x190bdd95, L:/127.0.0.1:3307 - R:/127.0.0.1:44852]
[INFO ] 23:35:45.625 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xa3539356, L:/0.0.0.0:3307] READ COMPLETE
line 1:0 no viable alternative at input 'KILL'
line 1:0 mismatched input 'KILL' expecting {SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, TRUNCATE, GRANT, REVOKE, SET, WITH, DESC, BEGIN, COMMIT, ROLLBACK, SAVEPOINT, CALL, USE, DESCRIBE, SHOW, REPLACE, START, CHANGE, RENAME, REPAIR, STOP}
[ERROR] 23:35:45.687 [ShardingSphere-Command-13] o.a.s.s.f.c.CommandExecutorTask - Exception occur:
org.apache.shardingsphere.sql.parser.exception.SQLParsingException: Unsupported SQL of `KILL QUERY 121`
at org.apache.shardingsphere.sql.parser.core.parser.SQLParserExecutor.execute(SQLParserExecutor.java:49)
at org.apache.shardingsphere.sql.parser.SQLParserEngine.parse0(SQLParserEngine.java:79)
at org.apache.shardingsphere.sql.parser.SQLParserEngine.parse(SQLParserEngine.java:61)
at org.apache.shardingsphere.shardingproxy.backend.text.TextProtocolBackendHandlerFactory.newInstance(TextProtocolBackendHandlerFactory.java:69)
at org.apache.shardingsphere.shardingproxy.frontend.mysql.command.query.text.query.MySQLComQueryPacketExecutor.<init>(MySQLComQueryPacketExecutor.java:65)
at org.apache.shardingsphere.shardingproxy.frontend.mysql.command.MySQLCommandExecutorFactory.newInstance(MySQLCommandExecutorFactory.java:70)
at org.apache.shardingsphere.shardingproxy.frontend.mysql.command.MySQLCommandExecuteEngine.getCommandExecutor(MySQLCommandExecuteEngine.java:59)
at org.apache.shardingsphere.shardingproxy.frontend.command.CommandExecutorTask.executeCommand(CommandExecutorTask.java:92)
at org.apache.shardingsphere.shardingproxy.frontend.command.CommandExecutorTask.run(CommandExecutorTask.java:71)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[WARN ] 23:35:57.145 [epollEventLoopGroup-3-2] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
`
### scaling log
`[INFO ] 19:09:47.529 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ: [id: 0x2fa1461d, L:/192.168.11.182:8888 - R:/192.168.11.123:63056]
[INFO ] 19:09:47.530 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 19:09:47.530 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ: [id: 0x8036331a, L:/192.168.11.182:8888 - R:/192.168.11.123:63057]
[INFO ] 19:09:47.531 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ COMPLETE
[ERROR] 23:35:56.980 [pool-1-thread-10] o.a.s.s.c.e.e.SyncTaskExecuteCallback - HistoryDataSyncTask history-biz_suite-t_suite_scheduling_time#0 execute exception exit
org.apache.shardingsphere.shardingscaling.core.exception.SyncTaskExecuteException: java.sql.SQLException: 2Unknown exception: [Unsupported SQL of `KILL QUERY 121`]
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:90)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: 2Unknown exception: [Unsupported SQL of `KILL QUERY 121`]
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3978)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3914)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2491)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2449)
at com.mysql.jdbc.StatementImpl.executeInternal(StatementImpl.java:845)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:745)
at com.mysql.jdbc.StatementImpl$CancelTask$1.run(StatementImpl.java:120)
[INFO ] 23:35:57.012 [pool-1-thread-10] o.a.s.s.c.c.task.SyncTaskController - history data migrate task historyGroup-biz_suite finished, execute result: EXCEPTION_EXIT
[INFO ] 23:35:57.037 [pool-1-thread-10] com.zaxxer.hikari.HikariDataSource - HikariPool-5 - Shutdown initiated...
[INFO ] 23:35:57.161 [pool-1-thread-10] com.zaxxer.hikari.HikariDataSource - HikariPool-5 - Shutdown completed.
[INFO ] 23:35:57.162 [pool-1-thread-10] com.zaxxer.hikari.HikariDataSource - HikariPool-4 - Shutdown initiated...
[INFO ] 23:35:57.177 [pool-1-thread-10] com.zaxxer.hikari.HikariDataSource - HikariPool-4 - Shutdown completed.
[INFO ] 23:35:57.177 [pool-1-thread-10] com.zaxxer.hikari.HikariDataSource - HikariPool-3 - Shutdown initiated...
[INFO ] 23:35:57.189 [pool-1-thread-10] com.zaxxer.hikari.HikariDataSource - HikariPool-3 - Shutdown completed.
[ERROR] 23:35:57.284 [pool-1-thread-16] o.a.s.s.c.e.e.SyncTaskExecuteCallback - HistoryDataSyncTask history-biz_suite-t_suite_scheduling_time#3 execute exception exit
java.lang.NullPointerException: null
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2513)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.executeInsert(AbstractJDBCWriter.java:128)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.flush(AbstractJDBCWriter.java:103)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:81)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO ] 23:35:57.285 [pool-1-thread-16] o.a.s.s.c.c.task.SyncTaskController - history data migrate task historyGroup-biz_suite finished, execute result: EXCEPTION_EXIT
[INFO ] 09:29:35.556 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ: [id: 0x32c6485e, L:/192.168.11.182:8888 - R:/192.168.11.123:58203]
[INFO ] 09:29:35.634 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 09:29:35.635 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ: [id: 0x8bc55de3, L:/192.168.11.182:8888 - R:/192.168.11.123:58202]
[INFO ] 09:29:35.636 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 09:29:35.637 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ: [id: 0x19caddaa, L:/192.168.11.182:8888 - R:/192.168.11.123:58204]
[INFO ] 09:29:35.637 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 09:29:45.892 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ: [id: 0x70ac6510, L:/192.168.11.182:8888 - R:/192.168.11.123:58232]
[INFO ] 09:29:45.903 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 10:41:47.554 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ: [id: 0xfee6faa3, L:/192.168.11.182:8888 - R:/192.168.11.123:55039]
[INFO ] 10:41:47.555 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ COMPLETE
[INFO ] 10:41:47.555 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ: [id: 0x9f2edf2a, L:/192.168.11.182:8888 - R:/192.168.11.123:55040]
[INFO ] 10:41:47.556 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xc26b991c, L:/0.0.0.0:8888] READ COMPLETE
`
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [shardingsphere] henrypoter edited a comment on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
henrypoter edited a comment on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629051357
> > sharding-proxy process disappear without any error log
>
> @henrypoter
> It's probably caused by an OutOfMemoryError.
> I want to exclude environmental factors first. Please provide the information below:
>
> 1. The topology of your deployment: how many machines, and where are sharding-proxy, sharding-scaling, and MySQL deployed?
> 2. A detailed hardware and software description: at least CPU, memory, and the versions of all software, including the operating system.
> 3. The JVM options of the sharding-proxy and sharding-scaling instances, at least the memory options.
> 4. Other processes on each machine, at least those that require a lot of memory.
> 5. The configurations of sharding-proxy and sharding-scaling. (You can mask IP addresses and passwords if necessary.)
> 6. How much data do you want to sync in total, and how much had completed before the proxy crashed?
>
> Please answer these questions directly with raw data, without any analysis. Thanks.
## 1
I deployed the proxy and scaling on a single CentOS server and started them with the default start.sh script.
On the same server I also deployed a RocketMQ instance, but only a few messages have been sent to it.
The MySQL server is deployed separately.
## 2
system info :
[rocketmq@rocketmq182 ~]$ free -hm
total used free shared buffers cached
Mem: 3.7G 3.6G 117M 156K 50M 61M
-/+ buffers/cache: 3.5G 229M
Swap: 2.0G 846M 1.1G
[rocketmq@rocketmq182 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Stepping: 4
CPU MHz: 2100.000
BogoMIPS: 4200.00
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-3
[rocketmq@rocketmq182 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_rocketmq179-LogVol01
28G 5.6G 21G 22% /
tmpfs 1.9G 344K 1.9G 1% /dev/shm
/dev/sda1 190M 75M 105M 42% /boot
## 3
`rocketmq 2110 1 4 May14 ? 00:59:29 java -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -server -Xmx2g -Xms2g -Xmn1g -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -classpath .:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-scaling-bin/conf:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-scaling-bin/lib/* org.apache.shardingsphere.shardingscaling.Bootstrap
rocketmq 2144 1 14 May14 ? 02:57:14 java -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -server -Xmx2g -Xms2g -Xmn1g -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -classpath .:..:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/lib/*:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/lib/*:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/ext-lib/* org.apache.shardingsphere.shardingproxy.Bootstrap
rocketmq 3438 3413 0 14:00 pts/1 00:00:00 grep sharding`
## 4
None
## 5
### proxy
[rocketmq@rocketmq182 conf]$ cat server.yaml
authentication:
  users:
    root:
      password: root
    sharding:
      password: sharding
      authorizedSchemas: biz_suite_hzm
props:
  max.connections.size.per.query: 16
  acceptor.size: 16 # The default value is available processors count * 2.
  executor.size: 16 # Infinite by default.
  sql.show: false
### proxy sharding rule
[rocketmq@rocketmq182 conf]$ cat config-sharding.yaml
schemaName: biz_suite_hzm
dataSources:
  ds_0:
    url: jdbc:mysql://192.168.1.2:3306/biz_suite_hzm?serverTimezone=UTC&useSSL=false
    username: ***
    password: ****
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 100
shardingRule:
  tables:
    t_scheduling_time:
      actualDataNodes: ds_0.t_scheduling_time_${0..15}
      tableStrategy:
        inline:
          shardingColumn: department_id
          algorithmExpression: t_scheduling_time_${department_id % 16}
  defaultDatabaseStrategy:
    none:
  defaultTableStrategy:
    none:
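For illustration, the inline expression above routes each row to one of the 16 physical tables by `department_id % 16`. A minimal sketch of that routing logic in shell (this is not ShardingSphere code, and the sample `department_id` value is made up):

```shell
# Mirror the inline expression t_scheduling_time_${department_id % 16}
# to see which physical table a given row would land in.
department_id=35                       # hypothetical sample value
table="t_scheduling_time_$(( department_id % 16 ))"
echo "$table"                          # t_scheduling_time_3
```

So with this rule every row stays in ds_0 (database strategy is `none`) and only the table suffix varies.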
### scaling
[rocketmq@rocketmq182 conf]$ cat server.yaml
port: 8888
blockQueueSize: 10000
pushTimeout: 1000
workerThread: 30
## 6
About 50 million (5000w) rows to sync in total.
More than 10 million (1000w+) rows were synced before the proxy shut down; the exact amount varies from run to run.
Thanks for your help.
[GitHub] [shardingsphere] avalon5666 commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
avalon5666 commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629662356
@henrypoter It seems the MySQL connection was disconnected while executing SQL. Checking the HikariCP documentation, I found that if your driver is older than JDBC4, you have to set the [connectionTestQuery](https://github.com/brettwooldridge/HikariCP) property.
The sync task exits when execution fails, so I think the NPE is related to the sync task's exception exit.
[GitHub] [shardingsphere] tuohai666 commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
tuohai666 commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629605914
During the process, you can monitor the socket count with the following commands:
1. total socket status
`cat /proc/net/sockstat`
2. statistics by port
`netstat -nat|grep -i "3307"|wc -l`
3. statistics by pid
`ls /proc/<pid>/fd -l | grep socket: | wc -l`
I noticed that the last job ran for about 7,800 seconds. So please execute the commands above every few minutes until the scaling pauses, and record the output. From this record we can determine whether there's a connection leak.
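The three checks above can be combined into a simple recording loop; a sketch (the PID, port, interval, and log file name are placeholders to adjust for your machine):

```shell
#!/bin/sh
# Append socket statistics to a log every 5 minutes so a connection
# leak shows up as a steadily growing count over time.
PID=8034      # placeholder: the sharding-proxy PID from `ps -ef`
PORT=3307     # placeholder: the proxy listen port
LOG=socket-monitor.log

while true; do
  {
    date '+%F %T'
    cat /proc/net/sockstat                                        # total socket status
    echo "port ${PORT}: $(netstat -nat | grep -c ":${PORT}")"     # sockets on the port
    echo "pid ${PID} fds: $(ls -l /proc/${PID}/fd 2>/dev/null | grep -c 'socket:')"
  } >> "${LOG}"
  sleep 300
done
```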
[GitHub] [shardingsphere] tuohai666 edited a comment on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
tuohai666 edited a comment on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629008530
> sharding-proxy process disappear without any error log
@henrypoter
It's probably caused by an OutOfMemoryError.
I want to exclude environmental factors first. Please provide the information below:
1. The topology of your deployment: how many machines, and where are sharding-proxy, sharding-scaling, and MySQL deployed?
2. A detailed hardware and software description: at least CPU, memory, and the versions of all software, including the operating system.
3. The JVM options of the sharding-proxy and sharding-scaling instances, at least the memory options.
4. Other processes on each machine, at least those that require a lot of memory.
5. The configurations of sharding-proxy and sharding-scaling. (You can mask IP addresses and passwords if necessary.)
6. How much data do you want to sync in total, and how much had completed before the proxy crashed?
Please answer these questions directly with raw data, without any analysis. Thanks.
[GitHub] [shardingsphere] henrypoter edited a comment on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
henrypoter edited a comment on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-629598889
I ran only scaling and proxy on the CentOS server again. I changed the JVM options to -Xmx1024m -Xms1024m and added the -XX:+HeapDumpOnOutOfMemoryError parameter.
This time the sync job paused, but the proxy process is still running.
Below are the process info, memory info, and the final logs.
### process
ps -ef | grep sharding
rocketmq 7966 1 3 08:24 pts/4 00:13:57 java -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/rocketmq -server -Xmx1024m -Xms1024m -Xmn512m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -classpath .:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-scaling-bin/conf:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-scaling-bin/lib/* org.apache.shardingsphere.shardingscaling.Bootstrap
rocketmq 8034 1 11 08:25 pts/4 00:41:49 java -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/rocketmq -server -Xmx1024m -Xms1024m -Xmn512m -Xss256k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -classpath .:..:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/lib/*:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/lib/*:/home/rocketmq/sharding/apache-shardingsphere-4.1.0-sharding-proxy-bin/ext-lib/* org.apache.shardingsphere.shardingproxy.Bootstrap
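Since -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=/home/rocketmq are now set (paths taken from the command lines above), a heap-exhaustion crash should leave a .hprof file behind. A quick check:

```shell
# List any heap dumps the JVM wrote on OutOfMemoryError;
# no output file means no OOM heap dump was produced.
ls -lh /home/rocketmq/*.hprof 2>/dev/null || echo "no heap dump found"
```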
### memory
[rocketmq@rocketmq182 logs]$ free -mh
total used free shared buffers cached
Mem: 3.7G 2.0G 1.7G 264K 124M 224M
-/+ buffers/cache: 1.7G 2.1G
Swap: 2.0G 11M 1.9G
### proxy log
[INFO ] 10:26:17.723 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 10:26:36.378 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ: [id: 0xed086b27, L:/127.0.0.1:3307 - R:/127.0.0.1:45782]
[INFO ] 10:26:36.378 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 10:26:36.885 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ: [id: 0x0f4bd237, L:/127.0.0.1:3307 - R:/127.0.0.1:45784]
[INFO ] 10:26:36.886 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 10:26:37.810 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ: [id: 0x9efc3203, L:/127.0.0.1:3307 - R:/127.0.0.1:45786]
[INFO ] 10:26:37.811 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[INFO ] 10:26:47.156 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ: [id: 0x9c748244, L:/127.0.0.1:3307 - R:/127.0.0.1:45790]
[INFO ] 10:26:47.156 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0xd83f976b, L:/0.0.0.0:3307] READ COMPLETE
[WARN ] 10:45:49.691 [epollEventLoopGroup-3-4] i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer
### scaling log
[INFO ] 08:54:46.888 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x149d857a, L:/0.0.0.0:8888] READ: [id: 0xc3e4b571, L:/192.168.11.182:8888 - R:/192.168.11.123:60883]
[INFO ] 08:54:46.889 [nioEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x149d857a, L:/0.0.0.0:8888] READ COMPLETE
[WARN ] 10:45:49.645 [Thread-1] c.zaxxer.hikari.pool.ProxyConnection - HikariPool-4 - Connection com.mysql.jdbc.JDBC4Connection@d7b6169 marked as broken because of SQLSTATE(08S01), ErrorCode(0)
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 7,875,080 milliseconds ago. The last packet sent successfully to the server was 7,875,068 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3462)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3905)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2683)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2495)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2011)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.synctask.history.HistoryDataSyncTask.getEstimatedRows(HistoryDataSyncTask.java:94)
at org.apache.shardingsphere.shardingscaling.core.synctask.history.HistoryDataSyncTask.start(HistoryDataSyncTask.java:82)
at org.apache.shardingsphere.shardingscaling.core.synctask.history.HistoryDataSyncTaskGroup.start(HistoryDataSyncTaskGroup.java:167)
at org.apache.shardingsphere.shardingscaling.core.controller.task.SyncTaskController.run(SyncTaskController.java:96)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection timed out (Read failed)
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:101)
at com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:144)
at com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:174)
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3011)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3472)
... 14 common frames omitted
[INFO ] 10:45:49.648 [Thread-1] o.a.s.s.c.c.task.SyncTaskController - history data migrate task historyGroup-hc_hiwork_suite finished, execute result: EXCEPTION_EXIT
[INFO ] 10:45:49.649 [Thread-1] com.zaxxer.hikari.HikariDataSource - HikariPool-4 - Shutdown initiated...
[INFO ] 10:45:49.676 [Thread-1] com.zaxxer.hikari.HikariDataSource - HikariPool-4 - Shutdown completed.
[INFO ] 10:45:49.677 [Thread-1] com.zaxxer.hikari.HikariDataSource - HikariPool-5 - Shutdown initiated...
[INFO ] 10:45:49.694 [Thread-1] com.zaxxer.hikari.HikariDataSource - HikariPool-5 - Shutdown completed.
[INFO ] 10:45:49.694 [Thread-1] com.zaxxer.hikari.HikariDataSource - HikariPool-3 - Shutdown initiated...
[ERROR] 10:45:49.697 [pool-1-thread-10] o.a.s.s.c.e.e.SyncTaskExecuteCallback - HistoryDataSyncTask history-hc_hiwork_suite-t_hc_suite_scheduling_time#0 execute exception exit
java.lang.NullPointerException: null
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2513)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.executeInsert(AbstractJDBCWriter.java:128)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.flush(AbstractJDBCWriter.java:103)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.write(AbstractJDBCWriter.java:81)
at org.apache.shardingsphere.shardingscaling.core.execute.executor.writer.AbstractJDBCWriter.run(AbstractJDBCWriter.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO ] 10:45:49.697 [pool-1-thread-10] o.a.s.s.c.c.task.SyncTaskController - history data migrate task historyGroup-hc_hiwork_suite finished, execute result: EXCEPTION_EXIT
[INFO ] 10:45:49.707 [Thread-1] com.zaxxer.hikari.HikariDataSource - HikariPool-3 - Shutdown completed.
[GitHub] [shardingsphere] henrypoter commented on issue #5598: sharding-proxy process disappear without any error log
Posted by GitBox <gi...@apache.org>.
henrypoter commented on issue #5598:
URL: https://github.com/apache/shardingsphere/issues/5598#issuecomment-630001746
> @henrypoter It seems the MySQL connection was disconnected while executing SQL. Checking the HikariCP documentation, I found that if your driver is older than JDBC4, you have to set the [connectionTestQuery](https://github.com/brettwooldridge/HikariCP) property.
> The sync task exits when execution fails, so I think the NPE is related to the sync task's exception exit.
I use mysql-connector-java-5.1.47.jar.