Posted to commits@phoenix.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/11/03 02:57:15 UTC

Build failed in Jenkins: Phoenix | Master #1470

See <https://builds.apache.org/job/Phoenix-master/1470/changes>

Changes:

[ssa] PHOENIX-3387 Hive PhoenixStorageHandler fails with join on numeric

[ssa] PHOENIX-3416 Memory leak in PhoenixStorageHandler

[ssa] PHOENIX-3408 arithmetic/mathematical operations with Decimal columns

[ssa] PHOENIX-3386 PhoenixStorageHandler throws NPE if local tasks executed

[ssa] PHOENIX-3422 PhoenixQueryBuilder doesn't make value string correctly for

[ssa] PHOENIX-3423 PhoenixObjectInspector doesn't have information on length

[jamestaylor] PHOENIX-3434 Avoid creating new Configuration in ClientAggregatePlan to

[samarth] PHOENIX-3435 Upgrade will fail for future releases because of use of

[mujtaba] Add missing Apache license

------------------------------------------
[...truncated 805946 lines...]
2016-11-03 02:48:07,001 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:07,154 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:07,220 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:07,339 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:07,361 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:08,464 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:11,465 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:14,465 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:17,012 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:17,166 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:17,234 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:17,348 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:17,372 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:17,465 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:20,466 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:23,466 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:26,467 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:27,026 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:27,181 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:27,246 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:27,359 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:27,383 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:29,467 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:31,802 DEBUG [tx-snapshot] org.apache.tephra.TransactionManager(401): Starting snapshot of transaction state with timestamp 1478141311802
2016-11-03 02:48:31,802 DEBUG [tx-snapshot] org.apache.tephra.TransactionManager(402): Returning snapshot of state: TransactionSnapshot{timestamp=1478141311802, readPointer=1478130802436000000, writePointer=1478130802436000000, invalidSize=0, inProgressSize=0, committingSize=0, committedSize=0}
2016-11-03 02:48:32,469 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:35,470 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:37,036 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:37,196 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:37,258 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:37,374 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:37,392 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:38,470 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:41,470 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:44,471 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:47,071 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:47,217 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:47,290 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:47,406 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:47,413 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:47,471 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:50,471 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:53,472 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:56,472 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:48:57,114 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:57,265 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:57,315 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:57,445 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:57,462 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:48:59,473 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:02,473 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:04,160 WARN  [snapshot-log-cleaner-cache-refresher] org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask(318): Failed to refresh snapshot hfile cache!
java.net.ConnectException: Call From priapus.apache.org/67.195.81.188 to localhost:46085 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor98.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1407)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy23.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:210)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.access$000(SnapshotFileCache.java:76)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask.run(SnapshotFileCache.java:316)
	at java.util.TimerThread.mainLoop(Timer.java:555)
	at java.util.TimerThread.run(Timer.java:505)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1446)
	... 30 more
2016-11-03 02:49:04,170 WARN  [snapshot-hfile-cleaner-cache-refresher] org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask(318): Failed to refresh snapshot hfile cache!
java.net.ConnectException: Call From priapus.apache.org/67.195.81.188 to localhost:46085 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor98.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1407)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy23.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:210)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.access$000(SnapshotFileCache.java:76)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask.run(SnapshotFileCache.java:316)
	at java.util.TimerThread.mainLoop(Timer.java:555)
	at java.util.TimerThread.run(Timer.java:505)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1446)
	... 30 more
2016-11-03 02:49:05,477 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:07,154 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:07,298 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:07,357 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:07,476 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:07,493 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:08,477 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:11,478 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:14,478 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:17,189 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:17,321 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:17,411 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:17,478 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:17,508 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:17,521 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:20,479 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:23,479 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:26,481 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:27,214 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:27,350 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:27,443 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:27,526 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:27,539 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:29,481 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:32,482 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:35,482 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:37,230 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:37,396 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:37,465 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:37,541 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:37,554 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:38,482 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:41,483 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:44,483 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:47,264 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:47,413 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:47,484 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:47,499 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:47,567 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:47,577 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:50,485 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:53,485 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:56,486 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:49:57,301 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:57,447 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:57,548 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:57,604 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:57,621 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:49:59,486 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:02,486 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:05,487 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:07,327 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:07,469 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:07,570 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:07,638 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:07,646 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:08,488 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:11,488 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:14,489 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:17,354 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:17,489 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:17,515 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:17,628 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:17,667 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:17,673 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:20,489 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:23,490 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:26,490 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:27,382 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:27,556 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:27,652 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:27,687 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:27,701 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:29,493 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:32,493 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:35,494 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-03 02:50:37,423 INFO  [B.defaultRpcServer.handler=0,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:37,577 INFO  [B.defaultRpcServer.handler=1,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:37,672 INFO  [B.defaultRpcServer.handler=3,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:37,713 INFO  [B.defaultRpcServer.handler=2,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:37,748 INFO  [B.defaultRpcServer.handler=4,queue=0,port=48545] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #765, waiting for 1  actions to finish
2016-11-03 02:50:38,494 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@2178ccd] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
Build timed out (after 200 minutes). Marking the build as failed.
Build was aborted
2016-11-03 02:50:41,079 INFO  [Thread-2623] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-11-03 02:50:41,085 INFO  [Thread-162] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
Archiving artifacts
2016-11-03 02:50:41,093 INFO  [Thread-5459] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
Compressed 697.85 MB of artifacts by 55.3% relative to #1463
Updating PHOENIX-3423
Updating PHOENIX-3434
Updating PHOENIX-3435
Updating PHOENIX-3416
Updating PHOENIX-3408
Updating PHOENIX-3386
Updating PHOENIX-3387
Updating PHOENIX-3422
Recording test results

Jenkins build is back to normal : Phoenix | Master #1476

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1476/changes>


Build failed in Jenkins: Phoenix | Master #1475

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1475/changes>

Changes:

[jamestaylor] PHOENIX-3456 Use unique table names for MutableIndexFailureIT

[jamestaylor] PHOENIX-3457 Disable parallel run of tests and increase memory

------------------------------------------
[...truncated 676 lines...]
Running org.apache.phoenix.end2end.RegexpSplitFunctionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.027 sec - in org.apache.phoenix.end2end.QueryWithOffsetIT
Running org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.452 sec - in org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.ReverseFunctionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.557 sec - in org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Running org.apache.phoenix.end2end.ReverseScanIT
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.379 sec - in org.apache.phoenix.end2end.ReverseFunctionIT
Running org.apache.phoenix.end2end.RoundFloorCeilFuncIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.504 sec - in org.apache.phoenix.end2end.ReverseScanIT
Running org.apache.phoenix.end2end.SerialIteratorsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.304 sec - in org.apache.phoenix.end2end.SerialIteratorsIT
Running org.apache.phoenix.end2end.ServerExceptionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.754 sec - in org.apache.phoenix.end2end.RegexpSplitFunctionIT
Running org.apache.phoenix.end2end.SignFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.268 sec - in org.apache.phoenix.end2end.ServerExceptionIT
Running org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.32 sec - in org.apache.phoenix.end2end.SignFunctionEnd2EndIT
Running org.apache.phoenix.end2end.SkipScanQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.112 sec - in org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Running org.apache.phoenix.end2end.SortMergeJoinIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.469 sec - in org.apache.phoenix.end2end.SkipScanQueryIT
Running org.apache.phoenix.end2end.SortMergeJoinMoreIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.377 sec - in org.apache.phoenix.end2end.SortMergeJoinMoreIT
Running org.apache.phoenix.end2end.SortOrderIT
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.874 sec - in org.apache.phoenix.end2end.RoundFloorCeilFuncIT
Running org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.659 sec - in org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Running org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.264 sec - in org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.StatementHintsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.294 sec - in org.apache.phoenix.end2end.StatementHintsIT
Running org.apache.phoenix.end2end.StddevIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.3 sec - in org.apache.phoenix.end2end.StddevIT
Running org.apache.phoenix.end2end.StoreNullsIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.58 sec - in org.apache.phoenix.end2end.StoreNullsIT
Running org.apache.phoenix.end2end.StringIT
Tests run: 45, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.909 sec - in org.apache.phoenix.end2end.SortOrderIT
Running org.apache.phoenix.end2end.StringToArrayFunctionIT
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007a3ad9000, 199868416, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 199868416 bytes for committing reserved memory.
# An error report file with more information is saved as:
# <https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/hs_err_pid30565.log>
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.189 sec - in org.apache.phoenix.end2end.StringIT
Running org.apache.phoenix.end2end.SubqueryIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.354 sec - in org.apache.phoenix.end2end.StringToArrayFunctionIT

Results :

Tests run: 817, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.CastAndCoerceIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.659 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CreateSchemaIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec - in org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.CreateTableIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.618 sec - in org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.CustomEntityDataIT
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.962 sec - in org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.DerivedTableIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.87 sec - in org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.DistinctCountIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.685 sec - in org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.DropSchemaIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.711 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.722 sec - in org.apache.phoenix.end2end.ExtendedQueryExecIT
Running org.apache.phoenix.end2end.FunkyNamesIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.125 sec - in org.apache.phoenix.end2end.DistinctCountIT
Running org.apache.phoenix.end2end.GroupByIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.926 sec - in org.apache.phoenix.end2end.DropSchemaIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.326 sec - in org.apache.phoenix.end2end.FunkyNamesIT
Running org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.NativeHBaseTypesIT
Tests run: 79, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.718 sec - in org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.944 sec - in org.apache.phoenix.end2end.NativeHBaseTypesIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.51 sec - in org.apache.phoenix.end2end.PointInTimeQueryIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.203 sec - in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.QueryIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.779 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.582 sec - in org.apache.phoenix.end2end.NotQueryIT
Tests run: 245, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.54 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Tests run: 105, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.111 sec - in org.apache.phoenix.end2end.GroupByIT
Running org.apache.phoenix.end2end.ReadIsolationLevelIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.113 sec - in org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.ScanQueryIT
Tests run: 126, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.523 sec - in org.apache.phoenix.end2end.QueryIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.839 sec - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.TopNIT
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.274 sec - in org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.TruncateFunctionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.541 sec - in org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.899 sec - in org.apache.phoenix.end2end.TruncateFunctionIT
Running org.apache.phoenix.end2end.UpsertValuesIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.349 sec - in org.apache.phoenix.end2end.ScanQueryIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.998 sec - in org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 127.658 sec - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Running org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.617 sec - in org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.944 sec - in org.apache.phoenix.end2end.VariableLengthPKIT
Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.492 sec - in org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.149 sec - in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.66 sec - in org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.847 sec - in org.apache.phoenix.end2end.UpsertSelectIT

Results :

Tests run: 1358, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (HBaseManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.189 sec - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.CountDistinctCompressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.711 sec - in org.apache.phoenix.end2end.ConnectionUtilIT
Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.205 sec - in org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.392 sec - in org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.094 sec - in org.apache.phoenix.end2end.QueryWithLimitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.789 sec - in org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.457 sec - in org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.482 sec - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.646 sec - in org.apache.phoenix.end2end.SpillableGroupByIT
Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.119 sec - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.331 sec - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.842 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.93 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Running org.apache.phoenix.execute.PartialCommitIT
Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.349 sec - in org.apache.phoenix.execute.PartialCommitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.934 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.441 sec - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.755 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 143.05 sec - in org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 242.564 sec - in org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.468 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.monitoring.PhoenixMetricsIT
Running org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.831 sec - in org.apache.phoenix.rpc.PhoenixClientRpcIT
Running org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.944 sec - in org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.088 sec - in org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.996 sec - in org.apache.phoenix.monitoring.PhoenixMetricsIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.075 sec - in org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 304.975 sec - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 163.651 sec - in org.apache.phoenix.iterate.ScannerLeaseRenewalIT

Results :

Tests run: 211, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) @ phoenix-core ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  4.142 s]
[INFO] Phoenix Core ....................................... FAILURE [31:39 min]
[INFO] Phoenix - Flume .................................... SKIPPED
[INFO] Phoenix - Pig ...................................... SKIPPED
[INFO] Phoenix Query Server Client ........................ SKIPPED
[INFO] Phoenix Query Server ............................... SKIPPED
[INFO] Phoenix - Pherf .................................... SKIPPED
[INFO] Phoenix - Spark .................................... SKIPPED
[INFO] Phoenix - Hive ..................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 31:46 min
[INFO] Finished at: 2016-11-05T07:10:32+00:00
[INFO] Final Memory: 79M/1077M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) on project phoenix-core: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-core
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 969.97 MB of artifacts by 67.7% relative to #1463
Updating PHOENIX-3456
Updating PHOENIX-3457
Recording test results

Build failed in Jenkins: Phoenix | Master #1474

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1474/changes>

Changes:

[jamestaylor] PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when

[jamestaylor] PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be

------------------------------------------
[...truncated 801328 lines...]
  IndexIT.testSelectDistinctOnTableWithSecondaryImmutableIndex:422 » PhoenixIO c...
  IndexIT.testTableDescriptorPriority:1068 » PhoenixIO callTimeout=1200000, call...
  IndexIT.testUpsertAfterIndexDrop:720 » PhoenixIO callTimeout=1200000, callDura...
  LocalIndexIT.testLocalIndexRoundTrip:100 » PhoenixIO org.apache.phoenix.except...
  LocalIndexIT.testLocalIndexRoundTrip:100 » PhoenixIO org.apache.phoenix.except...
  LocalIndexIT.testLocalIndexScanWithInList:531 » PhoenixIO org.apache.phoenix.e...

Tests run: 1660, Failures: 2, Errors: 23, Skipped: 1

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.AggregateQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.946 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CreateSchemaIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.735 sec - in org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.CreateTableIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.292 sec - in org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.CustomEntityDataIT
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 101.2 sec - in org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.DerivedTableIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.576 sec - in org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.DistinctCountIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.617 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.DropSchemaIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 137.104 sec - in org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.506 sec - in org.apache.phoenix.end2end.DistinctCountIT
Running org.apache.phoenix.end2end.FunkyNamesIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.93 sec - in org.apache.phoenix.end2end.ExtendedQueryExecIT
Running org.apache.phoenix.end2end.GroupByIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.296 sec - in org.apache.phoenix.end2end.DropSchemaIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.174 sec - in org.apache.phoenix.end2end.FunkyNamesIT
Running org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.NativeHBaseTypesIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.576 sec - in org.apache.phoenix.end2end.NativeHBaseTypesIT
Running org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 79, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 196.385 sec - in org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.675 sec - in org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 195.197 sec - in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.QueryIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.004 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.422 sec - in org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 105, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 140.832 sec - in org.apache.phoenix.end2end.GroupByIT
Running org.apache.phoenix.end2end.ReadIsolationLevelIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.691 sec - in org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 245, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 319.563 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.ScanQueryIT
Running org.apache.phoenix.end2end.SequenceIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.525 sec - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.399 sec - in org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.TopNIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.485 sec - in org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.TruncateFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.716 sec - in org.apache.phoenix.end2end.TruncateFunctionIT
Running org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 126, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 192.701 sec - in org.apache.phoenix.end2end.QueryIT
Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 121.245 sec - in org.apache.phoenix.end2end.ScanQueryIT
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.845 sec - in org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.455 sec - in org.apache.phoenix.end2end.salted.SaltedTableIT
Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
Running org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.914 sec - in org.apache.phoenix.rpc.UpdateCacheWithScnIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 248.104 sec - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 265.305 sec - in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.061 sec - in org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 237.556 sec - in org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 304.368 sec - in org.apache.phoenix.end2end.UpsertSelectIT

Results :

Tests run: 1358, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (HBaseManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.246 sec - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.501 sec - in org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.615 sec - in org.apache.phoenix.end2end.CountDistinctCompressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.323 sec - in org.apache.phoenix.end2end.ConnectionUtilIT
Running org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.QueryWithLimitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.171 sec - in org.apache.phoenix.end2end.QueryTimeoutIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.002 sec - in org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 140.915 sec - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.659 sec - in org.apache.phoenix.end2end.SpillableGroupByIT
Running org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.146 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 106.108 sec - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.65 sec - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 219.8 sec - in org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.282 sec - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.635 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.execute.PartialCommitIT
Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 465.135 sec - in org.apache.phoenix.end2end.IndexExtendedIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.225 sec - in org.apache.phoenix.execute.PartialCommitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.235 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.474 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.67 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Running org.apache.phoenix.monitoring.PhoenixMetricsIT
Running org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.053 sec <<< FAILURE! - in org.apache.phoenix.end2end.RenewLeaseIT
org.apache.phoenix.end2end.RenewLeaseIT  Time elapsed: 0.053 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: 
Failed after attempts=35, exceptions:
Sat Nov 05 05:59:10 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=26, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 05:59:12 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=28, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 05:59:14 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=30, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 05:59:17 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=32, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 05:59:20 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=34, waitTime=2007, operationTimeout=2000 expired.
Sat Nov 05 05:59:24 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=36, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 05:59:30 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=38, waitTime=2004, operationTimeout=2000 expired.
Sat Nov 05 05:59:42 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=40, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 05:59:54 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=42, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:00:06 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=44, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 06:00:18 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=46, waitTime=2002, operationTimeout=2000 expired.
Sat Nov 05 06:00:40 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=48, waitTime=2007, operationTimeout=2000 expired.
Sat Nov 05 06:01:02 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=50, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:01:24 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=52, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:01:46 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=54, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:02:09 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=56, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:02:31 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=58, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 06:02:53 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=60, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:03:15 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=62, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:03:37 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=64, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 06:04:00 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=66, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:04:22 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=68, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:04:44 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=70, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:05:06 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=72, waitTime=2008, operationTimeout=2000 expired.
Sat Nov 05 06:05:28 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=74, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:05:50 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=76, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:06:12 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=78, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:06:34 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=80, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:06:57 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=82, waitTime=2006, operationTimeout=2000 expired.
Sat Nov 05 06:07:19 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=84, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:07:41 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=86, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:08:03 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=88, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:08:25 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=90, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:08:47 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=92, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:09:09 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=94, waitTime=2005, operationTimeout=2000 expired.

	at org.apache.phoenix.end2end.RenewLeaseIT.doSetup(RenewLeaseIT.java:57)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: 
Failed after attempts=35, exceptions:

	at org.apache.phoenix.end2end.RenewLeaseIT.doSetup(RenewLeaseIT.java:57)
Caused by: java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=94, waitTime=2005, operationTimeout=2000 expired.
	at org.apache.phoenix.end2end.RenewLeaseIT.doSetup(RenewLeaseIT.java:57)
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=94, waitTime=2005, operationTimeout=2000 expired.
	at org.apache.phoenix.end2end.RenewLeaseIT.doSetup(RenewLeaseIT.java:57)
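The retry cadence in the trace above (attempts roughly 2 s apart at first, settling at ~22 s) is the HBase client's capped exponential backoff: each attempt blocks for `operationTimeout` (2000 ms here) and then sleeps `hbase.client.pause` (100 ms here) times a backoff multiplier. The sketch below reproduces that schedule; the multiplier table is an assumption based on the `ConnectionUtils` backoff table in HBase 1.x clients, and the class/method names (`RetryBackoffSketch`, `pauseBeforeRetry`) are illustrative, not Phoenix or HBase API.

```java
// Sketch of the inter-attempt delay behind RpcRetryingCaller's schedule.
// Assumption: the capped multiplier table used by HBase 1.x ConnectionUtils.
public class RetryBackoffSketch {
    // Multipliers applied to hbase.client.pause; capped at the last entry.
    static final int[] RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200};

    /** Backoff sleep before retry number {@code attempt} (0-based), in ms. */
    static long pauseBeforeRetry(long pauseMs, int attempt) {
        int idx = Math.min(attempt, RETRY_BACKOFF.length - 1);
        return pauseMs * RETRY_BACKOFF[idx];
    }

    public static void main(String[] args) {
        long pause = 100;      // hbase.client.pause, per the log's RpcRetryingCaller{pause=100,...}
        long opTimeout = 2000; // operationTimeout=2000 from the CallTimeoutException messages
        // Observed gap between attempts is roughly opTimeout + backoff sleep.
        for (int i = 0; i < RETRY_BACKOFF.length; i++) {
            System.out.println("attempt " + i + ": ~"
                    + (opTimeout + pauseBeforeRetry(pause, i)) + " ms until next try");
        }
        // Steady state: 2000 + 100 * 200 = 22000 ms, matching the ~22 s gaps above.
    }
}
```

With retries=35 and the multiplier capped at 200, the 35 attempts stretch over the ~10 minutes visible in the timestamps, which is why a single unreachable region server port can stall `doSetup` until the caller gives up.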

Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 464.899 sec - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.415 sec - in org.apache.phoenix.rpc.PhoenixClientRpcIT
Running org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.116 sec - in org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.084 sec - in org.apache.phoenix.monitoring.PhoenixMetricsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 172.918 sec - in org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 267.807 sec - in org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Build timed out (after 200 minutes). Marking the build as failed.
Build was aborted
Archiving artifacts
Compressed 846.00 MB of artifacts by 45.6% relative to #1463
Updating PHOENIX-3449
Updating PHOENIX-3454
Recording test results

Build failed in Jenkins: Phoenix | Master #1473

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1473/changes>

Changes:

[jamestaylor] PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily

------------------------------------------
[...truncated 823753 lines...]
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:13,620 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:13,620 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
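The `IllegalStateException` above is Tephra's `HDFSTransactionStateStorage` refusing to start because no snapshot directory is configured, exactly as the message says. A minimal sketch of the fix, assuming the property belongs in the cluster's `hbase-site.xml` (or the test's Configuration) and using a placeholder path:

```xml
<!-- Hypothetical hbase-site.xml fragment: give Tephra an HDFS directory
     for transaction state snapshots. The path shown is a placeholder. -->
<property>
  <name>data.tx.snapshot.dir</name>
  <value>/tmp/tephra/snapshots</value>
</property>
```

In this run the thread dies but the coprocessor merely logs "Failed to initialize TransactionStateCache", so the miniclusters continue without transaction snapshotting rather than failing the build outright.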
2016-11-04 19:44:14,816 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:16,101 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:17,817 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:18,143 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:18,150 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:18,261 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:18,419 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:18,424 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:19,103 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:20,817 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:22,109 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:22,709 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:22,861 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:22,922 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:23,261 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:23,362 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:23,817 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:25,110 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:26,818 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:28,110 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:28,183 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,196 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,296 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,457 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,464 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,713 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:28,714 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:28,714 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:28,714 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:28,717 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:44:28,717 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:29,818 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:31,110 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:32,757 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:32,819 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:32,915 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:32,981 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:33,293 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:33,427 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:34,111 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:35,820 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:37,112 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:38,220 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,238 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,357 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,495 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,535 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,821 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:40,113 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:41,822 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:42,837 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:42,971 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:43,016 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:43,113 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:43,350 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:43,489 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:43,758 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:43,758 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:43,759 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:43,759 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:43,761 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:43,762 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:44:44,822 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:46,114 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:47,823 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:48,265 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:48,281 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:48,413 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:48,533 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:48,613 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:49,114 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:50,824 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:52,117 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:52,886 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,026 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,077 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,396 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,543 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,825 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:55,118 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:56,828 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:58,118 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:58,313 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,325 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,474 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,612 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,678 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,797 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:58,797 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:58,797 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:58,797 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:58,800 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:58,800 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:44:59,829 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:01,118 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:02,833 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:02,945 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:03,093 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:03,161 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:03,452 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:03,593 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:04,121 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:05,833 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:07,121 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:08,353 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,389 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,559 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,684 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,733 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,837 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:10,125 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:11,841 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:13,010 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,126 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:13,128 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,229 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,497 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,645 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,844 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:45:13,845 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:45:13,845 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:45:13,845 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:45:13,847 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:45:13,847 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:45:14,841 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:16,126 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:17,843 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:18,401 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:18,451 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:18,605 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:18,740 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:18,785 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:19,127 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:20,843 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:22,127 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:23,061 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,177 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,285 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,561 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,681 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,844 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:25,128 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:26,121 DEBUG [IPC Server handler 1 on 41607] org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer(1303): *BLOCK* NameNode.blockReport: from DatanodeRegistration(127.0.0.1:59945, datanodeUuid=c66f5ea1-0749-4ba2-88b5-42236077d574, infoPort=38428, infoSecurePort=0, ipcPort=42638, storageInfo=lv=-56;cid=testClusterID;nsid=303383496;c=0), reports.length=2
2016-11-04 19:45:26,125 INFO  [IPC Server handler 1 on 41607] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-69b57511-8b26-4753-8758-d363e10d2208 node DatanodeRegistration(127.0.0.1:59945, datanodeUuid=c66f5ea1-0749-4ba2-88b5-42236077d574, infoPort=38428, infoSecurePort=0, ipcPort=42638, storageInfo=lv=-56;cid=testClusterID;nsid=303383496;c=0), blocks: 195, hasStaleStorage: false, processing time: 4 msecs
2016-11-04 19:45:26,129 INFO  [IPC Server handler 1 on 41607] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-c1e676a6-a69a-4758-8880-cdc7ac376e03 node DatanodeRegistration(127.0.0.1:59945, datanodeUuid=c66f5ea1-0749-4ba2-88b5-42236077d574, infoPort=38428, infoSecurePort=0, ipcPort=42638, storageInfo=lv=-56;cid=testClusterID;nsid=303383496;c=0), blocks: 194, hasStaleStorage: false, processing time: 3 msecs
2016-11-04 19:45:26,844 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:28,128 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:28,452 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,490 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,680 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,809 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,828 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,930 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:45:28,930 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:45:28,931 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:45:28,931 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:45:28,936 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:45:28,936 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
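The Tephra stack trace above is self-describing: HDFSTransactionStateStorage refuses to start because the snapshot directory property it checks for is unset in the mini-cluster's configuration. Were this to need fixing in a real deployment, the error message itself names the property to set. A minimal hbase-site.xml sketch follows; the directory value is a hypothetical placeholder, not a path taken from this log:

```xml
<!-- Hedged sketch: data.tx.snapshot.dir is the property named by the
     error message above; the path below is an illustrative example. -->
<property>
  <name>data.tx.snapshot.dir</name>
  <value>/tmp/tephra/snapshots</value>
</property>
```

In these test runs the TransactionStateCache merely logs the failure and the suite keeps going (the log continues normally afterward), so this trace is recurring noise rather than the cause of the build timeout.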
2016-11-04 19:45:29,844 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:31,129 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:32,845 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:33,113 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:33,239 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:33,381 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:33,631 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:33,745 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:34,129 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:35,845 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:37,133 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:38,505 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:38,547 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:38,751 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:38,847 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:38,854 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:38,881 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:40,134 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:41,848 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:43,137 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:43,165 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,303 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,432 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,633 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,809 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,980 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:45:43,980 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:45:43,980 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:45:43,980 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:45:43,984 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:45:43,984 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:45:44,848 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:46,138 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:47,849 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:48,577 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:48,598 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:48,812 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:48,902 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:48,925 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:49,141 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:50,853 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:52,144 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
Build timed out (after 200 minutes). Marking the build as failed.
Build was aborted
Archiving artifacts
2016-11-04 19:45:52,315 INFO  [Thread-162] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-11-04 19:45:52,650 INFO  [Thread-3639] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
Compressed 700.08 MB of artifacts by 55.1% relative to #1463
Updating PHOENIX-3199
Recording test results

Build failed in Jenkins: Phoenix | Master #1472

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1472/changes>

Changes:

[jamestaylor] PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be

------------------------------------------
[...truncated 790234 lines...]
2016-11-04 09:58:17,981 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,030 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,042 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,050 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,142 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,706 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:18,897 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:21,706 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:21,897 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:24,706 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:24,897 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:25,426 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:25,495 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:25,495 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:25,495 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:25,506 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:27,707 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:27,898 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:27,991 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:28,041 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:28,052 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:28,064 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:28,153 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:30,707 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:30,898 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:33,707 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:33,898 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:35,440 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:35,506 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:35,506 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:35,506 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:35,517 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:36,708 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:36,899 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:38,002 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:38,052 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:38,063 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:38,074 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:38,164 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:39,708 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:39,899 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:42,709 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:42,899 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:45,452 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:45,522 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:45,529 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:45,529 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:45,533 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:45,709 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:45,900 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:48,038 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:48,078 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:48,090 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:48,093 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:48,195 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:48,709 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:48,900 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:51,710 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:51,901 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:54,710 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:54,901 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:55,468 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:55,534 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:55,547 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:55,550 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:55,550 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:57,710 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:57,901 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:58,049 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:58,089 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:58,101 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:58,103 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:58,205 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:00,711 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:00,902 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:02,104 WARN  [snapshot-log-cleaner-cache-refresher] org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask(318): Failed to refresh snapshot hfile cache!
java.net.ConnectException: Call From pietas.apache.org/67.195.81.190 to localhost:34993 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor106.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1407)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy25.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:210)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.access$000(SnapshotFileCache.java:76)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask.run(SnapshotFileCache.java:316)
	at java.util.TimerThread.mainLoop(Timer.java:555)
	at java.util.TimerThread.run(Timer.java:505)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1446)
	... 30 more
2016-11-04 09:59:02,105 WARN  [snapshot-hfile-cleaner-cache-refresher] org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask(318): Failed to refresh snapshot hfile cache!
java.net.ConnectException: Call From pietas.apache.org/67.195.81.190 to localhost:34993 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.GeneratedConstructorAccessor106.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
	at org.apache.hadoop.ipc.Client.call(Client.java:1407)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy25.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.refreshCache(SnapshotFileCache.java:210)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache.access$000(SnapshotFileCache.java:76)
	at org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache$RefreshCacheTask.run(SnapshotFileCache.java:316)
	at java.util.TimerThread.mainLoop(Timer.java:555)
	at java.util.TimerThread.run(Timer.java:505)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
	at org.apache.hadoop.ipc.Client.call(Client.java:1446)
	... 30 more
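An editorial note on the two identical warnings above: both snapshot-cache refresh failures reduce to the same root cause — nothing is listening on the mini-cluster NameNode port (localhost:34993) any more when the RefreshCacheTask fires. A minimal, self-contained sketch of that symptom check (the port number is taken from the trace above; on a host where the mini-cluster has shut down, the connect fails immediately with the same ConnectException seen in the log):

```java
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        // Port from the stack trace above (the mini-cluster NameNode).
        // Once the test JVM's mini-cluster is gone, connect() fails fast
        // with ConnectException, which is what the snapshot cache
        // refresher keeps hitting every refresh interval.
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("localhost", 34993), 1000);
            System.out.println("open");
        } catch (ConnectException e) {
            System.out.println("connection refused");
        } catch (Exception e) {
            System.out.println("unreachable: " + e.getMessage());
        }
    }
}
```

This only confirms the symptom; the underlying issue here is the refresher thread outliving the mini-cluster it depends on, not a real network fault.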
2016-11-04 09:59:03,711 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:03,902 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:05,480 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:05,545 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:05,559 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:05,561 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:05,561 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:06,711 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:06,902 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:08,060 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:08,101 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:08,112 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:08,113 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:08,216 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:09,712 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:09,903 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:12,712 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:12,903 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:15,505 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:15,557 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:15,573 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:15,575 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:15,577 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:15,713 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:15,903 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:18,082 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:18,124 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:18,127 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:18,131 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:18,245 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:18,713 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:18,904 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:21,713 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:21,904 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:24,714 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:24,904 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:25,523 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:25,571 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:25,586 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:25,586 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:25,601 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:27,714 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:27,905 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:28,093 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:28,134 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:28,140 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:28,141 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:28,255 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:30,714 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:30,905 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:33,715 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:33,905 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:35,534 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:35,581 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:35,597 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:35,597 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:35,612 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:59:36,715 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:36,906 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:38,104 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:38,146 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:38,151 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:38,151 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:38,266 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:59:39,715 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:59:39,906 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
Build timed out (after 200 minutes). Marking the build as failed.
Build was aborted
2016-11-04 09:59:40,511 INFO  [Thread-8556] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
Archiving artifacts
Compressed 636.72 MB of artifacts by 52.4% relative to #1463
Error updating JIRA issues. Saving issues for next build.
java.lang.NullPointerException
Recording test results

Build failed in Jenkins: Phoenix | Master #1471

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1471/changes>

Changes:

[jamestaylor] PHOENIX-3421 Column name lookups fail when on an indexed table

[jamestaylor] PHOENIX-3439 Query using an RVC based on the base table PK is

------------------------------------------
[...truncated 856296 lines...]
2016-11-04 03:10:22,489 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 03:10:22,489 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 03:10:22,493 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 03:10:22,494 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 03:10:22,698 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:22,701 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:22,727 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:22,783 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:22,913 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:23,117 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:24,970 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:25,469 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:25,734 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:25,743 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:25,785 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:26,121 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:26,165 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:27,970 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:29,122 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:30,971 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:32,125 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:32,740 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:32,740 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:32,769 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:32,829 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:32,975 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:33,971 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:35,129 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:35,530 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:35,786 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:35,803 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:35,832 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:36,201 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:36,972 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:37,529 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 03:10:37,529 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 03:10:37,529 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 03:10:37,529 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 03:10:37,531 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 03:10:37,531 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 03:10:38,130 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:39,973 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:41,130 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:42,785 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:42,785 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:42,809 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:42,880 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:42,974 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:43,028 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:44,133 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:45,568 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:45,810 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:45,865 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:45,874 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:45,974 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:46,263 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:47,133 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:48,974 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:50,134 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:51,975 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:52,558 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 03:10:52,558 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 03:10:52,558 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 03:10:52,558 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 03:10:52,560 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 03:10:52,560 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
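The IllegalStateException above recurs roughly every 15 seconds as the tx-state-refresh thread retries: Tephra's HDFSTransactionStateStorage refuses to start because no snapshot directory is configured. The property name comes straight from the error message; as a minimal sketch of the kind of fix involved, one would set it in the cluster's hbase-site.xml (the file placement and the HDFS path below are illustrative assumptions, not taken from this build):

```xml
<!-- hbase-site.xml (illustrative): give Tephra a snapshot directory so
     HDFSTransactionStateStorage.startUp() passes its Preconditions check.
     The path is an example value, not one used by this Jenkins job. -->
<property>
  <name>data.tx.snapshot.dir</name>
  <value>/tmp/tephra/snapshots</value>
</property>
```

Note that in this run the error is likely incidental to the failure: the build was ultimately killed by the 200-minute timeout (see the "Build timed out" line at the end of the log), while these Tephra initialization failures simply repeat throughout.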
2016-11-04 03:10:52,826 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:52,836 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:52,861 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:52,931 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:53,077 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:10:53,137 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:54,975 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:55,629 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:55,867 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:55,905 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:55,920 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:56,138 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:56,309 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:10:57,977 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:10:59,141 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:00,979 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:02,141 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:02,866 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:02,880 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:02,917 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:02,969 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:03,135 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:03,979 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:05,145 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:05,669 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:05,921 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:05,946 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:05,973 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:06,357 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:06,981 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:07,594 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 03:11:07,595 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 03:11:07,595 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 03:11:07,595 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 03:11:07,603 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 03:11:07,603 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 03:11:08,145 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:09,981 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:11,146 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:12,924 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:12,935 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:12,982 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:13,001 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:13,021 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:13,233 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:14,146 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:15,733 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:15,970 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:15,985 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:15,986 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:16,012 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:16,401 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:17,147 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:18,986 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:20,147 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:21,987 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:22,643 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 03:11:22,643 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 03:11:22,643 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 03:11:22,643 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 03:11:22,645 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 03:11:22,645 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 03:11:22,994 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:22,999 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:23,073 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:23,106 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:23,148 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:23,285 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:24,987 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:25,766 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:26,008 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:26,039 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:26,045 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:26,148 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:26,441 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:27,988 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:29,149 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:30,988 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:32,153 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:33,037 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:33,057 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:33,138 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:33,146 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:33,329 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:33,989 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:35,154 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:35,845 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:36,059 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:36,082 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:36,099 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:36,499 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:36,989 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:37,717 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 03:11:37,717 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 03:11:37,717 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 03:11:37,717 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 03:11:37,719 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 03:11:37,719 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 03:11:38,154 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:39,989 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:41,157 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:42,990 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:43,085 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:43,103 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:43,209 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:43,216 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:43,391 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:44,158 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:45,881 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:45,990 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:46,126 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:46,150 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:46,167 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:46,552 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:47,159 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:48,993 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:50,159 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:51,994 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:52,742 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 03:11:52,743 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 03:11:52,743 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 03:11:52,743 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 03:11:52,760 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 03:11:52,760 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 03:11:53,140 INFO  [B.defaultRpcServer.handler=1,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:53,157 INFO  [B.defaultRpcServer.handler=3,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:53,160 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:53,261 INFO  [B.defaultRpcServer.handler=2,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:53,270 INFO  [B.defaultRpcServer.handler=4,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:53,433 INFO  [B.defaultRpcServer.handler=0,queue=0,port=46026] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #412, waiting for 1  actions to finish
2016-11-04 03:11:54,995 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:55,930 INFO  [B.defaultRpcServer.handler=4,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:56,160 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:56,173 INFO  [B.defaultRpcServer.handler=3,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:56,197 INFO  [B.defaultRpcServer.handler=2,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:56,198 INFO  [B.defaultRpcServer.handler=0,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:56,613 INFO  [B.defaultRpcServer.handler=1,queue=0,port=38945] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #581, waiting for 1  actions to finish
2016-11-04 03:11:57,996 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 03:11:59,161 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3bebbb4] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
Build timed out (after 200 minutes). Marking the build as failed.
Build was aborted
Archiving artifacts
2016-11-04 03:12:00,793 INFO  [Thread-3485] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-11-04 03:12:00,996 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6255b6b9] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
Compressed 723.76 MB of artifacts by 53.3% relative to #1463
Updating PHOENIX-3439
Updating PHOENIX-3421
Recording test results