Posted to commits@phoenix.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/11/04 19:51:45 UTC

Build failed in Jenkins: Phoenix | Master #1473

See <https://builds.apache.org/job/Phoenix-master/1473/changes>

Changes:

[jamestaylor] PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily

------------------------------------------
[...truncated 823753 lines...]
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:13,620 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:13,620 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
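The IllegalStateException above comes from a Guava precondition check in Tephra's HDFSTransactionStateStorage.startUp(): the coprocessor cannot persist transaction state snapshots until data.tx.snapshot.dir is set. A minimal sketch of the missing configuration, assuming the region servers pick it up from hbase-site.xml (the HDFS path below is a hypothetical example, not a value from this build):

    <!-- hbase-site.xml: directory where Tephra stores transaction state
         snapshots; the path here is an illustrative assumption. -->
    <property>
      <name>data.tx.snapshot.dir</name>
      <value>/tmp/tephra/snapshots</value>
    </property>

In this test mini-cluster the property is simply unset, which is why the same startup failure repeats on every tx-state-refresh cycle below.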
2016-11-04 19:44:14,816 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:16,101 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:17,817 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:18,143 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:18,150 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:18,261 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:18,419 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:18,424 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:19,103 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:20,817 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:22,109 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:22,709 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:22,861 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:22,922 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:23,261 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:23,362 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:23,817 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:25,110 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:26,818 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:28,110 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:28,183 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,196 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,296 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,457 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,464 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:28,713 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:28,714 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:28,714 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:28,714 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:28,717 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:44:28,717 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:29,818 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:31,110 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:32,757 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:32,819 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:32,915 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:32,981 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:33,293 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:33,427 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:34,111 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:35,820 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:37,112 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:38,220 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,238 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,357 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,495 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,535 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:38,821 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:40,113 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:41,822 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:42,837 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:42,971 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:43,016 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:43,113 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:43,350 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:43,489 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:43,758 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:43,758 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:43,759 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:43,759 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:43,761 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:43,762 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:44:44,822 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:46,114 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:47,823 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:48,265 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:48,281 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:48,413 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:48,533 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:48,613 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:49,114 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:50,824 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:52,117 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:52,886 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,026 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,077 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,396 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,543 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:44:53,825 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:55,118 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:56,828 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:58,118 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:58,313 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,325 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,474 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,612 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,678 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:44:58,797 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:58,797 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:58,797 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:58,797 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:58,800 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:58,800 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:44:59,829 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:01,118 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:02,833 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:02,945 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:03,093 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:03,161 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:03,452 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:03,593 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:04,121 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:05,833 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:07,121 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:08,353 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,389 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,559 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,684 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,733 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:08,837 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:10,125 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:11,841 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:13,010 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,126 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:13,128 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,229 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,497 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,645 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:13,844 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:45:13,845 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:45:13,845 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:45:13,845 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:45:13,847 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:45:13,847 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:45:14,841 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:16,126 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:17,843 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:18,401 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:18,451 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:18,605 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:18,740 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:18,785 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:19,127 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:20,843 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:22,127 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:23,061 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,177 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,285 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,561 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,681 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:23,844 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:25,128 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:26,121 DEBUG [IPC Server handler 1 on 41607] org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer(1303): *BLOCK* NameNode.blockReport: from DatanodeRegistration(127.0.0.1:59945, datanodeUuid=c66f5ea1-0749-4ba2-88b5-42236077d574, infoPort=38428, infoSecurePort=0, ipcPort=42638, storageInfo=lv=-56;cid=testClusterID;nsid=303383496;c=0), reports.length=2
2016-11-04 19:45:26,125 INFO  [IPC Server handler 1 on 41607] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-69b57511-8b26-4753-8758-d363e10d2208 node DatanodeRegistration(127.0.0.1:59945, datanodeUuid=c66f5ea1-0749-4ba2-88b5-42236077d574, infoPort=38428, infoSecurePort=0, ipcPort=42638, storageInfo=lv=-56;cid=testClusterID;nsid=303383496;c=0), blocks: 195, hasStaleStorage: false, processing time: 4 msecs
2016-11-04 19:45:26,129 INFO  [IPC Server handler 1 on 41607] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1862): BLOCK* processReport: from storage DS-c1e676a6-a69a-4758-8880-cdc7ac376e03 node DatanodeRegistration(127.0.0.1:59945, datanodeUuid=c66f5ea1-0749-4ba2-88b5-42236077d574, infoPort=38428, infoSecurePort=0, ipcPort=42638, storageInfo=lv=-56;cid=testClusterID;nsid=303383496;c=0), blocks: 194, hasStaleStorage: false, processing time: 3 msecs
2016-11-04 19:45:26,844 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:28,128 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:28,452 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,490 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,680 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,809 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,828 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:28,930 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:45:28,930 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:45:28,931 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:45:28,931 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:45:28,936 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:45:28,936 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:45:29,844 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:31,129 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:32,845 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:33,113 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:33,239 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:33,381 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:33,631 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:33,745 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:34,129 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:35,845 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:37,133 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:38,505 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:38,547 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:38,751 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:38,847 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:38,854 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:38,881 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:40,134 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:41,848 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:43,137 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:43,165 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,303 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,432 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,633 INFO  [B.defaultRpcServer.handler=4,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,809 INFO  [B.defaultRpcServer.handler=3,queue=0,port=50930] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1009, waiting for 1  actions to finish
2016-11-04 19:45:43,980 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:45:43,980 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:45:43,980 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:45:43,980 DEBUG [tx-state-refresh] org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:45:43,984 ERROR [HDFSTransactionStateStorage STARTING] org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
	at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
	at org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
	at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
	at java.lang.Thread.run(Thread.java:745)
2016-11-04 19:45:43,984 INFO  [tx-state-refresh] org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize TransactionStateCache due to: java.lang.IllegalStateException: Snapshot directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:45:44,848 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:46,138 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:47,849 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:48,577 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:48,598 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:48,812 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:48,902 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:48,925 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, waiting for 1  actions to finish
2016-11-04 19:45:49,141 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:50,853 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 19:45:52,144 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
Build timed out (after 200 minutes). Marking the build as failed.
Build was aborted
Archiving artifacts
2016-11-04 19:45:52,315 INFO  [Thread-162] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-11-04 19:45:52,650 INFO  [Thread-3639] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
Compressed 700.08 MB of artifacts by 55.1% relative to #1463
Updating PHOENIX-3199
Recording test results

Jenkins build is back to normal : Phoenix | Master #1476

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1476/changes>


Build failed in Jenkins: Phoenix | Master #1475

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1475/changes>

Changes:

[jamestaylor] PHOENIX-3456 Use unique table names for MutableIndexFailureIT

[jamestaylor] PHOENIX-3457 Disable parallel run of tests and increase memory

------------------------------------------
[...truncated 676 lines...]
Running org.apache.phoenix.end2end.RegexpSplitFunctionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.027 sec - in org.apache.phoenix.end2end.QueryWithOffsetIT
Running org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.452 sec - in org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.ReverseFunctionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.557 sec - in org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Running org.apache.phoenix.end2end.ReverseScanIT
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.379 sec - in org.apache.phoenix.end2end.ReverseFunctionIT
Running org.apache.phoenix.end2end.RoundFloorCeilFuncIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.504 sec - in org.apache.phoenix.end2end.ReverseScanIT
Running org.apache.phoenix.end2end.SerialIteratorsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.304 sec - in org.apache.phoenix.end2end.SerialIteratorsIT
Running org.apache.phoenix.end2end.ServerExceptionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.754 sec - in org.apache.phoenix.end2end.RegexpSplitFunctionIT
Running org.apache.phoenix.end2end.SignFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.268 sec - in org.apache.phoenix.end2end.ServerExceptionIT
Running org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.32 sec - in org.apache.phoenix.end2end.SignFunctionEnd2EndIT
Running org.apache.phoenix.end2end.SkipScanQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.112 sec - in org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Running org.apache.phoenix.end2end.SortMergeJoinIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.469 sec - in org.apache.phoenix.end2end.SkipScanQueryIT
Running org.apache.phoenix.end2end.SortMergeJoinMoreIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.377 sec - in org.apache.phoenix.end2end.SortMergeJoinMoreIT
Running org.apache.phoenix.end2end.SortOrderIT
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.874 sec - in org.apache.phoenix.end2end.RoundFloorCeilFuncIT
Running org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.659 sec - in org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Running org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.264 sec - in org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.StatementHintsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.294 sec - in org.apache.phoenix.end2end.StatementHintsIT
Running org.apache.phoenix.end2end.StddevIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.3 sec - in org.apache.phoenix.end2end.StddevIT
Running org.apache.phoenix.end2end.StoreNullsIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.58 sec - in org.apache.phoenix.end2end.StoreNullsIT
Running org.apache.phoenix.end2end.StringIT
Tests run: 45, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.909 sec - in org.apache.phoenix.end2end.SortOrderIT
Running org.apache.phoenix.end2end.StringToArrayFunctionIT
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007a3ad9000, 199868416, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 199868416 bytes for committing reserved memory.
# An error report file with more information is saved as:
# <https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/hs_err_pid30565.log>
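This is a native allocation failure (errno=12 from os::commit_memory while committing about 190 MB), i.e. the forked test JVM exhausting memory on the build agent rather than a Java-level OutOfMemoryError. It is the class of failure that PHOENIX-3457 in this build's change list ("Disable parallel run of tests and increase memory") targets. A minimal sketch of that kind of failsafe change, with illustrative values that are assumptions rather than what was actually committed:

    <!-- pom.xml: run integration tests in a single forked JVM instead of
         several parallel ones; forkCount and the heap size below are
         illustrative assumptions. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <configuration>
        <forkCount>1</forkCount>
        <reuseForks>true</reuseForks>
        <argLine>-Xmx2g</argLine>
      </configuration>
    </plugin>

Fewer concurrent forks trade test wall-clock time for a smaller peak memory footprint on the agent.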
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.189 sec - in org.apache.phoenix.end2end.StringIT
Running org.apache.phoenix.end2end.SubqueryIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.354 sec - in org.apache.phoenix.end2end.StringToArrayFunctionIT

Results :

Tests run: 817, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.CastAndCoerceIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.659 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CreateSchemaIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec - in org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.CreateTableIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.618 sec - in org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.CustomEntityDataIT
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.962 sec - in org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.DerivedTableIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.87 sec - in org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.DistinctCountIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.685 sec - in org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.DropSchemaIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.711 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.722 sec - in org.apache.phoenix.end2end.ExtendedQueryExecIT
Running org.apache.phoenix.end2end.FunkyNamesIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.125 sec - in org.apache.phoenix.end2end.DistinctCountIT
Running org.apache.phoenix.end2end.GroupByIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.926 sec - in org.apache.phoenix.end2end.DropSchemaIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.326 sec - in org.apache.phoenix.end2end.FunkyNamesIT
Running org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.NativeHBaseTypesIT
Tests run: 79, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.718 sec - in org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.944 sec - in org.apache.phoenix.end2end.NativeHBaseTypesIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.51 sec - in org.apache.phoenix.end2end.PointInTimeQueryIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.203 sec - in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.QueryIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.779 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.582 sec - in org.apache.phoenix.end2end.NotQueryIT
Tests run: 245, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.54 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Tests run: 105, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.111 sec - in org.apache.phoenix.end2end.GroupByIT
Running org.apache.phoenix.end2end.ReadIsolationLevelIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.113 sec - in org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.ScanQueryIT
Tests run: 126, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.523 sec - in org.apache.phoenix.end2end.QueryIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.839 sec - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.TopNIT
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.274 sec - in org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.TruncateFunctionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.541 sec - in org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.899 sec - in org.apache.phoenix.end2end.TruncateFunctionIT
Running org.apache.phoenix.end2end.UpsertValuesIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.349 sec - in org.apache.phoenix.end2end.ScanQueryIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.998 sec - in org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 127.658 sec - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Running org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.617 sec - in org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.944 sec - in org.apache.phoenix.end2end.VariableLengthPKIT
Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.492 sec - in org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.149 sec - in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.66 sec - in org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.847 sec - in org.apache.phoenix.end2end.UpsertSelectIT

Results :

Tests run: 1358, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (HBaseManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.189 sec - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.CountDistinctCompressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.711 sec - in org.apache.phoenix.end2end.ConnectionUtilIT
Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.205 sec - in org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.392 sec - in org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.094 sec - in org.apache.phoenix.end2end.QueryWithLimitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.789 sec - in org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.457 sec - in org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.482 sec - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.646 sec - in org.apache.phoenix.end2end.SpillableGroupByIT
Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.119 sec - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.331 sec - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.842 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.93 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Running org.apache.phoenix.execute.PartialCommitIT
Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.349 sec - in org.apache.phoenix.execute.PartialCommitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.934 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.441 sec - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.755 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 143.05 sec - in org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 242.564 sec - in org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.468 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.monitoring.PhoenixMetricsIT
Running org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.831 sec - in org.apache.phoenix.rpc.PhoenixClientRpcIT
Running org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.944 sec - in org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.088 sec - in org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.996 sec - in org.apache.phoenix.monitoring.PhoenixMetricsIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.075 sec - in org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 304.975 sec - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 163.651 sec - in org.apache.phoenix.iterate.ScannerLeaseRenewalIT

Results :

Tests run: 211, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) @ phoenix-core ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  4.142 s]
[INFO] Phoenix Core ....................................... FAILURE [31:39 min]
[INFO] Phoenix - Flume .................................... SKIPPED
[INFO] Phoenix - Pig ...................................... SKIPPED
[INFO] Phoenix Query Server Client ........................ SKIPPED
[INFO] Phoenix Query Server ............................... SKIPPED
[INFO] Phoenix - Pherf .................................... SKIPPED
[INFO] Phoenix - Spark .................................... SKIPPED
[INFO] Phoenix - Hive ..................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 31:46 min
[INFO] Finished at: 2016-11-05T07:10:32+00:00
[INFO] Final Memory: 79M/1077M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) on project phoenix-core: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-core
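
(Editor's note: for reference, a resumed run would look something like the following, assuming the job's original goals were clean verify — the actual <goals> are not shown in this log:

    mvn clean verify -rf :phoenix-core       # resume the reactor at the failing phoenix-core module
    mvn clean verify -rf :phoenix-core -e    # same, with full stack traces
    mvn clean verify -rf :phoenix-core -X    # same, with full debug logging
)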
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 969.97 MB of artifacts by 67.7% relative to #1463
Updating PHOENIX-3456
Updating PHOENIX-3457
Recording test results

Build failed in Jenkins: Phoenix | Master #1474

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1474/changes>

Changes:

[jamestaylor] PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when

[jamestaylor] PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be

------------------------------------------
[...truncated 801328 lines...]
  IndexIT.testSelectDistinctOnTableWithSecondaryImmutableIndex:422 » PhoenixIO c...
  IndexIT.testTableDescriptorPriority:1068 » PhoenixIO callTimeout=1200000, call...
  IndexIT.testUpsertAfterIndexDrop:720 » PhoenixIO callTimeout=1200000, callDura...
  LocalIndexIT.testLocalIndexRoundTrip:100 » PhoenixIO org.apache.phoenix.except...
  LocalIndexIT.testLocalIndexRoundTrip:100 » PhoenixIO org.apache.phoenix.except...
  LocalIndexIT.testLocalIndexScanWithInList:531 » PhoenixIO org.apache.phoenix.e...

Tests run: 1660, Failures: 2, Errors: 23, Skipped: 1
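
(Editor's note: the PhoenixIO callTimeout=1200000 errors above indicate the failing IndexIT/LocalIndexIT queries exceeded Phoenix's client-side query timeout. As a minimal sketch — generic Phoenix JDBC usage, not this test suite's actual setup — the timeout can be raised per connection via the standard phoenix.query.timeoutMs property; the "localhost" quorum below is a placeholder:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class PhoenixQueryTimeoutExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Client-side query timeout in milliseconds; the failures above hit 1200000 (20 min).
            props.setProperty("phoenix.query.timeoutMs", "1800000");
            // Placeholder ZooKeeper quorum, not the Jenkins mini-cluster used in this run.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
                // long-running statements would now get 30 minutes before timing out
            }
        }
    }
)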

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.AggregateQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.946 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CreateSchemaIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.735 sec - in org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.CreateTableIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.292 sec - in org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.CustomEntityDataIT
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 101.2 sec - in org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.DerivedTableIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.576 sec - in org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.DistinctCountIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.617 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.DropSchemaIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 137.104 sec - in org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.506 sec - in org.apache.phoenix.end2end.DistinctCountIT
Running org.apache.phoenix.end2end.FunkyNamesIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.93 sec - in org.apache.phoenix.end2end.ExtendedQueryExecIT
Running org.apache.phoenix.end2end.GroupByIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.296 sec - in org.apache.phoenix.end2end.DropSchemaIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.174 sec - in org.apache.phoenix.end2end.FunkyNamesIT
Running org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.NativeHBaseTypesIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.576 sec - in org.apache.phoenix.end2end.NativeHBaseTypesIT
Running org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 79, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 196.385 sec - in org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.675 sec - in org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 195.197 sec - in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.QueryIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.004 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.422 sec - in org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 105, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 140.832 sec - in org.apache.phoenix.end2end.GroupByIT
Running org.apache.phoenix.end2end.ReadIsolationLevelIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.691 sec - in org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 245, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 319.563 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.ScanQueryIT
Running org.apache.phoenix.end2end.SequenceIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.525 sec - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.399 sec - in org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.TopNIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.485 sec - in org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.TruncateFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.716 sec - in org.apache.phoenix.end2end.TruncateFunctionIT
Running org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 126, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 192.701 sec - in org.apache.phoenix.end2end.QueryIT
Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 121.245 sec - in org.apache.phoenix.end2end.ScanQueryIT
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.845 sec - in org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.455 sec - in org.apache.phoenix.end2end.salted.SaltedTableIT
Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
Running org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.914 sec - in org.apache.phoenix.rpc.UpdateCacheWithScnIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 248.104 sec - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 265.305 sec - in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.061 sec - in org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 237.556 sec - in org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 304.368 sec - in org.apache.phoenix.end2end.UpsertSelectIT

Results :

Tests run: 1358, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (HBaseManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.246 sec - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.501 sec - in org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.615 sec - in org.apache.phoenix.end2end.CountDistinctCompressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.323 sec - in org.apache.phoenix.end2end.ConnectionUtilIT
Running org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.QueryWithLimitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.171 sec - in org.apache.phoenix.end2end.QueryTimeoutIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.002 sec - in org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 140.915 sec - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.659 sec - in org.apache.phoenix.end2end.SpillableGroupByIT
Running org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.146 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 106.108 sec - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.65 sec - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 219.8 sec - in org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.282 sec - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.635 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.execute.PartialCommitIT
Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 465.135 sec - in org.apache.phoenix.end2end.IndexExtendedIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.225 sec - in org.apache.phoenix.execute.PartialCommitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.235 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.474 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.67 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Running org.apache.phoenix.monitoring.PhoenixMetricsIT
Running org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.053 sec <<< FAILURE! - in org.apache.phoenix.end2end.RenewLeaseIT
org.apache.phoenix.end2end.RenewLeaseIT  Time elapsed: 0.053 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: 
Failed after attempts=35, exceptions:
Sat Nov 05 05:59:10 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=26, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 05:59:12 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=28, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 05:59:14 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=30, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 05:59:17 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=32, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 05:59:20 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=34, waitTime=2007, operationTimeout=2000 expired.
Sat Nov 05 05:59:24 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=36, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 05:59:30 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=38, waitTime=2004, operationTimeout=2000 expired.
Sat Nov 05 05:59:42 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=40, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 05:59:54 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=42, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:00:06 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=44, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 06:00:18 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=46, waitTime=2002, operationTimeout=2000 expired.
Sat Nov 05 06:00:40 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=48, waitTime=2007, operationTimeout=2000 expired.
Sat Nov 05 06:01:02 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=50, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:01:24 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=52, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:01:46 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=54, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:02:09 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=56, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:02:31 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=58, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 06:02:53 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=60, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:03:15 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=62, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:03:37 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=64, waitTime=2003, operationTimeout=2000 expired.
Sat Nov 05 06:04:00 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=66, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:04:22 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=68, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:04:44 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=70, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:05:06 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=72, waitTime=2008, operationTimeout=2000 expired.
Sat Nov 05 06:05:28 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=74, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:05:50 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=76, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:06:12 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=78, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:06:34 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=80, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:06:57 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=82, waitTime=2006, operationTimeout=2000 expired.
Sat Nov 05 06:07:19 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=84, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:07:41 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=86, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:08:03 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=88, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:08:25 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=90, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:08:47 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=92, waitTime=2001, operationTimeout=2000 expired.
Sat Nov 05 06:09:09 UTC 2016, RpcRetryingCaller{globalStartTime=1478325548345, pause=100, retries=35}, java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=94, waitTime=2005, operationTimeout=2000 expired.

	at org.apache.phoenix.end2end.RenewLeaseIT.doSetup(RenewLeaseIT.java:57)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: 
Failed after attempts=35, exceptions:
[...same 35 CallTimeoutException retry entries as listed above...]

	at org.apache.phoenix.end2end.RenewLeaseIT.doSetup(RenewLeaseIT.java:57)
Caused by: java.io.IOException: Call to priapus.apache.org/67.195.81.188:52577 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=94, waitTime=2005, operationTimeout=2000 expired.
	at org.apache.phoenix.end2end.RenewLeaseIT.doSetup(RenewLeaseIT.java:57)
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=94, waitTime=2005, operationTimeout=2000 expired.
	at org.apache.phoenix.end2end.RenewLeaseIT.doSetup(RenewLeaseIT.java:57)
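
(Editor's note: the repeated "operationTimeout=2000 expired" entries mean the HBase client's 2-second operation timeout elapsed before the mini-cluster responded, exhausting all 35 retries during doSetup. As a minimal sketch — generic HBase client configuration, not RenewLeaseIT's actual setup — these are the two standard properties that produce or relax such a timeout:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class HBaseClientTimeoutExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Per-RPC timeout in milliseconds; 2000 matches the expired calls logged above.
            conf.setInt("hbase.rpc.timeout", 2000);
            // Overall per-operation timeout across all retries, also in milliseconds.
            conf.setInt("hbase.client.operation.timeout", 2000);
            try (Connection connection = ConnectionFactory.createConnection(conf)) {
                // client calls here would fail with CallTimeoutException if the
                // server takes longer than the configured timeouts to respond
            }
        }
    }
)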

Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 464.899 sec - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.415 sec - in org.apache.phoenix.rpc.PhoenixClientRpcIT
Running org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.116 sec - in org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.084 sec - in org.apache.phoenix.monitoring.PhoenixMetricsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 172.918 sec - in org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 267.807 sec - in org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Build timed out (after 200 minutes). Marking the build as failed.
Build was aborted
Archiving artifacts
Compressed 846.00 MB of artifacts by 45.6% relative to #1463
Updating PHOENIX-3449
Updating PHOENIX-3454
Recording test results