Posted to commits@phoenix.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/10/07 19:59:48 UTC

Build failed in Jenkins: Phoenix | Master #1433

See <https://builds.apache.org/job/Phoenix-master/1433/changes>

Changes:

[maryannxue] PHOENIX-3363 Join-related IT had problematic usage of generating new

------------------------------------------
[...truncated 740727 lines...]
2016-10-07 19:32:17,902 INFO  [RpcServer.responder] org.apache.hadoop.hbase.ipc.RpcServer$Responder(1003): RpcServer.responder: stopped
2016-10-07 19:32:17,902 INFO  [RpcServer.responder] org.apache.hadoop.hbase.ipc.RpcServer$Responder(906): RpcServer.responder: stopping
2016-10-07 19:32:17,954 DEBUG [M:0;asf910:53423] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(188): Node /hbase/rs/asf910.gq1.ygridcore.net,53423,1475868083953 already deleted, retry=false
2016-10-07 19:32:17,989 INFO  [M:0;asf910:53423] org.apache.hadoop.hbase.regionserver.HRegionServer(1104): stopping server asf910.gq1.ygridcore.net,53423,1475868083953; zookeeper connection closed.
2016-10-07 19:32:17,989 INFO  [M:0;asf910:53423] org.apache.hadoop.hbase.regionserver.HRegionServer(1107): M:0;asf910:53423 exiting
2016-10-07 19:32:18,106 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3472b89a] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-10-07 19:32:18,212 INFO  [main] org.apache.hadoop.hbase.util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-10-07 19:32:18,250 INFO  [main] org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster(316): Shutdown MiniZK cluster with all ZK servers
2016-10-07 19:32:18,250 WARN  [main] org.apache.hadoop.hdfs.server.datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-10-07 19:32:18,251 WARN  [ResponseProcessor for block BP-516771569-67.195.81.154-1475868079330:blk_1073741830_1006] org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor(824): DFSOutputStream ResponseProcessor exception  for block BP-516771569-67.195.81.154-1475868079330:blk_1073741830_1006
java.io.EOFException: Premature EOF: no length prefix available
	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:734)
2016-10-07 19:32:18,347 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=37959-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x1a893a23-0x157a097c4e20059, quorum=localhost:53119, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-10-07 19:32:18,348 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=37959-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(691): hconnection-0x1a893a23-0x157a097c4e20059, quorum=localhost:53119, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring
2016-10-07 19:32:18,373 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-66243862_1 at /127.0.0.1:37087 [Receiving block BP-516771569-67.195.81.154-1475868079330:blk_1073741830_1006]] org.apache.hadoop.hdfs.server.datanode.DataXceiver(278): 127.0.0.1:46208:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:37087 dst: /127.0.0.1:46208
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
	at java.lang.Thread.run(Thread.java:745)
2016-10-07 19:32:18,377 WARN  [DataNode: [[[DISK]https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/test-data/c608468b-e0d9-4907-b02d-7ab314d84a91/dfscluster_70e438da-7bba-4bbc-9915-3837888b8998/dfs/data/data1/, [DISK]https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/test-data/c608468b-e0d9-4907-b02d-7ab314d84a91/dfscluster_70e438da-7bba-4bbc-9915-3837888b8998/dfs/data/data2/]]  heartbeating to localhost/127.0.0.1:39989] org.apache.hadoop.hdfs.server.datanode.BPServiceActor(704): BPOfferService for Block pool BP-516771569-67.195.81.154-1475868079330 (Datanode Uuid 1a5be076-dd8b-4746-bf78-fd9541e07fad) service to localhost/127.0.0.1:39989 interrupted
2016-10-07 19:32:18,377 WARN  [DataNode: [[[DISK]https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/test-data/c608468b-e0d9-4907-b02d-7ab314d84a91/dfscluster_70e438da-7bba-4bbc-9915-3837888b8998/dfs/data/data1/, [DISK]https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/test-data/c608468b-e0d9-4907-b02d-7ab314d84a91/dfscluster_70e438da-7bba-4bbc-9915-3837888b8998/dfs/data/data2/]]  heartbeating to localhost/127.0.0.1:39989] org.apache.hadoop.hdfs.server.datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-516771569-67.195.81.154-1475868079330 (Datanode Uuid 1a5be076-dd8b-4746-bf78-fd9541e07fad) service to localhost/127.0.0.1:39989
2016-10-07 19:32:18,865 INFO  [main] org.apache.hadoop.hbase.HBaseTestingUtility(1103): Minicluster is down
2016-10-07 19:32:18,872 INFO  [Thread-3369] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-10-07 19:32:18,872 INFO  [Thread-162] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now

Results :

Tests in error: 
  AlterTableIT.testAddNewColumnFamilyProperties:1620 » PhoenixIO org.apache.phoe...
  LocalIndexIT.testLocalIndexScanJoinColumnsFromDataTable:231 » PhoenixIO org.ap...

Tests run: 1553, Failures: 0, Errors: 2, Skipped: 1

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.ArrayIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.351 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CreateSchemaIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.347 sec - in org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.CreateTableIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.885 sec - in org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.CustomEntityDataIT
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.857 sec - in org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.DerivedTableIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.316 sec - in org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.DistinctCountIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.572 sec - in org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.DropSchemaIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.119 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.758 sec - in org.apache.phoenix.end2end.DropSchemaIT
Running org.apache.phoenix.end2end.FunkyNamesIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.636 sec - in org.apache.phoenix.end2end.DistinctCountIT
Running org.apache.phoenix.end2end.GroupByIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.176 sec - in org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.28 sec - in org.apache.phoenix.end2end.FunkyNamesIT
Running org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.NativeHBaseTypesIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.29 sec - in org.apache.phoenix.end2end.NativeHBaseTypesIT
Running org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 79, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 165.164 sec - in org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.056 sec - in org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.58 sec - in org.apache.phoenix.end2end.NotQueryIT
Tests run: 245, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 199.25 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Tests run: 105, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.891 sec - in org.apache.phoenix.end2end.GroupByIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.319 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 180.393 sec - in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Running org.apache.phoenix.end2end.QueryIT
Running org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Running org.apache.phoenix.end2end.ScanQueryIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.519 sec - in org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.SequenceIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.5 sec - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.652 sec - in org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.TopNIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.642 sec - in org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.TruncateFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.349 sec - in org.apache.phoenix.end2end.TruncateFunctionIT
Running org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 126, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.835 sec - in org.apache.phoenix.end2end.QueryIT
Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 78.947 sec - in org.apache.phoenix.end2end.ScanQueryIT
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.122 sec - in org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.099 sec - in org.apache.phoenix.end2end.salted.SaltedTableIT
Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
Running org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.84 sec - in org.apache.phoenix.rpc.UpdateCacheWithScnIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 222.187 sec - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.448 sec - in org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 237.4 sec - in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 270.096 sec - in org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 228.205 sec - in org.apache.phoenix.end2end.UpsertValuesIT

Results :

Tests run: 1356, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (HBaseManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.804 sec - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.796 sec - in org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.2 sec - in org.apache.phoenix.end2end.CountDistinctCompressionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.09 sec - in org.apache.phoenix.end2end.ContextClassloaderIT
Running org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.QueryWithLimitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.097 sec - in org.apache.phoenix.end2end.QueryTimeoutIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.311 sec - in org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.908 sec - in org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.782 sec - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
Running org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.059 sec - in org.apache.phoenix.end2end.RenewLeaseIT
Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.253 sec - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.768 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.022 sec - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.075 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.94 sec - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Running org.apache.phoenix.execute.PartialCommitIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.618 sec - in org.apache.phoenix.execute.PartialCommitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.158 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.107 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 206.874 sec - in org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 394.825 sec - in org.apache.phoenix.end2end.IndexExtendedIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.086 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.monitoring.PhoenixMetricsIT
Running org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.321 sec - in org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 377.921 sec - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.214 sec - in org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 177.325 sec - in org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 137.478 sec - in org.apache.phoenix.monitoring.PhoenixMetricsIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 258.759 sec - in org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 277.042 sec - in org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT

Results :

Tests run: 211, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) @ phoenix-core ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  6.149 s]
[INFO] Phoenix Core ....................................... FAILURE [43:34 min]
[INFO] Phoenix - Flume .................................... SKIPPED
[INFO] Phoenix - Pig ...................................... SKIPPED
[INFO] Phoenix Query Server Client ........................ SKIPPED
[INFO] Phoenix Query Server ............................... SKIPPED
[INFO] Phoenix - Pherf .................................... SKIPPED
[INFO] Phoenix - Spark .................................... SKIPPED
[INFO] Phoenix - Hive ..................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 43:44 min
[INFO] Finished at: 2016-10-07T19:55:17+00:00
[INFO] Final Memory: 76M/1013M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) on project phoenix-core: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/failsafe-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-core
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 776.71 MB of artifacts by 50.0% relative to #1432
Updating PHOENIX-3363
Recording test results

Jenkins build is back to normal : Phoenix | Master #1436

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1436/changes>


Build failed in Jenkins: Phoenix | Master #1435

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1435/changes>

Changes:

[elserj] PHOENIX-3240 Create phoenix-$VERSION-pig shaded jar

------------------------------------------
[...truncated 751595 lines...]
2016-10-11 21:13:32,395 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=59736] org.apache.hadoop.hbase.ipc.CallRunner(112): B.defaultRpcServer.handler=4,queue=0,port=59736: callId: 6423 service: ClientService methodName: Scan size: 26 connection: 10.20.2.231:58143
org.apache.hadoop.hbase.NotServingRegionException: Region TBL_T000733,,1476220377932.b9f5fbee5ba42ba1b75ebb35e09cc13b. is not online on jenkins-ubuntu2.apache.org,59736,1476220168181
	at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2911)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2888)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2379)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)
2016-10-11 21:13:32,400 WARN  [B.defaultRpcServer.handler=1,queue=0,port=59736] org.apache.hadoop.hbase.regionserver.RSRpcServices(2688): 1595 encountered Region TBL_T000729,,1476220377930.a395ee738255857f2e65e1d5bdf0f345. is not online on jenkins-ubuntu2.apache.org,59736,1476220168181, closing ...
2016-10-11 21:13:32,401 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=59736] org.apache.hadoop.hbase.ipc.CallRunner(112): B.defaultRpcServer.handler=1,queue=0,port=59736: callId: 6429 service: ClientService methodName: Scan size: 26 connection: 10.20.2.231:58143
org.apache.hadoop.hbase.NotServingRegionException: Region TBL_T000729,,1476220377930.a395ee738255857f2e65e1d5bdf0f345. is not online on jenkins-ubuntu2.apache.org,59736,1476220168181
	at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2911)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2888)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2379)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)
Running [MutableIndexIT_localIndex=true,transactional=true]
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 221.574 sec - in [MutableIndexIT_localIndex=true,transactional=true]
2016-10-11 21:13:32,430 INFO  [Thread-162] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-10-11 21:13:32,431 INFO  [Thread-7983] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now

Results :

Failed tests: 
  IndexIT.testDeleteFromAllPKColumnIndex:191 expected:<3> but was:<0>
  IndexIT.testMultipleUpdatesAcrossRegions:817
  IndexIT.testUpsertAfterIndexDrop:738
Tests in error: 
  IndexIT.testCreateIndexAfterUpsertStarted:230->testCreateIndexAfterUpsertStarted:309 » SQL

Tests run: 1548, Failures: 3, Errors: 1, Skipped: 1

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.676 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CreateSchemaIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.642 sec - in org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.CreateTableIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.641 sec - in org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.CustomEntityDataIT
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.106 sec - in org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.DerivedTableIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.713 sec - in org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.DistinctCountIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.014 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.DropSchemaIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 78.14 sec - in org.apache.phoenix.end2end.AggregateQueryIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.513 sec - in org.apache.phoenix.end2end.DistinctCountIT
Running org.apache.phoenix.end2end.ExtendedQueryExecIT
Running org.apache.phoenix.end2end.FunkyNamesIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.985 sec - in org.apache.phoenix.end2end.DropSchemaIT
Running org.apache.phoenix.end2end.GroupByIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.538 sec - in org.apache.phoenix.end2end.FunkyNamesIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.863 sec - in org.apache.phoenix.end2end.ExtendedQueryExecIT
Running org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.NativeHBaseTypesIT
Tests run: 79, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.969 sec - in org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.346 sec - in org.apache.phoenix.end2end.NativeHBaseTypesIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.779 sec - in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.675 sec - in org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.165 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Running org.apache.phoenix.end2end.QueryIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.568 sec - in org.apache.phoenix.end2end.NotQueryIT
Tests run: 245, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 155.858 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Tests run: 105, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.712 sec - in org.apache.phoenix.end2end.GroupByIT
Running org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.06 sec - in org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.ScanQueryIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Tests run: 126, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.798 sec - in org.apache.phoenix.end2end.QueryIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.176 sec - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.TopNIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.576 sec - in org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.TruncateFunctionIT
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.938 sec - in org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.846 sec - in org.apache.phoenix.end2end.TruncateFunctionIT
Running org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.559 sec - in org.apache.phoenix.end2end.ScanQueryIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.454 sec - in org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.16 sec - in org.apache.phoenix.end2end.salted.SaltedTableIT
Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.06 sec - in org.apache.phoenix.rpc.UpdateCacheWithScnIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.751 sec - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.402 sec - in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.236 sec - in org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.694 sec - in org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.329 sec - in org.apache.phoenix.end2end.UpsertSelectIT

Results :

Tests run: 1356, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (HBaseManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.668 sec - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Running org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.905 sec - in org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.358 sec - in org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.001 sec - in org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.707 sec - in org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.753 sec - in org.apache.phoenix.end2end.QueryTimeoutIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.519 sec - in org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.015 sec - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.133 sec - in org.apache.phoenix.end2end.SpillableGroupByIT
Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.05 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (sharedRuntime.cpp:814), pid=6783, tid=139743948322560
#  guarantee(cb->is_adapter_blob() || cb->is_method_handles_adapter_blob()) failed: exception happened outside interpreter, nmethods and vtable stubs (1)
#
# JRE version: Java(TM) SE Runtime Environment (7.0_80-b15) (build 1.7.0_80-b15)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.80-b11 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# <https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/hs_err_pid6783.log>
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#
Aborted
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.03 sec - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.142 sec - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.286 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.execute.PartialCommitIT
Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.735 sec - in org.apache.phoenix.execute.PartialCommitIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.814 sec - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.542 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.14 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 156.298 sec - in org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.894 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 306.355 sec - in org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.monitoring.PhoenixMetricsIT
Running org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.513 sec - in org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.973 sec - in org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 119.656 sec - in org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Running org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.795 sec - in org.apache.phoenix.monitoring.PhoenixMetricsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.402 sec - in org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 284.566 sec - in org.apache.phoenix.end2end.index.MutableIndexFailureIT

Results :

Tests run: 209, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) @ phoenix-core ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  2.277 s]
[INFO] Phoenix Core ....................................... FAILURE [29:34 min]
[INFO] Phoenix - Flume .................................... SKIPPED
[INFO] Phoenix - Pig ...................................... SKIPPED
[INFO] Phoenix Query Server Client ........................ SKIPPED
[INFO] Phoenix Query Server ............................... SKIPPED
[INFO] Phoenix - Pherf .................................... SKIPPED
[INFO] Phoenix - Spark .................................... SKIPPED
[INFO] Phoenix - Hive ..................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 29:39 min
[INFO] Finished at: 2016-10-11T21:28:45+00:00
[INFO] Final Memory: 78M/1213M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) on project phoenix-core: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-core
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Updating PHOENIX-3240
Recording test results

Build failed in Jenkins: Phoenix | Master #1434

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1434/changes>

Changes:

[jamestaylor] PHOENIX-2675 Allow stats to be configured on a table-by-table basis

[jamestaylor] PHOENIX-3361 Collect stats correct for local indexes

------------------------------------------
[...truncated 659183 lines...]

	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportWithIndex(CsvBulkLoadToolIT.java:219)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportWithIndex(CsvBulkLoadToolIT.java:219)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportWithIndex(CsvBulkLoadToolIT.java:219)

testImportWithIndex(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  Time elapsed: 530.8 sec  <<< ERROR!
java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
Caused by: java.io.IOException: hconnection-0x3100dbfb closed

testImportOneIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  Time elapsed: 530.942 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: 
Failed after attempts=35, exceptions:
Mon Oct 10 22:49:35 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:49:35 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:49:35 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:49:36 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:49:37 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:49:39 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:49:43 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:49:53 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:50:03 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:50:13 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:50:23 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:50:43 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:51:03 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:51:23 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:51:43 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:52:03 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:52:24 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:52:44 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:53:04 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:53:24 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:53:44 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:54:04 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:54:24 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:54:44 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:55:04 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:55:25 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:55:45 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:56:05 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:56:25 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:56:45 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:57:05 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:57:25 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:57:45 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:58:05 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
Mon Oct 10 22:58:26 UTC 2016, RpcRetryingCaller{globalStartTime=1476139775180, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master

	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:304)
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:292)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: 
Failed after attempts=35, exceptions:
[...same 35 retry attempts as listed above, truncated...]

	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:304)
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:292)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:304)
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:292)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while trying to get master
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:304)
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:292)

testImportOneIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  Time elapsed: 530.942 sec  <<< ERROR!
java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
Caused by: java.io.IOException: hconnection-0x3100dbfb closed

org.apache.phoenix.end2end.CsvBulkLoadToolIT  Time elapsed: 543.042 sec  <<< FAILURE!
java.lang.AssertionError


Results :

Failed tests: 
  CsvBulkLoadToolIT>BaseOwnClusterIT.doTeardown:30->BaseTest.tearDownMiniCluster:535->BaseTest.destroyDriver:518
  MutableIndexFailureIT>BaseOwnClusterIT.doTeardown:30->BaseTest.tearDownMiniCluster:535->BaseTest.destroyDriver:518
Tests in error: 
  WALReplayWithIndexWritesAndCompressedWALIT.testReplayEditsWrittenViaHRegion:239 » TimeoutIO
  ConnectionUtilIT.testInputAndOutputConnections:60 » PhoenixIO Failed after att...
org.apache.phoenix.end2end.CountDistinctCompressionIT.testDistinctCountOnColumn(org.apache.phoenix.end2end.CountDistinctCompressionIT)
  Run 1: CountDistinctCompressionIT.testDistinctCountOnColumn:64 » OutOfMemory unable t...
  Run 2: CountDistinctCompressionIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:904 » OutOfMemory

org.apache.phoenix.end2end.CsvBulkLoadToolIT.testAlreadyExistsOutputPath(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testAlreadyExistsOutputPath:381
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.CsvBulkLoadToolIT.testBasicImport(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testBasicImport:71 » PhoenixIO Failed after attempts=35, exc...
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.CsvBulkLoadToolIT.testFullOptionImport(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testFullOptionImport:141 » PhoenixIO Failed after attempts=3...
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testImportOneIndexTable:292->testImportOneIndexTable:304 » PhoenixIO
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testImportOneLocalIndexTable:297->testImportOneIndexTable:304 » PhoenixIO
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportWithIndex(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testImportWithIndex:219 » PhoenixIO Failed after attempts=35...
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportWithLocalIndex(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testImportWithLocalIndex:254 » PhoenixIO Failed after attemp...
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportWithTabs(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testImportWithTabs:109 » PhoenixIO Failed after attempts=35,...
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

  CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL
org.apache.phoenix.end2end.CsvBulkLoadToolIT.testMultipleInputFiles(org.apache.phoenix.end2end.CsvBulkLoadToolIT)
  Run 1: CsvBulkLoadToolIT.testMultipleInputFiles:179 » PhoenixIO Failed after attempts...
  Run 2: CsvBulkLoadToolIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.RenewLeaseIT.org.apache.phoenix.end2end.RenewLeaseIT
  Run 1: RenewLeaseIT.doSetup:57->BaseTest.setUpTestDriver:561->BaseTest.checkClusterInitialized:483->BaseTest.setUpTestCluster:509->BaseTest.initMiniCluster:591 » Runtime
  Run 2: RenewLeaseIT>BaseOwnClusterIT.doTeardown:30->BaseTest.tearDownMiniCluster:545 » OutOfMemory

  UserDefinedFunctionsIT.testFunctionalIndexesWithUDFFunction:768 » PhoenixIO un...
org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex[MutableIndexFailureIT_transactional=false,localIndex=false,isNamespaceMapped=false](org.apache.phoenix.end2end.index.MutableIndexFailureIT)
  Run 1: MutableIndexFailureIT.testWriteFailureDisablesIndex:127->helpTestWriteFailureDisablesIndex:143 » PhoenixIO
  Run 2: MutableIndexFailureIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex[MutableIndexFailureIT_transactional=false,localIndex=false,isNamespaceMapped=true](org.apache.phoenix.end2end.index.MutableIndexFailureIT)
  Run 1: MutableIndexFailureIT.testWriteFailureDisablesIndex:127->helpTestWriteFailureDisablesIndex:141 » PhoenixIO
  Run 2: MutableIndexFailureIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex[MutableIndexFailureIT_transactional=false,localIndex=true,isNamespaceMapped=false](org.apache.phoenix.end2end.index.MutableIndexFailureIT)
  Run 1: MutableIndexFailureIT.testWriteFailureDisablesIndex:127->helpTestWriteFailureDisablesIndex:143 » PhoenixIO
  Run 2: MutableIndexFailureIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex[MutableIndexFailureIT_transactional=false,localIndex=true,isNamespaceMapped=true](org.apache.phoenix.end2end.index.MutableIndexFailureIT)
  Run 1: MutableIndexFailureIT.testWriteFailureDisablesIndex:127->helpTestWriteFailureDisablesIndex:141 » PhoenixIO
  Run 2: MutableIndexFailureIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex[MutableIndexFailureIT_transactional=true,localIndex=false,isNamespaceMapped=false](org.apache.phoenix.end2end.index.MutableIndexFailureIT)
  Run 1: MutableIndexFailureIT.testWriteFailureDisablesIndex:127->helpTestWriteFailureDisablesIndex:143 » PhoenixIO
  Run 2: MutableIndexFailureIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex[MutableIndexFailureIT_transactional=true,localIndex=false,isNamespaceMapped=true](org.apache.phoenix.end2end.index.MutableIndexFailureIT)
  Run 1: MutableIndexFailureIT.testWriteFailureDisablesIndex:127->helpTestWriteFailureDisablesIndex:141 » PhoenixIO
  Run 2: MutableIndexFailureIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex[MutableIndexFailureIT_transactional=true,localIndex=true,isNamespaceMapped=false](org.apache.phoenix.end2end.index.MutableIndexFailureIT)
  Run 1: MutableIndexFailureIT.testWriteFailureDisablesIndex:127->helpTestWriteFailureDisablesIndex:143 » PhoenixIO
  Run 2: MutableIndexFailureIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex[MutableIndexFailureIT_transactional=true,localIndex=true,isNamespaceMapped=true](org.apache.phoenix.end2end.index.MutableIndexFailureIT)
  Run 1: MutableIndexFailureIT.testWriteFailureDisablesIndex:127->helpTestWriteFailureDisablesIndex:141 » PhoenixIO
  Run 2: MutableIndexFailureIT>BaseOwnClusterIT.cleanUpAfterTest:35->BaseTest.deletePriorMetaData:857->BaseTest.deletePriorTables:865->BaseTest.deletePriorTables:876->BaseTest.deletePriorTables:901 » SQL

  MutableIndexReplicationIT.setUpBeforeClass:108->setupConfigsAndStartCluster:171 » OutOfMemory
  EndtoEndIndexingWithCompressionIT>EndToEndCoveredIndexingIT.testSimpleTimestampedUpdates:152->EndToEndCoveredIndexingIT.createSetupTables:888 » IO
  FailWithoutRetriesIT.setupCluster:84 » IO Shutting down
  RoundRobinResultIteratorWithStatsIT.doSetup:58->BaseTest.setUpTestDriver:557->BaseTest.setUpTestDriver:561->BaseTest.checkClusterInitialized:483->BaseTest.setUpTestCluster:509->BaseTest.initMiniCluster:591 » Runtime
  ScannerLeaseRenewalIT.setUp:90 » PhoenixIO java.lang.RuntimeException: java.la...
org.apache.phoenix.rpc.PhoenixClientRpcIT.org.apache.phoenix.rpc.PhoenixClientRpcIT
  Run 1: PhoenixClientRpcIT.doSetup:47->BaseTest.setUpTestDriver:561->BaseTest.checkClusterInitialized:483->BaseTest.setUpTestCluster:509->BaseTest.initMiniCluster:591 » Runtime
  Run 2: PhoenixClientRpcIT.cleanUpAfterTestSuite:53->BaseTest.tearDownMiniCluster:545 » OutOfMemory

org.apache.phoenix.rpc.PhoenixServerRpcIT.org.apache.phoenix.rpc.PhoenixServerRpcIT
  Run 1: PhoenixServerRpcIT.doSetup:71->BaseTest.setUpTestDriver:561->BaseTest.checkClusterInitialized:483->BaseTest.setUpTestCluster:509->BaseTest.initMiniCluster:591 » Runtime
  Run 2: PhoenixServerRpcIT>BaseOwnClusterIT.doTeardown:30->BaseTest.tearDownMiniCluster:545 » OutOfMemory


Tests run: 116, Failures: 2, Errors: 30, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) @ phoenix-core ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  3.983 s]
[INFO] Phoenix Core ....................................... FAILURE [  02:12 h]
[INFO] Phoenix - Flume .................................... SKIPPED
[INFO] Phoenix - Pig ...................................... SKIPPED
[INFO] Phoenix Query Server Client ........................ SKIPPED
[INFO] Phoenix Query Server ............................... SKIPPED
[INFO] Phoenix - Pherf .................................... SKIPPED
[INFO] Phoenix - Spark .................................... SKIPPED
[INFO] Phoenix - Hive ..................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:13 h
[INFO] Finished at: 2016-10-10T22:59:10+00:00
[INFO] Final Memory: 60M/730M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) on project phoenix-core: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-core
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Updating PHOENIX-2675
Updating PHOENIX-3361
Recording test results