Posted to commits@phoenix.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/10/30 01:25:40 UTC

Build failed in Jenkins: Phoenix | Master #1461

See <https://builds.apache.org/job/Phoenix-master/1461/changes>

Changes:

[jamestaylor] PHOENIX-3424 Backward compatibility failure: 4.8 -> 4.9 upgrade

------------------------------------------
[...truncated 775960 lines...]
2016-10-30 00:51:07,583 INFO  [main-EventThread] org.apache.hadoop.hbase.zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted, processing expiration [pietas.apache.org,54158,1477788349446]
2016-10-30 00:51:07,584 INFO  [main-EventThread] org.apache.hadoop.hbase.master.ServerManager(612): Cluster shutdown set; pietas.apache.org,54158,1477788349446 expired; onlineServers=0
2016-10-30 00:51:07,584 DEBUG [main-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): regionserver:54158-0x158130cb1e00001, quorum=localhost:52122, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/pietas.apache.org,54158,1477788349446
2016-10-30 00:51:07,584 DEBUG [main-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): regionserver:54158-0x158130cb1e00001, quorum=localhost:52122, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2016-10-30 00:51:07,584 DEBUG [main-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): master:42311-0x158130cb1e00000, quorum=localhost:52122, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2016-10-30 00:51:07,584 DEBUG [M:0;pietas:42311] org.apache.hadoop.hbase.master.HMaster(1126): Stopping service threads
2016-10-30 00:51:07,585 INFO  [RS:0;pietas:54158] org.apache.hadoop.hbase.regionserver.HRegionServer(1104): stopping server pietas.apache.org,54158,1477788349446; zookeeper connection closed.
2016-10-30 00:51:07,585 INFO  [RS:0;pietas:54158] org.apache.hadoop.hbase.regionserver.HRegionServer(1107): RS:0;pietas:54158 exiting
2016-10-30 00:51:07,585 INFO  [M:0;pietas:42311] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158130cb1e00004
2016-10-30 00:51:07,586 DEBUG [M:0;pietas:42311] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 00:51:07,586 DEBUG [main-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): master:42311-0x158130cb1e00000, quorum=localhost:52122, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2016-10-30 00:51:07,587 DEBUG [main-EventThread] org.apache.hadoop.hbase.zookeeper.ZKUtil(370): master:42311-0x158130cb1e00000, quorum=localhost:52122, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2016-10-30 00:51:07,587 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1f7a5e46] org.apache.hadoop.hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(191): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1f7a5e46
2016-10-30 00:51:07,588 INFO  [main] org.apache.hadoop.hbase.util.JVMClusterUtil(317): Shutdown of 1 master(s) and 1 regionserver(s) complete
2016-10-30 00:51:07,597 INFO  [M:0;pietas:42311] org.apache.hadoop.hbase.ChoreService(323): Chore service for: pietas.apache.org,42311,1477788349359_splitLogManager_ had [] on shutdown
2016-10-30 00:51:07,598 INFO  [M:0;pietas:42311] org.apache.hadoop.hbase.procedure.flush.MasterFlushTableProcedureManager(79): stop: server shutting down.
2016-10-30 00:51:07,598 INFO  [M:0;pietas:42311] org.apache.hadoop.hbase.ipc.RpcServer(2277): Stopping server on 42311
2016-10-30 00:51:07,599 INFO  [RpcServer.listener,port=42311] org.apache.hadoop.hbase.ipc.RpcServer$Listener(761): RpcServer.listener,port=42311: stopping
2016-10-30 00:51:07,599 INFO  [RpcServer.responder] org.apache.hadoop.hbase.ipc.RpcServer$Responder(1003): RpcServer.responder: stopped
2016-10-30 00:51:07,599 INFO  [RpcServer.responder] org.apache.hadoop.hbase.ipc.RpcServer$Responder(906): RpcServer.responder: stopping
2016-10-30 00:51:07,601 DEBUG [M:0;pietas:42311] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(188): Node /hbase/rs/pietas.apache.org,42311,1477788349359 already deleted, retry=false
2016-10-30 00:51:07,601 INFO  [M:0;pietas:42311] org.apache.hadoop.hbase.regionserver.HRegionServer(1104): stopping server pietas.apache.org,42311,1477788349359; zookeeper connection closed.
2016-10-30 00:51:07,601 INFO  [M:0;pietas:42311] org.apache.hadoop.hbase.regionserver.HRegionServer(1107): M:0;pietas:42311 exiting
2016-10-30 00:51:07,606 INFO  [main] org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster(316): Shutdown MiniZK cluster with all ZK servers
2016-10-30 00:51:07,606 WARN  [main] org.apache.hadoop.hdfs.server.datanode.DirectoryScanner(378): DirectoryScanner: shutdown has been called
2016-10-30 00:51:07,704 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=54158-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x232bb8ba-0x158130cb1e00013, quorum=localhost:52122, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-10-30 00:51:07,704 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=54158-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(691): hconnection-0x232bb8ba-0x158130cb1e00013, quorum=localhost:52122, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring
2016-10-30 00:51:07,704 DEBUG [pietas:42311.activeMasterManager-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): replicationLogCleaner-0x158130cb1e00005, quorum=localhost:52122, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null
2016-10-30 00:51:07,704 DEBUG [pietas:42311.activeMasterManager-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(691): replicationLogCleaner-0x158130cb1e00005, quorum=localhost:52122, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring
2016-10-30 00:51:07,755 WARN  [DataNode: [[[DISK]<https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/test-data/76bc23a5-9da8-4f73-a14d-c9f981f311d5/dfscluster_da06be4f-c420-4ba8-abdd-a2674bdcc1d8/dfs/data/data1/>, [DISK]<https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/test-data/76bc23a5-9da8-4f73-a14d-c9f981f311d5/dfscluster_da06be4f-c420-4ba8-abdd-a2674bdcc1d8/dfs/data/data2/>]]  heartbeating to localhost/127.0.0.1:32864] org.apache.hadoop.hdfs.server.datanode.BPServiceActor(704): BPOfferService for Block pool BP-1921219741-67.195.81.190-1477788348118 (Datanode Uuid 171f8b3b-83c7-4a94-b336-fad77a40895c) service to localhost/127.0.0.1:32864 interrupted
2016-10-30 00:51:07,755 WARN  [DataNode: [[[DISK]<https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/test-data/76bc23a5-9da8-4f73-a14d-c9f981f311d5/dfscluster_da06be4f-c420-4ba8-abdd-a2674bdcc1d8/dfs/data/data1/>, [DISK]<https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/test-data/76bc23a5-9da8-4f73-a14d-c9f981f311d5/dfscluster_da06be4f-c420-4ba8-abdd-a2674bdcc1d8/dfs/data/data2/>]]  heartbeating to localhost/127.0.0.1:32864] org.apache.hadoop.hdfs.server.datanode.BPServiceActor(834): Ending block pool service for: Block pool BP-1921219741-67.195.81.190-1477788348118 (Datanode Uuid 171f8b3b-83c7-4a94-b336-fad77a40895c) service to localhost/127.0.0.1:32864
2016-10-30 00:51:07,995 INFO  [main] org.apache.hadoop.hbase.HBaseTestingUtility(1103): Minicluster is down
2016-10-30 00:51:08,000 INFO  [Thread-2] org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@5cf0c115
2016-10-30 00:51:08,000 INFO  [Thread-2] org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-10-30 00:51:08,001 INFO  [Thread-2] org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@5cf0c115
2016-10-30 00:51:08,001 INFO  [Thread-2] org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread(133): Shutdown hook finished.
2016-10-30 00:51:08,001 INFO  [Thread-2002] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-10-30 00:51:08,001 INFO  [Thread-2] org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread(111): Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@5cf0c115
2016-10-30 00:51:08,001 INFO  [Thread-2] org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread(120): Starting fs shutdown hook thread.

Results :

Failed tests: 
  UpgradeIT.testAcquiringAndReleasingUpgradeMutex:709
  UpgradeIT.testConcurrentUpgradeThrowsUprgadeInProgressException:739 One of the threads should have acquired the mutex
Tests in error: 
  AlterTableIT.testAddNewColumnFamilyProperties:1620 » PhoenixIO org.apache.phoe...

Tests run: 1655, Failures: 2, Errors: 1, Skipped: 1

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.CastAndCoerceIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.005 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CreateSchemaIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.329 sec - in org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.CreateTableIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.326 sec - in org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.CustomEntityDataIT
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.26 sec - in org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.DerivedTableIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.524 sec - in org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.DistinctCountIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.508 sec - in org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.DropSchemaIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.928 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.8 sec - in org.apache.phoenix.end2end.DistinctCountIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.998 sec - in org.apache.phoenix.end2end.DropSchemaIT
Running org.apache.phoenix.end2end.FunkyNamesIT
Running org.apache.phoenix.end2end.GroupByIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.77 sec - in org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.415 sec - in org.apache.phoenix.end2end.FunkyNamesIT
Running org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.NativeHBaseTypesIT
Tests run: 79, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.883 sec - in org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.PointInTimeQueryIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.705 sec - in org.apache.phoenix.end2end.NativeHBaseTypesIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.043 sec - in org.apache.phoenix.end2end.PointInTimeQueryIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.839 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.038 sec - in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.ReadIsolationLevelIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.763 sec - in org.apache.phoenix.end2end.NotQueryIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.271 sec - in org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 105, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.683 sec - in org.apache.phoenix.end2end.GroupByIT
Running org.apache.phoenix.end2end.QueryIT
Tests run: 245, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 145.792 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.ScanQueryIT
Running org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.867 sec - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Tests run: 126, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.325 sec - in org.apache.phoenix.end2end.QueryIT
Tests run: 119, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.628 sec - in org.apache.phoenix.end2end.ScanQueryIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.825 sec - in org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.UpsertSelectIT
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.875 sec - in org.apache.phoenix.end2end.SequenceIT
Running org.apache.phoenix.end2end.UpsertValuesIT
Running org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.TruncateFunctionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.146 sec - in org.apache.phoenix.end2end.TopNIT
Running org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.804 sec - in org.apache.phoenix.end2end.TruncateFunctionIT
Running org.apache.phoenix.end2end.salted.SaltedTableIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.719 sec - in org.apache.phoenix.end2end.salted.SaltedTableIT
Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.125 sec - in org.apache.phoenix.rpc.UpdateCacheWithScnIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 132.691 sec - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.461 sec - in org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.61 sec - in org.apache.phoenix.end2end.VariableLengthPKIT
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.207 sec - in org.apache.phoenix.end2end.UpsertValuesIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 134.22 sec - in org.apache.phoenix.end2end.UpsertSelectIT

Results :

Tests run: 1358, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (HBaseManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.644 sec - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
Running org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.289 sec - in org.apache.phoenix.end2end.ConnectionUtilIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.366 sec - in org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.204 sec - in org.apache.phoenix.end2end.CountDistinctCompressionIT
Running org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.end2end.QueryTimeoutIT
Running org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.339 sec - in org.apache.phoenix.end2end.RenewLeaseIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.955 sec - in org.apache.phoenix.end2end.QueryTimeoutIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.814 sec - in org.apache.phoenix.end2end.QueryWithLimitIT
Running org.apache.phoenix.end2end.SpillableGroupByIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.933 sec - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.002 sec - in org.apache.phoenix.end2end.SpillableGroupByIT
Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.end2end.index.ImmutableIndexIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Running org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.573 sec - in org.apache.phoenix.end2end.index.ReadOnlyIndexFailureIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.635 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.099 sec - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.328 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.execute.PartialCommitIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.645 sec - in org.apache.phoenix.execute.PartialCommitIT
Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.718 sec - in org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.111 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.383 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 162.692 sec - in org.apache.phoenix.end2end.index.ImmutableIndexIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.664 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT
Running org.apache.phoenix.iterate.ScannerLeaseRenewalIT
Running org.apache.phoenix.monitoring.PhoenixMetricsIT
Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 281.582 sec - in org.apache.phoenix.end2end.IndexExtendedIT
Running org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.091 sec - in org.apache.phoenix.rpc.PhoenixClientRpcIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 133.723 sec - in org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 140.861 sec - in org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.142 sec - in org.apache.phoenix.monitoring.PhoenixMetricsIT
Running org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.277 sec - in org.apache.phoenix.rpc.PhoenixServerRpcIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 334.465 sec - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 172.889 sec - in org.apache.phoenix.iterate.ScannerLeaseRenewalIT

Results :

Tests run: 211, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) @ phoenix-core ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  4.084 s]
[INFO] Phoenix Core ....................................... FAILURE [27:03 min]
[INFO] Phoenix - Flume .................................... SKIPPED
[INFO] Phoenix - Pig ...................................... SKIPPED
[INFO] Phoenix Query Server Client ........................ SKIPPED
[INFO] Phoenix Query Server ............................... SKIPPED
[INFO] Phoenix - Pherf .................................... SKIPPED
[INFO] Phoenix - Spark .................................... SKIPPED
[INFO] Phoenix - Hive ..................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 27:09 min
[INFO] Finished at: 2016-10-30T01:05:51+00:00
[INFO] Final Memory: 75M/1044M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:2.19.1:verify (ParallelStatsEnabledTest) on project phoenix-core: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Phoenix-master/ws/phoenix-core/target/failsafe-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-core
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 803.25 MB of artifacts by 41.6% relative to #1455
Error updating JIRA issues. Saving issues for next build.
java.lang.NullPointerException
Recording test results
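
For anyone triaging the failures above (UpgradeIT and AlterTableIT in the ParallelStatsEnabledTest run), the listed tests can usually be re-run in isolation before resuming the reactor build. A minimal sketch, assuming a local checkout of Phoenix master and the maven-failsafe-plugin 2.19.1 setup this job uses; the -Dit.test property and the -pl/-rf module selectors are standard Maven/failsafe options rather than anything specific to this job, and the exact goals are whatever the job passed, left as <goals> below:

  # re-run only the failing integration tests in the phoenix-core module (sketch; profile flags may differ)
  mvn verify -pl phoenix-core -Dit.test='UpgradeIT,AlterTableIT#testAddNewColumnFamilyProperties'

  # then resume the full reactor from the failing module, as suggested by the Maven output above
  mvn <goals> -rf :phoenix-core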

Jenkins build is back to normal : Phoenix | Master #1463

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1463/>


Build failed in Jenkins: Phoenix | Master #1462

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Phoenix-master/1462/changes>

Changes:

[jamestaylor] PHOENIX-3424 Backward compatibility failure: 4.8 -> 4.9 upgrade

------------------------------------------
[...truncated 776279 lines...]
2016-10-30 15:56:28,696 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x438f7b86 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:28,725 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x438f7b860x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:28,729 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x438f7b86-0x158164c460900ab connected
2016-10-30 15:56:28,735 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7851b066, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:28,747 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900ab
2016-10-30 15:56:28,767 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:28,768 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t5] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x19"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x18"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x17"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:28,777 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xf1f9b37 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:28,817 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0xf1f9b370x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:28,818 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c3fb044, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:28,819 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0xf1f9b37-0x158164c460900ac connected
2016-10-30 15:56:28,832 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900ac
2016-10-30 15:56:28,858 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:28,860 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t6] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x1A"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x1C"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x1B"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:28,875 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6f691fa2 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:28,914 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x6f691fa20x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:28,915 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x6f691fa2-0x158164c460900ad connected
2016-10-30 15:56:28,915 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57b60f76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:28,920 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900ad
2016-10-30 15:56:28,939 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:28,940 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t7] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x1E"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x1D"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x1F"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:28,970 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7b7699e7 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,006 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x7b7699e70x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,006 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a6ef09b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,010 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x7b7699e7-0x158164c460900ae connected
2016-10-30 15:56:29,012 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900ae
2016-10-30 15:56:29,030 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,031 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t8] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842988548000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00 "}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,041 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: a4a5bb9a-4993-b831-76a8-1ab06a76274b, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":3}
2016-10-30 15:56:29,048 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x43077b35 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,081 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x43077b350x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,082 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@596b599a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,082 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x43077b35-0x158164c460900af connected
2016-10-30 15:56:29,086 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900af
2016-10-30 15:56:29,106 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,107 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t9] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00!"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\""}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00#"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,118 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x66294606 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,164 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x662946060x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,165 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11c040b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,165 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x66294606-0x158164c460900b0 connected
2016-10-30 15:56:29,168 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b0
2016-10-30 15:56:29,189 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,190 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t10] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00$"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00%"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00&"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,199 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1973bbe6 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,217 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x1973bbe60x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,217 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57e15201, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,218 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x1973bbe6-0x158164c460900b1 connected
2016-10-30 15:56:29,238 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b1
2016-10-30 15:56:29,250 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,253 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t1] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00'"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00("}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00)"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,266 INFO  [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x12ff28c4 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,289 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x12ff28c40x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,290 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68372e1a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,291 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x12ff28c4-0x158164c460900b2 connected
2016-10-30 15:56:29,295 INFO  [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b2
2016-10-30 15:56:29,315 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,315 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t3] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00*"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00+"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00,"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,322 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x158af624 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,340 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x158af6240x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,342 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x158af624-0x158164c460900b3 connected
2016-10-30 15:56:29,346 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@715d528f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,366 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b3
2016-10-30 15:56:29,390 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,393 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t4] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00-"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00."}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00/"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,405 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x1a0fb78e connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,456 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x1a0fb78e0x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,456 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@364e2ef3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,457 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x1a0fb78e-0x158164c460900b4 connected
2016-10-30 15:56:29,461 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b4
2016-10-30 15:56:29,489 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,489 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t2] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x000"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x002"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x001"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,496 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x600a49d7 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,523 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x600a49d70x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,525 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x600a49d7-0x158164c460900b5 connected
2016-10-30 15:56:29,534 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18922ce0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,545 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b5
2016-10-30 15:56:29,581 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,581 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t5] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x004"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x003"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x005"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,589 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xb1e8696 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,618 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0xb1e86960x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,619 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0xb1e8696-0x158164c460900b6 connected
2016-10-30 15:56:29,620 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2fa265bd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,625 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b6
2016-10-30 15:56:29,666 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,667 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t6] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x008"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x007"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x006"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,681 INFO  [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x24aba11c connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,695 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7e752f44] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-10-30 15:56:29,707 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@125a7256, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,708 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x24aba11c0x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,710 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x24aba11c-0x158164c460900b7 connected
2016-10-30 15:56:29,711 INFO  [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b7
2016-10-30 15:56:29,742 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,742 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t7] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00:"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x009"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00;"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,753 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7a6ffae4 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,774 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x7a6ffae40x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,775 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2bf66009, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,776 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x7a6ffae4-0x158164c460900b8 connected
2016-10-30 15:56:29,780 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b8
2016-10-30 15:56:29,782 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo(376): Principal and keytab not provided, not attempting Kerberos login
2016-10-30 15:56:29,797 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.phoenix.coprocessor.MetaDataEndpointImpl(549): Caching table \x00TBL_T001055\x00IDX_T001056 at seqNum 0 with newer timestamp 1477842982818 versus -1
2016-10-30 15:56:29,798 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.phoenix.coprocessor.MetaDataEndpointImpl(549): Caching table \x00TBL_T001055\x00TBL_T0010551 at seqNum 0 with newer timestamp 1477842982818 versus -1
2016-10-30 15:56:29,802 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,802 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t8] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00<"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00>"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00="}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,817 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 441f796f-4d89-6ef1-42b2-ea83da591a69, Time: 0ms, Scan: {"timeRange":[0,1477842989806],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":2,"cacheBlocks":true,"families":{"0":["FK1B","_0"]},"maxResultSize":-1,"maxVersions":1,"caching":2147483647}
2016-10-30 15:56:29,819 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(380): Starting ungrouped coprocessor scan {"timeRange":[0,1477842989808000001],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":true,"totalColumns":3,"cacheBlocks":true,"families":{"0":["","FK1B","_0"]},"maxResultSize":2097152,"maxVersions":2147483647,"filter":"CellSkipFilter","caching":2147483647} {ENCODED => 54e182838458f72caf4245ceec299d52, NAME => 'TBL_T001055.TBL_T0010551,,1477842976117.54e182838458f72caf4245ceec299d52.', STARTKEY => '', ENDKEY => ''}
2016-10-30 15:56:29,819 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(639): Finished scanning 0 rows for ungrouped coprocessor scan {"timeRange":[0,1477842989808000001],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":true,"totalColumns":3,"cacheBlocks":true,"families":{"0":["","FK1B","_0"]},"maxResultSize":2097152,"maxVersions":2147483647,"filter":"CellSkipFilter","caching":2147483647}
2016-10-30 15:56:29,823 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x56d0820 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,826 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.phoenix.coprocessor.MetaDataEndpointImpl(549): Caching table \x00TBL_T001055\x00IDX_T001056 at seqNum 0 with newer timestamp 1477842989821 versus -1
2016-10-30 15:56:29,830 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.phoenix.coprocessor.MetaDataEndpointImpl(549): Caching table \x00TBL_T001055\x00TBL_T0010551 at seqNum 0 with newer timestamp 1477842989821 versus -1
2016-10-30 15:56:29,831 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.phoenix.coprocessor.MetaDataEndpointImpl(549): Caching table \x00TBL_T001055\x00TBL_T0010552 at seqNum 0 with newer timestamp 1477842980424 versus -1
2016-10-30 15:56:29,831 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x56d08200x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,832 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d54dd17, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,834 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x56d0820-0x158164c460900b9 connected
2016-10-30 15:56:29,834 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2f886548 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,837 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900b9
2016-10-30 15:56:29,848 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x2f8865480x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,849 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x2f886548-0x158164c460900ba connected
2016-10-30 15:56:29,865 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,865 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@795aaa7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,865 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t9] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00?"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989036000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00@"}] to table: TBL_T001048.TBL_T001048
2016-10-30 15:56:29,869 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900ba
2016-10-30 15:56:29,881 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,882 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t10] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989828000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x03"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989828000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x03\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x02"}, {"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989828000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x04\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:29,882 INFO  [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t10] org.apache.hadoop.hbase.regionserver.HRegion(8030): writing data to region TBL_T001055.TBL_T0010551,,1477842976117.54e182838458f72caf4245ceec299d52. with WAL disabled. Data may be lost in the event of a crash.
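(The HRegion warning above is expected in this test: the transactional index writer applies its index mutations with the WAL skipped, trading durability for write speed, which is exactly what the message flags. A minimal, illustrative sketch -- not the Phoenix index writer itself -- of how a plain HBase client marks a Put the same way; the table and column family names are taken from the log, the row key and value are placeholders.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SkipWalPutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("TBL_T001055.TBL_T0010551"))) {
            Put put = new Put(Bytes.toBytes("placeholder-row-key"));
            put.addColumn(Bytes.toBytes("L#0"), Bytes.toBytes("_0"), Bytes.toBytes("x"));
            // Skip the write-ahead log: the edit never hits the WAL, so it is lost
            // if the region server dies before the memstore flushes -- the exact
            // risk the HRegion warning describes.
            put.setDurability(Durability.SKIP_WAL);
            table.put(put);
        }
    }
}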
2016-10-30 15:56:29,900 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: fe5b74be-2775-d57f-6dec-b8f56e97db9b, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FilterList AND (2/2): [FirstKeyOnlyFilter, (\"ID1\" = TO_INTEGER(\"FK1B\") AND TO_INTEGER(\"FK1B\") = \"ID1\")]","caching":2147483647}
2016-10-30 15:56:29,900 DEBUG [phoenix-7-thread-0] org.apache.phoenix.util.ReadOnlyProps(317): Creating new ReadOnlyProps due to phoenix.table.istransactional.default with null!=true
2016-10-30 15:56:29,908 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x7bf2dda4 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:29,924 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x7bf2dda40x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:29,925 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x7bf2dda4-0x158164c460900bb connected
2016-10-30 15:56:29,934 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@561fc7ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:29,945 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900bb
2016-10-30 15:56:29,956 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:29,957 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t1] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"ts":9223372036854775807,"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989891000000,"tag":[],"qualifier":"","vlen":0}]},"row":"\\x00\\x00\\xC1\\x03\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x02"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:29,963 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: e2840cec-83a2-d90f-6924-bb2ebd1053a1, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:29,971 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 7ba89eba-47d6-2485-8c40-e4b0f3be06b9, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
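(Each ParallelIterators debug line above is a JSON dump of the raw HBase Scan that Phoenix hands to a region. As a point of reference, a minimal sketch -- not Phoenix source -- that rebuilds a Scan with the same knobs printed in that JSON.)

import java.io.IOException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PhoenixStyleScan {

    // Mirrors the logged scan: "timeRange":[0,9223372036854775807], "maxVersions":1,
    // "families":{"L#0":["_0"]}, "filter":"FirstKeyOnlyFilter",
    // "caching":2147483647, "cacheBlocks":true.
    public static Scan buildScan() throws IOException {
        Scan scan = new Scan();
        scan.setTimeRange(0L, Long.MAX_VALUE);
        scan.setMaxVersions(1);
        scan.setCaching(Integer.MAX_VALUE);
        scan.setCacheBlocks(true);
        scan.addColumn(Bytes.toBytes("L#0"), Bytes.toBytes("_0"));
        scan.setFilter(new FirstKeyOnlyFilter());
        return scan;
    }
}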
2016-10-30 15:56:30,009 DEBUG [phoenix-7-thread-0] org.apache.phoenix.compile.FromCompiler$BaseColumnResolver(572): Re-resolved stale table TBL_T001055.TBL_T0010551 with seqNum 0 at timestamp 1477842989821 with 3 columns: [ID1, 0.FK1A, 0.FK1B]
2016-10-30 15:56:30,010 DEBUG [phoenix-7-thread-0] org.apache.phoenix.execute.BaseQueryPlan(338): Scan ready for iteration: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01","stopRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x02","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["FK1A"]},"maxResultSize":-1,"maxVersions":1,"caching":2147483647}
2016-10-30 15:56:30,010 DEBUG [phoenix-7-thread-0] org.apache.phoenix.execute.BaseQueryPlan(355): Iterator ready: org.apache.phoenix.execute.BaseQueryPlan$1@67704fcc
2016-10-30 15:56:30,010 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.BaseResultIterators(680): Getting iterators for ResultIterators [name=PARALLEL,id=63f115db-b9b3-54f8-3145-0f6288c25d2e,scans=[[{"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01","stopRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x02","loadColumnFamiliesOnDemand":null,"totalColumns":2,"cacheBlocks":true,"families":{"0":["FK1A","_0"]},"maxResultSize":-1,"maxVersions":1,"caching":2147483647}]]]
2016-10-30 15:56:30,018 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver(371): Grouped aggregation over unordered rows with scan {"timeRange":[0,1477842989993000001],"batch":-1,"startRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01","stopRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x02","loadColumnFamiliesOnDemand":true,"totalColumns":3,"cacheBlocks":true,"families":{"0":["","FK1A","_0"]},"maxResultSize":2097152,"maxVersions":2147483647,"filter":"CellSkipFilter","caching":2147483647}, group by [TBL_T001055.TBL_T0010551.FK1A], aggregators org.apache.phoenix.expression.aggregator.ServerAggregators [0]:
2016-10-30 15:56:30,018 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.cache.aggcache.SpillableGroupByCache(157): Instantiating LRU groupby cache of element size: 1000
2016-10-30 15:56:30,018 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver(397): Spillable groupby enabled: true
2016-10-30 15:56:30,018 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.cache.aggcache.SpillableGroupByCache(244): Adding new aggregate bucket for row key \x80\x00\x00\x03
2016-10-30 15:56:30,019 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.cache.aggcache.SpillableGroupByCache$2(363): Adding new distinct group: \x80\x00\x00\x03 with aggregators [Lorg.apache.phoenix.expression.aggregator.Aggregator;@3df3516a value = 
2016-10-30 15:56:30,021 DEBUG [phoenix-7-thread-0] org.apache.phoenix.cache.ServerCacheClient(318): Removing Cache [B@4012629 from servers.
2016-10-30 15:56:30,022 DEBUG [phoenix-7-thread-0] org.apache.phoenix.cache.ServerCacheClient(179): Adding cache entry to be sent for region=TBL_T001055.TBL_T0010551,,1477842976117.54e182838458f72caf4245ceec299d52., hostname=priapus.apache.org,36659,1477842847800, seqNum=5
2016-10-30 15:56:30,024 DEBUG [phoenix-7-thread-0] org.apache.phoenix.cache.ServerCacheClient(291): Cache [B@430d710a successfully added to servers.
2016-10-30 15:56:30,031 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: cbf26e7b-1dca-c81b-f1eb-18e9be7048c1, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:30,031 DEBUG [phoenix-7-thread-0] org.apache.phoenix.util.ReadOnlyProps(317): Creating new ReadOnlyProps due to phoenix.table.istransactional.default with null!=true
2016-10-30 15:56:30,063 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0xc1acae3 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:30,074 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75f4c60e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:30,078 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900bc
2016-10-30 15:56:30,081 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0xc1acae30x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:30,115 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:30,115 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0xc1acae3-0x158164c460900bc connected
2016-10-30 15:56:30,115 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t3] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"ts":9223372036854775807,"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989993000000,"tag":[],"qualifier":"","vlen":0}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x03"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:30,119 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: af1fa162-a030-734e-f33e-101fff3c8a55, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:30,125 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 7ade0704-700f-da19-da61-2693041fe31c, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:30,152 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 544a7714-72f3-b965-e6ef-79e1db7e6d5f, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:30,152 DEBUG [phoenix-7-thread-0] org.apache.phoenix.util.ReadOnlyProps(317): Creating new ReadOnlyProps due to phoenix.table.istransactional.default with null!=true
2016-10-30 15:56:30,196 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x30b47922 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:30,214 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x30b479220x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:30,215 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69d4bbe2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:30,216 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x30b47922-0x158164c460900bd connected
2016-10-30 15:56:30,219 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900bd
2016-10-30 15:56:30,247 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:30,247 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t4] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"totalColumns":1,"families":{"L#0":[{"timestamp":1477842990149000000,"tag":[],"qualifier":"_0","vlen":2}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x04"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:30,262 DEBUG [phoenix-7-thread-0] org.apache.phoenix.compile.FromCompiler$BaseColumnResolver(572): Re-resolved stale table TBL_T001055.TBL_T0010551 with seqNum 0 at timestamp 1477842989821 with 3 columns: [ID1, 0.FK1A, 0.FK1B]
2016-10-30 15:56:30,263 DEBUG [phoenix-7-thread-0] org.apache.phoenix.execute.BaseQueryPlan(338): Scan ready for iteration: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01","stopRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["FK1A"]},"maxResultSize":-1,"maxVersions":1,"filter":"SkipScanFilter [[\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01, \\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x04]]","caching":2147483647}
2016-10-30 15:56:30,263 DEBUG [phoenix-7-thread-0] org.apache.phoenix.execute.BaseQueryPlan(355): Iterator ready: org.apache.phoenix.execute.BaseQueryPlan$1@5556163b
2016-10-30 15:56:30,264 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.BaseResultIterators(680): Getting iterators for ResultIterators [name=PARALLEL,id=d1410137-650f-f9fa-13b4-5238aa667e35,scans=[[{"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01","stopRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00","loadColumnFamiliesOnDemand":null,"totalColumns":2,"cacheBlocks":true,"families":{"0":["FK1A","_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"SkipScanFilter [[\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01, \\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x04]]","caching":2147483647}]]]
2016-10-30 15:56:30,267 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver(371): Grouped aggregation over unordered rows with scan {"timeRange":[0,1477842990255000001],"batch":-1,"startRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01","stopRow":"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00","loadColumnFamiliesOnDemand":true,"totalColumns":3,"cacheBlocks":true,"families":{"0":["","FK1A","_0"]},"maxResultSize":2097152,"maxVersions":2147483647,"filter":"CellSkipFilter","caching":2147483647}, group by [TBL_T001055.TBL_T0010551.FK1A], aggregators org.apache.phoenix.expression.aggregator.ServerAggregators [0]:
2016-10-30 15:56:30,267 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.cache.aggcache.SpillableGroupByCache(157): Instantiating LRU groupby cache of element size: 1000
2016-10-30 15:56:30,267 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver(397): Spillable groupby enabled: true
2016-10-30 15:56:30,268 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.cache.aggcache.SpillableGroupByCache(244): Adding new aggregate bucket for row key \x80\x00\x00\x03
2016-10-30 15:56:30,268 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.cache.aggcache.SpillableGroupByCache(244): Adding new aggregate bucket for row key \x80\x00\x00\x01
2016-10-30 15:56:30,268 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.cache.aggcache.SpillableGroupByCache$2(363): Adding new distinct group: \x80\x00\x00\x03 with aggregators [Lorg.apache.phoenix.expression.aggregator.Aggregator;@49959e51 value = 
2016-10-30 15:56:30,268 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.phoenix.cache.aggcache.SpillableGroupByCache$2(363): Adding new distinct group: \x80\x00\x00\x01 with aggregators [Lorg.apache.phoenix.expression.aggregator.Aggregator;@7dccfd94 value = 
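(The GroupedAggregateRegionObserver entries above are the server-side half of a GROUP BY over FK1A, with a SkipScanFilter restricting the leading ID1 key to the range 1..4 and no server aggregators. The exact SQL issued by the test is not in the log; a hypothetical Phoenix JDBC query of the same shape might look like the sketch below, where only the table, the FK1A grouping column, and the ID1 range come from the scan and filter JSON.)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class GroupByShapeExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:61610");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT FK1A FROM TBL_T001055.TBL_T0010551 "
                   + "WHERE ID1 BETWEEN ? AND ? GROUP BY FK1A")) {
            stmt.setLong(1, 1L);   // ID1 looks like a BIGINT from the 8-byte row keys in the log
            stmt.setLong(2, 4L);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println("FK1A = " + rs.getInt(1));
                }
            }
        }
    }
}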
2016-10-30 15:56:30,270 DEBUG [phoenix-7-thread-0] org.apache.phoenix.cache.ServerCacheClient(318): Removing Cache [B@3e49a68f from servers.
2016-10-30 15:56:30,271 DEBUG [phoenix-7-thread-0] org.apache.phoenix.cache.ServerCacheClient(179): Adding cache entry to be sent for region=TBL_T001055.TBL_T0010551,,1477842976117.54e182838458f72caf4245ceec299d52., hostname=priapus.apache.org,36659,1477842847800, seqNum=5
2016-10-30 15:56:30,273 DEBUG [phoenix-7-thread-0] org.apache.phoenix.cache.ServerCacheClient(291): Cache [B@7b754b3 successfully added to servers.
2016-10-30 15:56:30,275 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 5ae1c772-dc39-2ebf-65e8-f8abbd0446db, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:30,275 DEBUG [phoenix-7-thread-0] org.apache.phoenix.util.ReadOnlyProps(317): Creating new ReadOnlyProps due to phoenix.table.istransactional.default with null!=true
2016-10-30 15:56:30,283 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x2d16319c connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:30,315 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x2d16319c0x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:30,315 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d9f31f0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:30,316 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x2d16319c-0x158164c460900be connected
2016-10-30 15:56:30,319 INFO  [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900be
2016-10-30 15:56:30,353 DEBUG [B.defaultRpcServer.handler=0,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:30,355 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t2] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"ts":9223372036854775807,"totalColumns":1,"families":{"L#0":[{"timestamp":1477842990255000000,"tag":[],"qualifier":"","vlen":0}]},"row":"\\x00\\x00\\xC1\\x04\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:30,470 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 127c9e6c-5ae1-574b-a9b2-487a30554ad7, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:30,502 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 9fbfa539-ac69-69f8-f792-a5db1c35d977, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:30,524 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4925522f connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:30,542 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x4925522f0x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:30,545 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x4925522f-0x158164c460900bf connected
2016-10-30 15:56:30,546 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d282e60, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:30,551 INFO  [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900bf
2016-10-30 15:56:30,556 DEBUG [B.defaultRpcServer.handler=1,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:30,557 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t5] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"ts":9223372036854775807,"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989891000000,"tag":[],"qualifier":"","vlen":0}]},"row":"\\x00\\x00\\xC1\\x03\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x02"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:30,560 INFO  [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x6c4b5df7 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:30,565 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22ff52d8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:30,566 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x6c4b5df70x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:30,567 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x6c4b5df7-0x158164c460900c0 connected
2016-10-30 15:56:30,569 INFO  [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900c0
2016-10-30 15:56:30,572 DEBUG [B.defaultRpcServer.handler=2,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:30,577 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t6] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"ts":9223372036854775807,"totalColumns":1,"families":{"L#0":[{"timestamp":1477842989993000000,"tag":[],"qualifier":"","vlen":0}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x03"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:30,581 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x4f26dd57 connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:30,590 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x4f26dd570x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:30,590 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6328647d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:30,591 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x4f26dd57-0x158164c460900c1 connected
2016-10-30 15:56:30,620 INFO  [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900c1
2016-10-30 15:56:30,632 DEBUG [B.defaultRpcServer.handler=3,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:30,632 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t7] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"ts":9223372036854775807,"totalColumns":1,"families":{"L#0":[{"timestamp":1477842990149000000,"tag":[],"qualifier":"","vlen":0}]},"row":"\\x00\\x00\\xC1\\x02\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x04"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:30,652 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x778e89ad connecting to ZooKeeper ensemble=localhost:61610
2016-10-30 15:56:30,673 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(602): hconnection-0x778e89ad0x0, quorum=localhost:61610, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2016-10-30 15:56:30,673 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.AbstractRpcClient(115): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d99d3ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2016-10-30 15:56:30,674 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659-EventThread] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher(686): hconnection-0x778e89ad-0x158164c460900c2 connected
2016-10-30 15:56:30,677 INFO  [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation(1709): Closing zookeeper sessionid=0x158164c460900c2
2016-10-30 15:56:30,681 DEBUG [B.defaultRpcServer.handler=4,queue=0,port=36659] org.apache.hadoop.hbase.ipc.RpcClientImpl(1157): Stopping rpc client
2016-10-30 15:56:30,681 DEBUG [priapus.apache.org,36659,1477842847800-tx-index-writer--pool2229-t8] org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1(147): Writing index update:[{"ts":9223372036854775807,"totalColumns":1,"families":{"L#0":[{"timestamp":1477842990255000000,"tag":[],"qualifier":"","vlen":0}]},"row":"\\x00\\x00\\xC1\\x04\\x00\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x01"}] to table: TBL_T001055.TBL_T0010551
2016-10-30 15:56:30,687 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 78fddece-3bcd-c0f4-456d-22d038a84052, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
2016-10-30 15:56:30,694 DEBUG [phoenix-7-thread-0] org.apache.phoenix.iterate.ParallelIterators$1(112): Id: 292326cf-5d2f-8662-c664-e76a521f9843, Time: 0ms, Scan: {"timeRange":[0,9223372036854775807],"batch":-1,"startRow":"","stopRow":"","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"L#0":["_0"]},"maxResultSize":-1,"maxVersions":1,"filter":"FirstKeyOnlyFilter","caching":2147483647}
Running [TxCheckpointIT_localIndex=true,mutable=true]
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.723 sec - in [TxCheckpointIT_localIndex=true,mutable=true]
2016-10-30 15:56:30,777 INFO  [Thread-13015] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-10-30 15:56:30,778 INFO  [Thread-162] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
2016-10-30 15:56:30,779 INFO  [Thread-8195] org.apache.phoenix.query.BaseTest$1(488): SHUTDOWN: halting JVM now
Build timed out (after 200 minutes). Marking the build as failed.
Build was aborted
Archiving artifacts

Results :

Tests run: 1640, Failures: 0, Errors: 0, Skipped: 1

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

Compressed 700.08 MB of artifacts by 55.2% relative to #1455
Updating PHOENIX-3424
Recording test results