Posted to dev@phoenix.apache.org by "James Taylor (JIRA)" <ji...@apache.org> on 2018/05/25 00:14:00 UTC

[jira] [Commented] (PHOENIX-3623) Integrate Omid with Phoenix

    [ https://issues.apache.org/jira/browse/PHOENIX-3623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16490036#comment-16490036 ] 

James Taylor commented on PHOENIX-3623:
---------------------------------------

Had a clean run with TEPHRA specified as the default transaction provider with the Omid integration patch. Next ran with OMID as the default and we had a fair number of failures, many of them the same. Had to kill it before completion because it was taking too long. The coprocessor calls need to catch exceptions and rethrow them as non-retriable to prevent the endless retries that HBase does.
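
Roughly the kind of wrapping I mean (a minimal sketch only, not the actual patch; the wrapper and callback names here are illustrative):
{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.DoNotRetryIOException;

public class NonRetriableWrapper {

    // Hypothetical stand-in for the body of a coprocessor endpoint call.
    interface CoprocessorWork {
        void run() throws Exception;
    }

    // Run the work and surface any failure as a DoNotRetryIOException so the
    // HBase client fails fast instead of burning through its retry budget
    // (see the ~539 s CommitException timeouts in the log below).
    static void callNonRetriable(CoprocessorWork work) throws IOException {
        try {
            work.run();
        } catch (DoNotRetryIOException e) {
            throw e; // already non-retriable, pass through unchanged
        } catch (Exception e) {
            throw new DoNotRetryIOException(e.getMessage(), e);
        }
    }
}
{code}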

FYI, [~ohads]
{code:java}
[INFO] Building jar: /Users/jtaylor/dev/apache/phoenix-omid/phoenix-core/target/phoenix-core-4.14.0-HBase-1.3.jar

[INFO]

[INFO] --- maven-failsafe-plugin:2.20:integration-test (ParallelStatsEnabledTest) @ phoenix-core ---

[INFO]

[INFO] -------------------------------------------------------

[INFO]  T E S T S

[INFO] -------------------------------------------------------

[INFO] Running org.apache.phoenix.coprocessor.StatisticsCollectionRunTrackerIT

[INFO] Running org.apache.phoenix.end2end.MultiCfQueryExecIT

[INFO] Running org.apache.phoenix.end2end.KeyOnlyIT

[INFO] Running org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.643 s - in org.apache.phoenix.end2end.KeyOnlyIT

[INFO] Running org.apache.phoenix.end2end.ParallelIteratorsIT

[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.19 s - in org.apache.phoenix.coprocessor.StatisticsCollectionRunTrackerIT

[INFO] Running org.apache.phoenix.end2end.QueryWithTableSampleIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.553 s - in org.apache.phoenix.end2end.ParallelIteratorsIT

[INFO] Running org.apache.phoenix.end2end.ReadIsolationLevelIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.595 s - in org.apache.phoenix.end2end.ReadIsolationLevelIT

[INFO] Running org.apache.phoenix.end2end.SaltedViewIT

[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.29 s - in org.apache.phoenix.end2end.MultiCfQueryExecIT

[INFO] Running org.apache.phoenix.end2end.TenantSpecificTablesDDLIT

[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.854 s - in org.apache.phoenix.end2end.QueryWithTableSampleIT

[INFO] Running org.apache.phoenix.end2end.TenantSpecificTablesDMLIT

[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.355 s - in org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT

[INFO] Running org.apache.phoenix.end2end.TransactionalViewIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.508 s - in org.apache.phoenix.end2end.TransactionalViewIT

[INFO] Running org.apache.phoenix.end2end.ViewIT

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 158.337 s - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.366 s - in org.apache.phoenix.end2end.TenantSpecificTablesDMLIT

[INFO] Running org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.604 s - in org.apache.phoenix.end2end.index.ImmutableIndexWithStatsIT

[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 559.532 s <<< FAILURE! - in org.apache.phoenix.end2end.SaltedViewIT

[ERROR] testSaltedUpdatableViewWithLocalIndex[transactional = true](org.apache.phoenix.end2end.SaltedViewIT)  Time elapsed: 539.968 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64724,1527187888249,

at org.apache.phoenix.end2end.SaltedViewIT.testSaltedUpdatableViewWithLocalIndex(SaltedViewIT.java:43)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64724,1527187888249,

at org.apache.phoenix.end2end.SaltedViewIT.testSaltedUpdatableViewWithLocalIndex(SaltedViewIT.java:43)



[ERROR] Tests run: 56, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 827.879 s <<< FAILURE! - in org.apache.phoenix.end2end.ViewIT

[ERROR] testViewUsesTableLocalIndex[transactional = true](org.apache.phoenix.end2end.ViewIT)  Time elapsed: 11.205 s  <<< ERROR!

java.sql.SQLException:

java.util.concurrent.ExecutionException: java.lang.Exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



at org.apache.phoenix.end2end.ViewIT.testViewUsesTableIndex(ViewIT.java:535)

at org.apache.phoenix.end2end.ViewIT.testViewUsesTableLocalIndex(ViewIT.java:510)

Caused by: java.util.concurrent.ExecutionException:

java.lang.Exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



at org.apache.phoenix.end2end.ViewIT.testViewUsesTableIndex(ViewIT.java:535)

at org.apache.phoenix.end2end.ViewIT.testViewUsesTableLocalIndex(ViewIT.java:510)

Caused by: java.lang.Exception:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more





[ERROR] testViewUsesTableGlobalIndex[transactional = true](org.apache.phoenix.end2end.ViewIT)  Time elapsed: 9.598 s  <<< ERROR!

org.apache.phoenix.exception.PhoenixIOException:

org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: S_T000210.I_T000212,,1527188220441.6a8641f8c67c7cd4f0cd8198d06cdef4.: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)

at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:288)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: java.lang.IllegalArgumentException: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.EncodedColumnsUtil.isReservedColumnQualifier(EncodedColumnsUtil.java:196)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.getReservedQualifier(PTable.java:494)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.access$400(PTable.java:249)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$3.decode(PTable.java:340)

at org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:229)

at org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)

at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:437)

at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:525)

at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5921)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6084)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5858)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5844)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:116)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:96)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)

... 10 more



at org.apache.phoenix.end2end.ViewIT.testViewUsesTableIndex(ViewIT.java:547)

at org.apache.phoenix.end2end.ViewIT.testViewUsesTableGlobalIndex(ViewIT.java:505)

Caused by: java.util.concurrent.ExecutionException:

org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: S_T000210.I_T000212,,1527188220441.6a8641f8c67c7cd4f0cd8198d06cdef4.: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)

at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:288)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: java.lang.IllegalArgumentException: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.EncodedColumnsUtil.isReservedColumnQualifier(EncodedColumnsUtil.java:196)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.getReservedQualifier(PTable.java:494)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.access$400(PTable.java:249)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$3.decode(PTable.java:340)

at org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:229)

at org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)

at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:437)

at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:525)

at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5921)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6084)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5858)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5844)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:116)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:96)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)

... 10 more



at org.apache.phoenix.end2end.ViewIT.testViewUsesTableIndex(ViewIT.java:547)

at org.apache.phoenix.end2end.ViewIT.testViewUsesTableGlobalIndex(ViewIT.java:505)

Caused by: org.apache.phoenix.exception.PhoenixIOException:

org.apache.hadoop.hbase.DoNotRetryIOException: S_T000210.I_T000212,,1527188220441.6a8641f8c67c7cd4f0cd8198d06cdef4.: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)

at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:288)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: java.lang.IllegalArgumentException: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.EncodedColumnsUtil.isReservedColumnQualifier(EncodedColumnsUtil.java:196)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.getReservedQualifier(PTable.java:494)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.access$400(PTable.java:249)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$3.decode(PTable.java:340)

at org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:229)

at org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)

at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:437)

at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:525)

at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5921)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6084)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5858)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5844)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:116)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:96)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)

... 10 more



Caused by: org.apache.phoenix.exception.PhoenixIOException:

org.apache.hadoop.hbase.DoNotRetryIOException: S_T000210.I_T000212,,1527188220441.6a8641f8c67c7cd4f0cd8198d06cdef4.: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)

at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:288)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: java.lang.IllegalArgumentException: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.EncodedColumnsUtil.isReservedColumnQualifier(EncodedColumnsUtil.java:196)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.getReservedQualifier(PTable.java:494)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.access$400(PTable.java:249)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$3.decode(PTable.java:340)

at org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:229)

at org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)

at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:437)

at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:525)

at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5921)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6084)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5858)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5844)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:116)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:96)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)

... 10 more



Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:

org.apache.hadoop.hbase.DoNotRetryIOException: S_T000210.I_T000212,,1527188220441.6a8641f8c67c7cd4f0cd8198d06cdef4.: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)

at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:288)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: java.lang.IllegalArgumentException: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.EncodedColumnsUtil.isReservedColumnQualifier(EncodedColumnsUtil.java:196)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.getReservedQualifier(PTable.java:494)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.access$400(PTable.java:249)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$3.decode(PTable.java:340)

at org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:229)

at org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)

at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:437)

at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:525)

at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5921)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6084)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5858)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5844)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:116)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:96)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)

... 10 more



Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:

org.apache.hadoop.hbase.DoNotRetryIOException: S_T000210.I_T000212,,1527188220441.6a8641f8c67c7cd4f0cd8198d06cdef4.: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)

at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)

at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:288)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: java.lang.IllegalArgumentException: Negative column qualifier-2146712960 not allowed

at org.apache.phoenix.util.EncodedColumnsUtil.isReservedColumnQualifier(EncodedColumnsUtil.java:196)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.getReservedQualifier(PTable.java:494)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme.access$400(PTable.java:249)

at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$3.decode(PTable.java:340)

at org.apache.phoenix.filter.MultiEncodedCQKeyValueComparisonFilter.filterKeyValue(MultiEncodedCQKeyValueComparisonFilter.java:229)

at org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)

at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:437)

at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:525)

at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5921)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6084)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5858)

at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5844)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:116)

at org.apache.hadoop.hbase.regionserver.OmidRegionScanner.nextRaw(OmidRegionScanner.java:96)

at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)

... 10 more





[ERROR] testNonSaltedUpdatableViewWithLocalIndex[transactional = true](org.apache.phoenix.end2end.ViewIT)  Time elapsed: 539.82 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49486,1527187992678,

at org.apache.phoenix.end2end.ViewIT.testNonSaltedUpdatableViewWithLocalIndex(ViewIT.java:140)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49486,1527187992678,

at org.apache.phoenix.end2end.ViewIT.testNonSaltedUpdatableViewWithLocalIndex(ViewIT.java:140)



[INFO]

[INFO] Results:

[INFO]

[ERROR] Errors:

[ERROR]   SaltedViewIT.testSaltedUpdatableViewWithLocalIndex:43->BaseViewIT.testUpdatableViewWithIndex:81->BaseViewIT.testUpdatableViewIndex:175 » Commit

[ERROR]   ViewIT.testNonSaltedUpdatableViewWithLocalIndex:140->BaseViewIT.testUpdatableViewWithIndex:81->BaseViewIT.testUpdatableViewIndex:175 » Commit

[ERROR]   ViewIT.testViewUsesTableGlobalIndex:505->testViewUsesTableIndex:547 » PhoenixIO

[ERROR]   ViewIT.testViewUsesTableLocalIndex:510->testViewUsesTableIndex:535 » SQL java....

[INFO]

[ERROR] Tests run: 158, Failures: 0, Errors: 4, Skipped: 0

[INFO]

[INFO]

[INFO] --- maven-failsafe-plugin:2.20:integration-test (ParallelStatsDisabledTest) @ phoenix-core ---

[INFO]

[INFO] -------------------------------------------------------

[INFO]  T E S T S

[INFO] -------------------------------------------------------

[INFO] Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.AggregateIT

[INFO] Running org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.459 s - in org.apache.phoenix.end2end.AbsFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.AlterSessionIT

[INFO] Running org.apache.phoenix.end2end.AggregateQueryIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.559 s - in org.apache.phoenix.end2end.AlterSessionIT

[INFO] Running org.apache.phoenix.end2end.AlterTableIT

[INFO] Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.361 s - in org.apache.phoenix.end2end.AggregateIT

[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.2 s - in org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT

[INFO] Running org.apache.phoenix.end2end.AppendOnlySchemaIT

[INFO] Running org.apache.phoenix.end2end.AlterTableWithViewsIT

[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.494 s - in org.apache.phoenix.end2end.AppendOnlySchemaIT

[INFO] Running org.apache.phoenix.end2end.ArithmeticQueryIT

[INFO] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.943 s - in org.apache.phoenix.end2end.ArithmeticQueryIT

[INFO] Running org.apache.phoenix.end2end.Array1IT

[INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 266.508 s - in org.apache.phoenix.end2end.AggregateQueryIT

[INFO] Tests run: 27, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.871 s - in org.apache.phoenix.end2end.Array1IT

[INFO] Running org.apache.phoenix.end2end.Array2IT

[INFO] Running org.apache.phoenix.end2end.Array3IT

[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 337.066 s - in org.apache.phoenix.end2end.AlterTableIT

[INFO] Tests run: 27, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.556 s - in org.apache.phoenix.end2end.Array2IT

[INFO] Running org.apache.phoenix.end2end.ArrayConcatFunctionIT

[INFO] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.53 s - in org.apache.phoenix.end2end.Array3IT

[INFO] Running org.apache.phoenix.end2end.ArrayFillFunctionIT

[INFO] Running org.apache.phoenix.end2end.ArrayAppendFunctionIT

[INFO] Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.407 s - in org.apache.phoenix.end2end.ArrayConcatFunctionIT

[INFO] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 73.489 s - in org.apache.phoenix.end2end.ArrayFillFunctionIT

[INFO] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 83.574 s - in org.apache.phoenix.end2end.ArrayAppendFunctionIT

[INFO] Running org.apache.phoenix.end2end.ArrayPrependFunctionIT

[INFO] Running org.apache.phoenix.end2end.ArrayRemoveFunctionIT

[INFO] Running org.apache.phoenix.end2end.ArrayToStringFunctionIT

[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.136 s - in org.apache.phoenix.end2end.ArrayPrependFunctionIT

[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.206 s - in org.apache.phoenix.end2end.ArrayRemoveFunctionIT

[ERROR] Tests run: 52, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 437.709 s <<< FAILURE! - in org.apache.phoenix.end2end.AlterTableWithViewsIT

[ERROR] testMakeBaseTableTransactional[AlterTableWithViewsIT_multiTenant=false, columnEncoded=false](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time elapsed: 2.456 s  <<< ERROR!

java.sql.SQLException: ERROR 1093 (44A24): Cannot alter table from non transactional to transactional for  OMID.  tableName=NONTXNTBL_T0000451

at org.apache.phoenix.end2end.AlterTableWithViewsIT.testMakeBaseTableTransactional(AlterTableWithViewsIT.java:784)



[ERROR] testMakeBaseTableTransactional[AlterTableWithViewsIT_multiTenant=false, columnEncoded=true](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time elapsed: 2.478 s  <<< ERROR!

java.sql.SQLException: ERROR 1093 (44A24): Cannot alter table from non transactional to transactional for  OMID.  tableName=NONTXNTBL_T0000601

at org.apache.phoenix.end2end.AlterTableWithViewsIT.testMakeBaseTableTransactional(AlterTableWithViewsIT.java:784)



[ERROR] testMakeBaseTableTransactional[AlterTableWithViewsIT_multiTenant=true, columnEncoded=false](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time elapsed: 4.802 s  <<< ERROR!

java.sql.SQLException: ERROR 1093 (44A24): Cannot alter table from non transactional to transactional for  OMID.  tableName=NONTXNTBL_T0000750

at org.apache.phoenix.end2end.AlterTableWithViewsIT.testMakeBaseTableTransactional(AlterTableWithViewsIT.java:784)



[ERROR] testMakeBaseTableTransactional[AlterTableWithViewsIT_multiTenant=true, columnEncoded=true](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time elapsed: 4.89 s  <<< ERROR!

java.sql.SQLException: ERROR 1093 (44A24): Cannot alter table from non transactional to transactional for  OMID.  tableName=NONTXNTBL_T0000900

at org.apache.phoenix.end2end.AlterTableWithViewsIT.testMakeBaseTableTransactional(AlterTableWithViewsIT.java:784)



[INFO] Running org.apache.phoenix.end2end.AutoPartitionViewsIT

[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 s <<< FAILURE! - in org.apache.phoenix.end2end.AutoPartitionViewsIT

[ERROR] org.apache.phoenix.end2end.AutoPartitionViewsIT  Time elapsed: 0 s  <<< ERROR!

java.lang.RuntimeException: java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/ has no authority.

Caused by: java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/ has no authority.



[INFO] Running org.apache.phoenix.end2end.ArraysWithNullsIT

[INFO] Running org.apache.phoenix.end2end.AutoCommitIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.25 s - in org.apache.phoenix.end2end.AutoCommitIT

[INFO] Running org.apache.phoenix.end2end.CSVCommonsLoaderIT

[INFO] Running org.apache.phoenix.end2end.BinaryRowKeyIT

[INFO] Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.688 s - in org.apache.phoenix.end2end.ArrayToStringFunctionIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.774 s - in org.apache.phoenix.end2end.BinaryRowKeyIT

[INFO] Running org.apache.phoenix.end2end.CastAndCoerceIT

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.006 s - in org.apache.phoenix.end2end.ArraysWithNullsIT

[INFO] Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.129 s - in org.apache.phoenix.end2end.CSVCommonsLoaderIT

[INFO] Running org.apache.phoenix.end2end.CoalesceFunctionIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.085 s - in org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.CollationKeyFunctionIT

[INFO] Running org.apache.phoenix.end2end.CaseStatementIT

[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.301 s - in org.apache.phoenix.end2end.CoalesceFunctionIT

[INFO] Running org.apache.phoenix.end2end.ColumnEncodedBytesPropIT

[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.677 s - in org.apache.phoenix.end2end.CollationKeyFunctionIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.888 s - in org.apache.phoenix.end2end.ColumnEncodedBytesPropIT

[INFO] Running org.apache.phoenix.end2end.ConcurrentMutationsIT

[INFO] Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.743 s - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT

[INFO] Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT

[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.146 s - in org.apache.phoenix.end2end.ConvertTimezoneFunctionIT

[INFO] Running org.apache.phoenix.end2end.CountDistinctApproximateHyperLogLogIT

[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.729 s - in org.apache.phoenix.end2end.CountDistinctApproximateHyperLogLogIT

[INFO] Running org.apache.phoenix.end2end.CreateSchemaIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.522 s - in org.apache.phoenix.end2end.CreateSchemaIT

[INFO] Running org.apache.phoenix.end2end.CreateTableIT

[WARNING] Tests run: 15, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 139.834 s - in org.apache.phoenix.end2end.ConcurrentMutationsIT

[INFO] Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.125 s - in org.apache.phoenix.end2end.CreateTableIT

[INFO] Running org.apache.phoenix.end2end.CursorWithRowValueConstructorIT

[INFO] Running org.apache.phoenix.end2end.CustomEntityDataIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.839 s - in org.apache.phoenix.end2end.CustomEntityDataIT

[INFO] Running org.apache.phoenix.end2end.DateArithmeticIT

[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.482 s - in org.apache.phoenix.end2end.DateArithmeticIT

[INFO] Running org.apache.phoenix.end2end.DateTimeIT

[INFO] Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.086 s - in org.apache.phoenix.end2end.CursorWithRowValueConstructorIT

[INFO] Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 290.195 s - in org.apache.phoenix.end2end.CastAndCoerceIT

[INFO] Running org.apache.phoenix.end2end.DecodeFunctionIT

[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.532 s - in org.apache.phoenix.end2end.DecodeFunctionIT

[INFO] Running org.apache.phoenix.end2end.DeleteIT

[INFO] Running org.apache.phoenix.end2end.DefaultColumnValueIT

[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.585 s - in org.apache.phoenix.end2end.DefaultColumnValueIT

[INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 374.021 s - in org.apache.phoenix.end2end.CaseStatementIT

[INFO] Running org.apache.phoenix.end2end.DerivedTableIT

[INFO] Running org.apache.phoenix.end2end.DisableLocalIndexIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.574 s - in org.apache.phoenix.end2end.DisableLocalIndexIT

[INFO] Running org.apache.phoenix.end2end.DistinctCountIT

[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.098 s - in org.apache.phoenix.end2end.DistinctCountIT

[INFO] Running org.apache.phoenix.end2end.DistinctPrefixFilterIT

[INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 213.552 s - in org.apache.phoenix.end2end.DateTimeIT

[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 168.052 s - in org.apache.phoenix.end2end.DeleteIT

[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.717 s - in org.apache.phoenix.end2end.DerivedTableIT

[INFO] Running org.apache.phoenix.end2end.DynamicFamilyIT

[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.553 s - in org.apache.phoenix.end2end.DynamicFamilyIT

[INFO] Running org.apache.phoenix.end2end.DynamicUpsertIT

[INFO] Running org.apache.phoenix.end2end.DropTableIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.3 s - in org.apache.phoenix.end2end.DropTableIT

[INFO] Running org.apache.phoenix.end2end.EncodeFunctionIT

[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.846 s - in org.apache.phoenix.end2end.DynamicUpsertIT

[INFO] Running org.apache.phoenix.end2end.EvaluationOfORIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.277 s - in org.apache.phoenix.end2end.EvaluationOfORIT

[INFO] Running org.apache.phoenix.end2end.ExecuteStatementsIT

[INFO] Running org.apache.phoenix.end2end.DynamicColumnIT

[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.409 s - in org.apache.phoenix.end2end.EncodeFunctionIT

[INFO] Running org.apache.phoenix.end2end.ExpFunctionEnd2EndIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.819 s - in org.apache.phoenix.end2end.ExecuteStatementsIT

[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.164 s - in org.apache.phoenix.end2end.DistinctPrefixFilterIT

[INFO] Running org.apache.phoenix.end2end.ExtendedQueryExecIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.835 s - in org.apache.phoenix.end2end.ExpFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.FirstValueFunctionIT

[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.149 s - in org.apache.phoenix.end2end.ExtendedQueryExecIT

[INFO] Running org.apache.phoenix.end2end.ExplainPlanWithStatsDisabledIT

[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.84 s - in org.apache.phoenix.end2end.FirstValueFunctionIT

[INFO] Running org.apache.phoenix.end2end.FlappingAlterTableIT

[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.237 s - in org.apache.phoenix.end2end.DynamicColumnIT

[INFO] Running org.apache.phoenix.end2end.FunkyNamesIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.802 s - in org.apache.phoenix.end2end.FunkyNamesIT

[INFO] Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.FirstValuesFunctionIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.296 s - in org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.286 s - in org.apache.phoenix.end2end.FirstValuesFunctionIT

[INFO] Running org.apache.phoenix.end2end.ImmutableTablePropertiesIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.337 s - in org.apache.phoenix.end2end.FlappingAlterTableIT

[INFO] Running org.apache.phoenix.end2end.InListIT

[INFO] Running org.apache.phoenix.end2end.GroupByIT

[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.577 s - in org.apache.phoenix.end2end.ImmutableTablePropertiesIT

[INFO] Running org.apache.phoenix.end2end.InQueryIT

[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.87 s - in org.apache.phoenix.end2end.ExplainPlanWithStatsDisabledIT

[INFO] Running org.apache.phoenix.end2end.InstrFunctionIT

[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.109 s - in org.apache.phoenix.end2end.InstrFunctionIT

[INFO] Running org.apache.phoenix.end2end.IntArithmeticIT

[INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 373.907 s - in org.apache.phoenix.end2end.InQueryIT

[INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 406.503 s - in org.apache.phoenix.end2end.GroupByIT

[INFO] Running org.apache.phoenix.end2end.IsNullIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.821 s - in org.apache.phoenix.end2end.IsNullIT

[INFO] Running org.apache.phoenix.end2end.LastValuesFunctionIT

[INFO] Running org.apache.phoenix.end2end.LastValueFunctionIT

[INFO] Tests run: 70, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 415.412 s - in org.apache.phoenix.end2end.IntArithmeticIT

[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.897 s - in org.apache.phoenix.end2end.LastValueFunctionIT

[INFO] Running org.apache.phoenix.end2end.LnLogFunctionEnd2EndIT

[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.699 s - in org.apache.phoenix.end2end.LastValuesFunctionIT

[INFO] Running org.apache.phoenix.end2end.MD5FunctionIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.607 s - in org.apache.phoenix.end2end.LnLogFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.MapReduceIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.767 s - in org.apache.phoenix.end2end.MD5FunctionIT

[INFO] Running org.apache.phoenix.end2end.MappingTableDataTypeIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.63 s - in org.apache.phoenix.end2end.MappingTableDataTypeIT

[INFO] Running org.apache.phoenix.end2end.MetaDataEndPointIT

[INFO] Running org.apache.phoenix.end2end.LikeExpressionIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.528 s - in org.apache.phoenix.end2end.MapReduceIT

[INFO] Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.277 s - in org.apache.phoenix.end2end.MinMaxAggregateFunctionIT

[INFO] Running org.apache.phoenix.end2end.ModulusExpressionIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.508 s - in org.apache.phoenix.end2end.MetaDataEndPointIT

[INFO] Running org.apache.phoenix.end2end.MutationStateIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.917 s - in org.apache.phoenix.end2end.MutationStateIT

[INFO] Running org.apache.phoenix.end2end.NamespaceSchemaMappingIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.856 s - in org.apache.phoenix.end2end.NamespaceSchemaMappingIT

[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.231 s - in org.apache.phoenix.end2end.ModulusExpressionIT

[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.319 s - in org.apache.phoenix.end2end.LikeExpressionIT

[INFO] Running org.apache.phoenix.end2end.NotQueryWithLocalImmutableIndexesIT

[INFO] Running org.apache.phoenix.end2end.NativeHBaseTypesIT

[INFO] Running org.apache.phoenix.end2end.NotQueryWithGlobalImmutableIndexesIT

[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.707 s - in org.apache.phoenix.end2end.NativeHBaseTypesIT

[INFO] Running org.apache.phoenix.end2end.NthValueFunctionIT

[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.786 s - in org.apache.phoenix.end2end.NthValueFunctionIT

[INFO] Running org.apache.phoenix.end2end.NullIT

[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 638.896 s - in org.apache.phoenix.end2end.InListIT

[INFO] Running org.apache.phoenix.end2end.NumericArithmeticIT

[INFO] Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.902 s - in org.apache.phoenix.end2end.NumericArithmeticIT

[INFO] Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT

[INFO] Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 170.22 s - in org.apache.phoenix.end2end.NotQueryWithGlobalImmutableIndexesIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.262 s - in org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.OrderByIT

[INFO] Running org.apache.phoenix.end2end.OnDuplicateKeyIT

[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 278.088 s - in org.apache.phoenix.end2end.NotQueryWithLocalImmutableIndexesIT

[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 106.918 s - in org.apache.phoenix.end2end.OrderByIT

[INFO] Running org.apache.phoenix.end2end.PartialScannerResultsDisabledIT

[INFO] Running org.apache.phoenix.end2end.PercentileIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.709 s - in org.apache.phoenix.end2end.PartialScannerResultsDisabledIT

[INFO] Running org.apache.phoenix.end2end.PhoenixRuntimeIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.695 s - in org.apache.phoenix.end2end.PhoenixRuntimeIT

[INFO] Running org.apache.phoenix.end2end.PointInTimeQueryIT

[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.92 s - in org.apache.phoenix.end2end.PercentileIT

[INFO] Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.627 s - in org.apache.phoenix.end2end.PowerFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.PrimitiveTypeIT

[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.349 s - in org.apache.phoenix.end2end.PrimitiveTypeIT

[INFO] Running org.apache.phoenix.end2end.ProductMetricsIT

[INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 332.965 s - in org.apache.phoenix.end2end.NullIT

[INFO] Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT

[INFO] Tests run: 55, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 125.704 s - in org.apache.phoenix.end2end.ProductMetricsIT

[INFO] Tests run: 48, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 305.651 s - in org.apache.phoenix.end2end.OnDuplicateKeyIT

[INFO] Running org.apache.phoenix.end2end.QueryExecWithoutSCNIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.292 s - in org.apache.phoenix.end2end.QueryExecWithoutSCNIT

[INFO] Running org.apache.phoenix.end2end.QueryMoreIT

[INFO] Running org.apache.phoenix.end2end.QueryIT

[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.238 s - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT

[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.421 s - in org.apache.phoenix.end2end.QueryMoreIT

[INFO] Running org.apache.phoenix.end2end.RTrimFunctionIT

[INFO] Running org.apache.phoenix.end2end.QueryWithOffsetIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.551 s - in org.apache.phoenix.end2end.RTrimFunctionIT

[INFO] Running org.apache.phoenix.end2end.RangeScanIT

[INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 252.464 s - in org.apache.phoenix.end2end.PointInTimeQueryIT

[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.565 s - in org.apache.phoenix.end2end.QueryWithOffsetIT

[INFO] Running org.apache.phoenix.end2end.RegexpReplaceFunctionIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.534 s - in org.apache.phoenix.end2end.RegexpReplaceFunctionIT

[INFO] Running org.apache.phoenix.end2end.RegexpSplitFunctionIT

[INFO] Running org.apache.phoenix.end2end.ReadOnlyIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.649 s - in org.apache.phoenix.end2end.ReadOnlyIT

[INFO] Running org.apache.phoenix.end2end.RegexpSubstrFunctionIT

[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.877 s - in org.apache.phoenix.end2end.RegexpSplitFunctionIT

[INFO] Running org.apache.phoenix.end2end.ReverseFunctionIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.535 s - in org.apache.phoenix.end2end.RegexpSubstrFunctionIT

[INFO] Running org.apache.phoenix.end2end.ReverseScanIT

[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.378 s - in org.apache.phoenix.end2end.ReverseFunctionIT

[INFO] Running org.apache.phoenix.end2end.RoundFloorCeilFuncIT

[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.652 s - in org.apache.phoenix.end2end.ReverseScanIT

[INFO] Running org.apache.phoenix.end2end.RowTimestampIT

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.462 s - in org.apache.phoenix.end2end.RowTimestampIT

[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.444 s - in org.apache.phoenix.end2end.RoundFloorCeilFuncIT

[INFO] Running org.apache.phoenix.end2end.RowValueConstructorIT

[INFO] Running org.apache.phoenix.end2end.SequenceBulkAllocationIT

[INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.728 s - in org.apache.phoenix.end2end.SequenceBulkAllocationIT

[INFO] Running org.apache.phoenix.end2end.SequenceIT

[INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 247.33 s - in org.apache.phoenix.end2end.QueryIT

[INFO] Tests run: 55, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.288 s - in org.apache.phoenix.end2end.SequenceIT

[INFO] Running org.apache.phoenix.end2end.ServerExceptionIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.252 s - in org.apache.phoenix.end2end.ServerExceptionIT

[INFO] Running org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT

[INFO] Running org.apache.phoenix.end2end.SerialIteratorsIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.281 s - in org.apache.phoenix.end2end.SerialIteratorsIT

[INFO] Running org.apache.phoenix.end2end.SetPropertyOnNonEncodedTableIT

[INFO] Tests run: 47, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 125.281 s - in org.apache.phoenix.end2end.RowValueConstructorIT

[INFO] Running org.apache.phoenix.end2end.SignFunctionEnd2EndIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.063 s - in org.apache.phoenix.end2end.SignFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.SkipScanAfterManualSplitIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.263 s - in org.apache.phoenix.end2end.SkipScanAfterManualSplitIT

[INFO] Running org.apache.phoenix.end2end.SkipScanQueryIT

[INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 331.874 s - in org.apache.phoenix.end2end.RangeScanIT

[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.893 s - in org.apache.phoenix.end2end.SkipScanQueryIT

[INFO] Running org.apache.phoenix.end2end.SortOrderIT

[INFO] Running org.apache.phoenix.end2end.SortMergeJoinMoreIT

[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.565 s - in org.apache.phoenix.end2end.SortMergeJoinMoreIT

[INFO] Running org.apache.phoenix.end2end.SpooledTmpFileDeleteIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.405 s - in org.apache.phoenix.end2end.SpooledTmpFileDeleteIT

[INFO] Running org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.098 s - in org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT

[INFO] Running org.apache.phoenix.end2end.StatementHintsIT

[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 196.189 s - in org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.283 s - in org.apache.phoenix.end2end.StatementHintsIT

[INFO] Running org.apache.phoenix.end2end.StoreNullsIT

[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 195.996 s - in org.apache.phoenix.end2end.SetPropertyOnNonEncodedTableIT

[INFO] Running org.apache.phoenix.end2end.StddevIT

[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.14 s - in org.apache.phoenix.end2end.StddevIT

[INFO] Running org.apache.phoenix.end2end.StringIT

[INFO] Running org.apache.phoenix.end2end.StoreNullsPropIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.269 s - in org.apache.phoenix.end2end.StoreNullsPropIT

[INFO] Running org.apache.phoenix.end2end.StringToArrayFunctionIT

[INFO] Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.479 s - in org.apache.phoenix.end2end.SortOrderIT

[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.566 s - in org.apache.phoenix.end2end.StringIT

[INFO] Running org.apache.phoenix.end2end.TenantSpecificViewIndexIT

[INFO] Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.518 s - in org.apache.phoenix.end2end.StringToArrayFunctionIT

[INFO] Running org.apache.phoenix.end2end.TenantSpecificViewIndexSaltedIT

[INFO] Running org.apache.phoenix.end2end.TenantIdTypeIT

[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.619 s - in org.apache.phoenix.end2end.TenantSpecificViewIndexSaltedIT

[INFO] Running org.apache.phoenix.end2end.TimezoneOffsetFunctionIT

[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.859 s - in org.apache.phoenix.end2end.TimezoneOffsetFunctionIT

[INFO] Running org.apache.phoenix.end2end.ToCharFunctionIT

[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.815 s - in org.apache.phoenix.end2end.TenantSpecificViewIndexIT

[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.358 s - in org.apache.phoenix.end2end.TenantIdTypeIT

[INFO] Running org.apache.phoenix.end2end.ToNumberFunctionIT

[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 s - in org.apache.phoenix.end2end.ToNumberFunctionIT

[INFO] Running org.apache.phoenix.end2end.TopNIT

[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.017 s - in org.apache.phoenix.end2end.TopNIT

[INFO] Running org.apache.phoenix.end2end.TruncateFunctionIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.239 s - in org.apache.phoenix.end2end.TruncateFunctionIT

[INFO] Running org.apache.phoenix.end2end.UngroupedIT

[INFO] Running org.apache.phoenix.end2end.ToDateFunctionIT

[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.722 s - in org.apache.phoenix.end2end.ToCharFunctionIT

[INFO] Running org.apache.phoenix.end2end.UnionAllIT

[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.831 s - in org.apache.phoenix.end2end.ToDateFunctionIT

[INFO] Running org.apache.phoenix.end2end.UpgradeIT

[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 231.841 s - in org.apache.phoenix.end2end.StoreNullsIT

[INFO] Running org.apache.phoenix.end2end.UpperLowerFunctionIT

[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 s - in org.apache.phoenix.end2end.UpperLowerFunctionIT

[INFO] Running org.apache.phoenix.end2end.UpsertBigValuesIT

[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.205 s - in org.apache.phoenix.end2end.UpsertBigValuesIT

[INFO] Running org.apache.phoenix.end2end.UpsertSelectAutoCommitIT

[INFO] Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.339 s - in org.apache.phoenix.end2end.UnionAllIT

[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.006 s - in org.apache.phoenix.end2end.UpsertSelectAutoCommitIT

[INFO] Running org.apache.phoenix.end2end.UpsertValuesIT

[INFO] Running org.apache.phoenix.end2end.UpsertSelectIT

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.168 s - in org.apache.phoenix.end2end.UpsertValuesIT

[INFO] Running org.apache.phoenix.end2end.UseSchemaIT

[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.394 s - in org.apache.phoenix.end2end.UseSchemaIT

[INFO] Running org.apache.phoenix.end2end.VariableLengthPKIT

[INFO] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 207.421 s - in org.apache.phoenix.end2end.UngroupedIT

[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 224.146 s - in org.apache.phoenix.end2end.UpgradeIT

[INFO] Running org.apache.phoenix.end2end.index.AsyncIndexDisabledIT

[INFO] Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.012 s - in org.apache.phoenix.end2end.UpsertSelectIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.515 s - in org.apache.phoenix.end2end.index.AsyncIndexDisabledIT

[INFO] Running org.apache.phoenix.end2end.index.DropMetadataIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.72 s - in org.apache.phoenix.end2end.index.DropMetadataIT

[INFO] Running org.apache.phoenix.end2end.index.GlobalImmutableNonTxIndexIT

[INFO] Running org.apache.phoenix.end2end.index.ChildViewsUseParentViewIndexIT

[INFO] Running org.apache.phoenix.end2end.index.DropColumnIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.292 s - in org.apache.phoenix.end2end.index.ChildViewsUseParentViewIndexIT

[INFO] Running org.apache.phoenix.end2end.index.GlobalImmutableTxIndexIT

[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.216 s - in org.apache.phoenix.end2end.VariableLengthPKIT

[INFO] Running org.apache.phoenix.end2end.index.GlobalIndexOptimizationIT

[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.306 s - in org.apache.phoenix.end2end.index.GlobalIndexOptimizationIT

[INFO] Running org.apache.phoenix.end2end.index.GlobalMutableNonTxIndexIT

[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 181.641 s - in org.apache.phoenix.end2end.index.GlobalImmutableNonTxIndexIT

[INFO] Running org.apache.phoenix.end2end.index.GlobalMutableTxIndexIT

[ERROR] Tests run: 40, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 198.354 s <<< FAILURE! - in org.apache.phoenix.end2end.index.GlobalImmutableTxIndexIT

[ERROR] testCreateIndexAfterUpsertStarted[GlobalImmutableTxIndexIT_localIndex=false,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.GlobalImmutableTxIndexIT)  Time elapsed: 4.781 s  <<< FAILURE!

java.lang.AssertionError: expected:<4> but was:<3>



[ERROR] testCreateIndexAfterUpsertStartedTxnl[GlobalImmutableTxIndexIT_localIndex=false,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.GlobalImmutableTxIndexIT)  Time elapsed: 4.801 s  <<< FAILURE!

java.lang.AssertionError: expected:<4> but was:<3>



[ERROR] testCreateIndexAfterUpsertStarted[GlobalImmutableTxIndexIT_localIndex=false,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.GlobalImmutableTxIndexIT)  Time elapsed: 4.765 s  <<< FAILURE!

java.lang.AssertionError: expected:<4> but was:<3>



[ERROR] testCreateIndexAfterUpsertStartedTxnl[GlobalImmutableTxIndexIT_localIndex=false,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.GlobalImmutableTxIndexIT)  Time elapsed: 4.784 s  <<< FAILURE!

java.lang.AssertionError: expected:<4> but was:<3>



[INFO] Running org.apache.phoenix.end2end.index.IndexMaintenanceIT

[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 181.364 s - in org.apache.phoenix.end2end.index.GlobalMutableNonTxIndexIT

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 319.405 s - in org.apache.phoenix.end2end.index.DropColumnIT

[INFO] Running org.apache.phoenix.end2end.index.IndexMetadataIT

[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.929 s - in org.apache.phoenix.end2end.index.IndexMaintenanceIT

[INFO] Running org.apache.phoenix.end2end.index.IndexWithTableSchemaChangeIT

[INFO] Running org.apache.phoenix.end2end.index.IndexUsageIT

[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 198.404 s - in org.apache.phoenix.end2end.index.GlobalMutableTxIndexIT

[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.95 s - in org.apache.phoenix.end2end.index.IndexMetadataIT

[INFO] Running org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT

[INFO] Running org.apache.phoenix.end2end.index.LocalImmutableNonTxIndexIT

[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 171.634 s - in org.apache.phoenix.end2end.index.IndexWithTableSchemaChangeIT

[INFO] Running org.apache.phoenix.end2end.index.LocalMutableNonTxIndexIT

[INFO] Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 235.033 s - in org.apache.phoenix.end2end.index.IndexUsageIT

[INFO] Running org.apache.phoenix.end2end.index.LocalMutableTxIndexIT

[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 339.321 s - in org.apache.phoenix.end2end.index.LocalImmutableNonTxIndexIT

[INFO] Running org.apache.phoenix.end2end.index.MutableIndexIT

[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 339.244 s - in org.apache.phoenix.end2end.index.LocalMutableNonTxIndexIT

[INFO] Running org.apache.phoenix.end2end.index.SaltedIndexIT

[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.958 s - in org.apache.phoenix.end2end.index.SaltedIndexIT

[INFO] Running org.apache.phoenix.end2end.index.ViewIndexIT

[WARNING] Tests run: 14, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 80.195 s - in org.apache.phoenix.end2end.index.ViewIndexIT

[INFO] Running org.apache.phoenix.end2end.index.txn.MutableRollbackIT

[ERROR] Tests run: 6, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 1,647.752 s <<< FAILURE! - in org.apache.phoenix.end2end.index.txn.MutableRollbackIT

[ERROR] testCheckpointAndRollback[MutableRollbackIT_localIndex=false](org.apache.phoenix.end2end.index.txn.MutableRollbackIT)  Time elapsed: 4.785 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: java.lang.IllegalArgumentException: Timestamp not allowed in transactional user operations

at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testCheckpointAndRollback(MutableRollbackIT.java:475)

Caused by: java.lang.IllegalArgumentException: Timestamp not allowed in transactional user operations

at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testCheckpointAndRollback(MutableRollbackIT.java:475)



[ERROR] testRollbackOfUncommittedExistingRowKeyIndexUpdate[MutableRollbackIT_localIndex=true](org.apache.phoenix.end2end.index.txn.MutableRollbackIT)  Time elapsed: 548.632 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testRollbackOfUncommittedExistingRowKeyIndexUpdate(MutableRollbackIT.java:219)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testRollbackOfUncommittedExistingRowKeyIndexUpdate(MutableRollbackIT.java:219)



[ERROR] testCheckpointAndRollback[MutableRollbackIT_localIndex=true](org.apache.phoenix.end2end.index.txn.MutableRollbackIT)  Time elapsed: 539.939 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testCheckpointAndRollback(MutableRollbackIT.java:457)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testCheckpointAndRollback(MutableRollbackIT.java:457)



[ERROR] testMultiRollbackOfUncommittedExistingRowKeyIndexUpdate[MutableRollbackIT_localIndex=true](org.apache.phoenix.end2end.index.txn.MutableRollbackIT)  Time elapsed: 539.961 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testMultiRollbackOfUncommittedExistingRowKeyIndexUpdate(MutableRollbackIT.java:354)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testMultiRollbackOfUncommittedExistingRowKeyIndexUpdate(MutableRollbackIT.java:354)



[INFO] Running org.apache.phoenix.end2end.index.txn.RollbackIT

[ERROR] Tests run: 8, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 2,178.52 s <<< FAILURE! - in org.apache.phoenix.end2end.index.txn.RollbackIT

[ERROR] testRollbackOfUncommittedRowKeyIndexInsert[RollbackIT_localIndex=true,mutable=false](org.apache.phoenix.end2end.index.txn.RollbackIT)  Time elapsed: 539.765 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.RollbackIT.testRollbackOfUncommittedRowKeyIndexInsert(RollbackIT.java:128)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.RollbackIT.testRollbackOfUncommittedRowKeyIndexInsert(RollbackIT.java:128)



[ERROR] testRollbackOfUncommittedKeyValueIndexInsert[RollbackIT_localIndex=true,mutable=false](org.apache.phoenix.end2end.index.txn.RollbackIT)  Time elapsed: 540.148 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.RollbackIT.testRollbackOfUncommittedKeyValueIndexInsert(RollbackIT.java:85)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.RollbackIT.testRollbackOfUncommittedKeyValueIndexInsert(RollbackIT.java:85)



[ERROR] testRollbackOfUncommittedRowKeyIndexInsert[RollbackIT_localIndex=true,mutable=true](org.apache.phoenix.end2end.index.txn.RollbackIT)  Time elapsed: 539.998 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.RollbackIT.testRollbackOfUncommittedRowKeyIndexInsert(RollbackIT.java:128)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.RollbackIT.testRollbackOfUncommittedRowKeyIndexInsert(RollbackIT.java:128)



[ERROR] testRollbackOfUncommittedKeyValueIndexInsert[RollbackIT_localIndex=true,mutable=true](org.apache.phoenix.end2end.index.txn.RollbackIT)  Time elapsed: 539.783 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.RollbackIT.testRollbackOfUncommittedKeyValueIndexInsert(RollbackIT.java:85)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,63952,1527193059235,

at org.apache.phoenix.end2end.index.txn.RollbackIT.testRollbackOfUncommittedKeyValueIndexInsert(RollbackIT.java:85)



[INFO] Running org.apache.phoenix.end2end.join.HashJoinCacheIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.659 s - in org.apache.phoenix.end2end.join.HashJoinCacheIT

[INFO] Running org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT

[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 322.576 s - in org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT

[INFO] Running org.apache.phoenix.end2end.join.HashJoinLocalIndexIT

[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 589.149 s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT

[ERROR] Tests run: 120, Failures: 2, Errors: 10, Skipped: 0, Time elapsed: 5,134.342 s <<< FAILURE! - in org.apache.phoenix.end2end.index.MutableIndexIT

[ERROR] testMultipleUpdatesToSingleRow[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=false](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 539.945 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testMultipleUpdatesToSingleRow(MutableIndexIT.java:476)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testMultipleUpdatesToSingleRow(MutableIndexIT.java:476)



[ERROR] testCompoundIndexKey[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=false](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 559.311 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testCompoundIndexKey(MutableIndexIT.java:354)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testCompoundIndexKey(MutableIndexIT.java:354)



[ERROR] testCoveredColumns[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=false](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 540.015 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumns(MutableIndexIT.java:245)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumns(MutableIndexIT.java:245)



[ERROR] testCoveredColumnUpdates[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=false](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 8.92 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumnUpdates(MutableIndexIT.java:144)



[ERROR] testUpsertingNullForIndexedColumns[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=false](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 540.109 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testUpsertingNullForIndexedColumns(MutableIndexIT.java:544)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testUpsertingNullForIndexedColumns(MutableIndexIT.java:544)



[ERROR] testIndexHalfStoreFileReader[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=false](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 8.84 s  <<< ERROR!

java.sql.SQLException:

java.util.concurrent.ExecutionException: java.lang.Exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



at org.apache.phoenix.end2end.index.MutableIndexIT.testIndexHalfStoreFileReader(MutableIndexIT.java:664)

Caused by: java.util.concurrent.ExecutionException:

java.lang.Exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



at org.apache.phoenix.end2end.index.MutableIndexIT.testIndexHalfStoreFileReader(MutableIndexIT.java:664)

Caused by: java.lang.Exception:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more


[ERROR] testMultipleUpdatesToSingleRow[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=true](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 559.67 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testMultipleUpdatesToSingleRow(MutableIndexIT.java:476)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testMultipleUpdatesToSingleRow(MutableIndexIT.java:476)



[ERROR] testCompoundIndexKey[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=true](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 540.007 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testCompoundIndexKey(MutableIndexIT.java:354)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testCompoundIndexKey(MutableIndexIT.java:354)



[ERROR] testCoveredColumns[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=true](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 539.811 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumns(MutableIndexIT.java:245)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumns(MutableIndexIT.java:245)



[ERROR] testCoveredColumnUpdates[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=true](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 8.891 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumnUpdates(MutableIndexIT.java:144)



[ERROR] testUpsertingNullForIndexedColumns[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=true](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 540.112 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testUpsertingNullForIndexedColumns(MutableIndexIT.java:544)

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,64153,1527192830635,

at org.apache.phoenix.end2end.index.MutableIndexIT.testUpsertingNullForIndexedColumns(MutableIndexIT.java:544)



[ERROR] testIndexHalfStoreFileReader[MutableIndexIT_localIndex=true,transactional=OMID,columnEncoded=true](org.apache.phoenix.end2end.index.MutableIndexIT)  Time elapsed: 8.796 s  <<< ERROR!

java.sql.SQLException:

java.util.concurrent.ExecutionException: java.lang.Exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



at org.apache.phoenix.end2end.index.MutableIndexIT.testIndexHalfStoreFileReader(MutableIndexIT.java:664)

Caused by: java.util.concurrent.ExecutionException:

java.lang.Exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



at org.apache.phoenix.end2end.index.MutableIndexIT.testIndexHalfStoreFileReader(MutableIndexIT.java:664)

Caused by: java.lang.Exception:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more



Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:

org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:109)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:80)

at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:8414)

at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8086)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2068)

at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2050)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34954)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)

at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

Caused by: org.apache.thrift.transport.TTransportException: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)

at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)

at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)

at org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:278)

at org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:229)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1028)

at org.apache.tephra.distributed.thrift.TTransaction$TTransactionStandardScheme.read(TTransaction.java:1021)

at org.apache.tephra.distributed.thrift.TTransaction.read(TTransaction.java:921)

at org.apache.thrift.TDeserializer.deserialize(TDeserializer.java:69)

at org.apache.tephra.TransactionCodec.decode(TransactionCodec.java:51)

at org.apache.phoenix.transaction.TephraTransactionContext.<init>(TephraTransactionContext.java:71)

at org.apache.phoenix.transaction.TephraTransactionProvider.getTransactionContext(TephraTransactionProvider.java:64)

at org.apache.phoenix.transaction.TransactionFactory.getTransactionContext(TransactionFactory.java:70)

at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:55)

at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:113)

at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:76)

... 9 more





[INFO] Running org.apache.phoenix.end2end.join.HashJoinMoreIT

[INFO] Running org.apache.phoenix.end2end.join.HashJoinNoIndexIT

[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.381 s - in org.apache.phoenix.end2end.join.HashJoinMoreIT

[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT

[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 186.206 s - in org.apache.phoenix.end2end.join.HashJoinNoIndexIT

[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT

[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 332.655 s - in org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT

[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT

[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 191.594 s - in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT

[INFO] Running org.apache.phoenix.end2end.join.SubqueryIT

[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 589.197 s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT

[INFO] Running org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT

[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 228.221 s - in org.apache.phoenix.end2end.join.SubqueryIT

[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT

[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.601 s - in org.apache.phoenix.end2end.salted.SaltedTableIT

[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT

[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.345 s - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT

[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT

[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.255 s - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT

[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.603 s - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT

[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT

[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.477 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT

[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 s - in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT

[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT

[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 163.394 s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT

[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.635 s - in org.apache.phoenix.rpc.UpdateCacheIT

[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT

[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT

[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.281 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT

[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT

[ERROR] Tests run: 4, Failures: 2, Errors: 1, Skipped: 0, Time elapsed: 10.718 s <<< FAILURE! - in org.apache.phoenix.tx.FlappingTransactionIT

[ERROR] testInflightUpdateNotSeen(org.apache.phoenix.tx.FlappingTransactionIT)  Time elapsed: 2.589 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.FlappingTransactionIT.testInflightUpdateNotSeen(FlappingTransactionIT.java:140)



[ERROR] testExternalTxContext(org.apache.phoenix.tx.FlappingTransactionIT)  Time elapsed: 3.358 s  <<< ERROR!

java.sql.SQLException: ERROR 1092 (44A23): Cannot mix transaction providers:  TEPHRA and OMID

at org.apache.phoenix.tx.FlappingTransactionIT.testExternalTxContext(FlappingTransactionIT.java:241)



[ERROR] testInflightDeleteNotSeen(org.apache.phoenix.tx.FlappingTransactionIT)  Time elapsed: 2.387 s  <<< FAILURE!

java.lang.AssertionError: expected:<2> but was:<1>

at org.apache.phoenix.tx.FlappingTransactionIT.testInflightDeleteNotSeen(FlappingTransactionIT.java:193)



[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT

[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.098 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT

[INFO] Running org.apache.phoenix.tx.TransactionIT

[ERROR] Tests run: 8, Failures: 0, Errors: 8, Skipped: 0, Time elapsed: 0.003 s <<< FAILURE! - in org.apache.phoenix.tx.TransactionIT

[ERROR] testProperties[TransactionIT_provider=TEPHRA](org.apache.phoenix.tx.TransactionIT)  Time elapsed: 0 s  <<< ERROR!

java.lang.IllegalArgumentException: wrong number of arguments



[ERROR] testRowTimestampDisabled[TransactionIT_provider=TEPHRA](org.apache.phoenix.tx.TransactionIT)  Time elapsed: 0 s  <<< ERROR!

java.lang.IllegalArgumentException: wrong number of arguments



[ERROR] testReCreateTxnTableAfterDroppingExistingNonTxnTable[TransactionIT_provider=TEPHRA](org.apache.phoenix.tx.TransactionIT)  Time elapsed: 0 s  <<< ERROR!

java.lang.IllegalArgumentException: wrong number of arguments



[ERROR] testCheckpointAndRollback[TransactionIT_provider=TEPHRA](org.apache.phoenix.tx.TransactionIT)  Time elapsed: 0 s  <<< ERROR!

java.lang.IllegalArgumentException: wrong number of arguments



[ERROR] testQueryWithSCN[TransactionIT_provider=TEPHRA](org.apache.phoenix.tx.TransactionIT)  Time elapsed: 0 s  <<< ERROR!

java.lang.IllegalArgumentException: wrong number of arguments



[ERROR] testTransactionalTableMetadata[TransactionIT_provider=TEPHRA](org.apache.phoenix.tx.TransactionIT)  Time elapsed: 0 s  <<< ERROR!

java.lang.IllegalArgumentException: wrong number of arguments



[ERROR] testColConflicts[TransactionIT_provider=TEPHRA](org.apache.phoenix.tx.TransactionIT)  Time elapsed: 0.001 s  <<< ERROR!

java.lang.IllegalArgumentException: wrong number of arguments



[ERROR] testOnDupKeyForTransactionalTable[TransactionIT_provider=TEPHRA](org.apache.phoenix.tx.TransactionIT)  Time elapsed: 0 s  <<< ERROR!

java.lang.IllegalArgumentException: wrong number of arguments



[INFO] Running org.apache.phoenix.tx.TxCheckpointIT

[ERROR] Tests run: 52, Failures: 8, Errors: 4, Skipped: 0, Time elapsed: 178.933 s <<< FAILURE! - in org.apache.phoenix.tx.ParameterizedTransactionIT

[ERROR] testCreateTableToBeTransactional[TransactionIT_mutable=false,columnEncoded=false](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 2.283 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.ParameterizedTransactionIT.testCreateTableToBeTransactional(ParameterizedTransactionIT.java:379)



[ERROR] testNonTxToTxTableFailure[TransactionIT_mutable=false,columnEncoded=false](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 8.725 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTableFailure(ParameterizedTransactionIT.java:349)



[ERROR] testNonTxToTxTable[TransactionIT_mutable=false,columnEncoded=false](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 4.514 s  <<< ERROR!

java.sql.SQLException: ERROR 1093 (44A24): Cannot alter table from non transactional to transactional for  OMID.  tableName=T001797

at org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:282)



[ERROR] testCreateTableToBeTransactional[TransactionIT_mutable=false,columnEncoded=true](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 2.256 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.ParameterizedTransactionIT.testCreateTableToBeTransactional(ParameterizedTransactionIT.java:379)



[ERROR] testNonTxToTxTableFailure[TransactionIT_mutable=false,columnEncoded=true](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 8.708 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTableFailure(ParameterizedTransactionIT.java:349)



[ERROR] testNonTxToTxTable[TransactionIT_mutable=false,columnEncoded=true](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 4.502 s  <<< ERROR!

java.sql.SQLException: ERROR 1093 (44A24): Cannot alter table from non transactional to transactional for  OMID.  tableName=T001815

at org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:282)



[ERROR] testCreateTableToBeTransactional[TransactionIT_mutable=true,columnEncoded=false](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 2.246 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.ParameterizedTransactionIT.testCreateTableToBeTransactional(ParameterizedTransactionIT.java:379)



[ERROR] testNonTxToTxTableFailure[TransactionIT_mutable=true,columnEncoded=false](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 8.672 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTableFailure(ParameterizedTransactionIT.java:349)



[ERROR] testNonTxToTxTable[TransactionIT_mutable=true,columnEncoded=false](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 4.522 s  <<< ERROR!

java.sql.SQLException: ERROR 1093 (44A24): Cannot alter table from non transactional to transactional for  OMID.  tableName=T001833

at org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:282)



[ERROR] testCreateTableToBeTransactional[TransactionIT_mutable=true,columnEncoded=true](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 2.254 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.ParameterizedTransactionIT.testCreateTableToBeTransactional(ParameterizedTransactionIT.java:379)



[ERROR] testNonTxToTxTableFailure[TransactionIT_mutable=true,columnEncoded=true](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 8.701 s  <<< FAILURE!

java.lang.AssertionError

at org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTableFailure(ParameterizedTransactionIT.java:349)



[ERROR] testNonTxToTxTable[TransactionIT_mutable=true,columnEncoded=true](org.apache.phoenix.tx.ParameterizedTransactionIT)  Time elapsed: 4.5 s  <<< ERROR!

java.sql.SQLException: ERROR 1093 (44A24): Cannot alter table from non transactional to transactional for  OMID.  tableName=T001851

at org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:282)



[INFO] Running org.apache.phoenix.util.IndexScrutinyIT

[ERROR] Tests run: 40, Failures: 20, Errors: 12, Skipped: 0, Time elapsed: 6,794.417 s <<< FAILURE! - in org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT

[ERROR] testIndexWithDecimalCol[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 9.434 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithNullableFixedWithCols[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.86 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithNullableDateCol[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.84 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithCaseSensitiveCols[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 540.509 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testSelectAllAndAliasWithIndex[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 539.735 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testDeleteFromAllPKColumnIndex[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.459 s  <<< FAILURE!

java.lang.AssertionError: expected:<3> but was:<0>



[ERROR] testMultipleUpdatesAcrossRegions[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 559.478 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testSelectDistinctOnTableWithSecondaryImmutableIndex[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.884 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testCreateIndexAfterUpsertStarted[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.975 s  <<< FAILURE!

java.lang.AssertionError: expected:<4> but was:<0>



[ERROR] testCreateIndexAfterUpsertStartedTxnl[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.946 s  <<< FAILURE!

java.lang.AssertionError: expected:<4> but was:<0>



[ERROR] testInClauseWithIndexOnColumnOfUsignedIntType[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.877 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testDeleteFromNonPKColumnIndex[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.854 s  <<< FAILURE!

java.lang.AssertionError: expected:<3> but was:<0>



[ERROR] testGroupByCount[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.832 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testSelectCF[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 539.864 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testUpsertAfterIndexDrop[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 539.482 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testReturnedTimestamp[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 539.823 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testIndexWithDecimalCol[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.924 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithNullableFixedWithCols[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.895 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithNullableDateCol[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.842 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithCaseSensitiveCols[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 559.773 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testSelectAllAndAliasWithIndex[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 539.695 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testDeleteFromAllPKColumnIndex[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.893 s  <<< FAILURE!

java.lang.AssertionError: expected:<3> but was:<0>



[ERROR] testMultipleUpdatesAcrossRegions[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 560.102 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testSelectDistinctOnTableWithSecondaryImmutableIndex[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.844 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testCreateIndexAfterUpsertStarted[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.889 s  <<< FAILURE!

java.lang.AssertionError: expected:<4> but was:<0>



[ERROR] testCreateIndexAfterUpsertStartedTxnl[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.9 s  <<< FAILURE!

java.lang.AssertionError: expected:<4> but was:<0>



[ERROR] testInClauseWithIndexOnColumnOfUsignedIntType[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.845 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testDeleteFromNonPKColumnIndex[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.829 s  <<< FAILURE!

java.lang.AssertionError: expected:<3> but was:<0>



[ERROR] testGroupByCount[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 8.865 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testSelectCF[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 539.648 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testUpsertAfterIndexDrop[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 539.803 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[ERROR] testReturnedTimestamp[LocalImmutableTxIndexIT_localIndex=true,mutable=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalImmutableTxIndexIT)  Time elapsed: 539.528 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,49254,1527192373433,



[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.689 s - in org.apache.phoenix.util.IndexScrutinyIT

[ERROR] Tests run: 40, Failures: 16, Errors: 16, Skipped: 0, Time elapsed: 8,940.178 s <<< FAILURE! - in org.apache.phoenix.end2end.index.LocalMutableTxIndexIT

[ERROR] testIndexWithDecimalCol[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 9.754 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithNullableFixedWithCols[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.902 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithNullableDateCol[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.874 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithCaseSensitiveCols[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 539.94 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testSelectAllAndAliasWithIndex[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 559.906 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testDeleteFromAllPKColumnIndex[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 9.035 s  <<< FAILURE!

java.lang.AssertionError: expected:<3> but was:<0>



[ERROR] testMultipleUpdatesAcrossRegions[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 539.442 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testSelectDistinctOnTableWithSecondaryImmutableIndex[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.887 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testCreateIndexAfterUpsertStarted[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 540.094 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testCreateIndexAfterUpsertStartedTxnl[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 540.107 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testInClauseWithIndexOnColumnOfUsignedIntType[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.966 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testDeleteFromNonPKColumnIndex[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.927 s  <<< FAILURE!

java.lang.AssertionError: expected:<3> but was:<0>



[ERROR] testGroupByCount[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.897 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testSelectCF[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 559.977 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testUpsertAfterIndexDrop[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 539.919 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testReturnedTimestamp[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 540.388 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testIndexWithDecimalCol[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.893 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithNullableFixedWithCols[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.882 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithNullableDateCol[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.873 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testIndexWithCaseSensitiveCols[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 539.569 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testSelectAllAndAliasWithIndex[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 559.383 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testDeleteFromAllPKColumnIndex[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.881 s  <<< FAILURE!

java.lang.AssertionError: expected:<3> but was:<0>



[ERROR] testMultipleUpdatesAcrossRegions[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 539.504 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testSelectDistinctOnTableWithSecondaryImmutableIndex[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.837 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testCreateIndexAfterUpsertStarted[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 539.711 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testCreateIndexAfterUpsertStartedTxnl[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 540.179 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testInClauseWithIndexOnColumnOfUsignedIntType[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.883 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testDeleteFromNonPKColumnIndex[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.848 s  <<< FAILURE!

java.lang.AssertionError: expected:<3> but was:<0>



[ERROR] testGroupByCount[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 8.831 s  <<< FAILURE!

java.lang.AssertionError



[ERROR] testSelectCF[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 559.586 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testUpsertAfterIndexDrop[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 539.841 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,



[ERROR] testReturnedTimestamp[LocalMutableTxIndexIT_localIndex=true,mutable=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.LocalMutableTxIndexIT)  Time elapsed: 540.084 s  <<< ERROR!

org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: 10.0.0.3,52170,1527192655864,
{code}
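
Many of the failures above surface as a CommitException caused by RetriesExhaustedWithDetailsException, each burning roughly 540 seconds while the client retried a plain server-side IOException; by contrast, the addServerCache trace fails fast because the error is wrapped as a DoNotRetryIOException. As a rough, hypothetical sketch (the helper name below is illustrative, not actual Phoenix code), the fail-fast pattern for a coprocessor endpoint looks like this:
{code:java}
// Hypothetical sketch only: wrap any failure thrown inside a coprocessor
// endpoint as a DoNotRetryIOException so the HBase client surfaces it
// immediately instead of retrying until retries are exhausted.
import java.io.IOException;

import org.apache.hadoop.hbase.DoNotRetryIOException;

public final class NonRetriableErrors {

    private NonRetriableErrors() {
    }

    /** Wrap t as a non-retriable IOException, unless it already is one. */
    public static IOException asNonRetriable(String operation, Throwable t) {
        if (t instanceof DoNotRetryIOException) {
            return (DoNotRetryIOException) t;
        }
        return new DoNotRetryIOException(operation + " failed", t);
    }

    // Example usage inside an endpoint method:
    //
    //   try {
    //       doServerSideWork();   // e.g. deserializing the transaction state
    //   } catch (Throwable t) {
    //       throw NonRetriableErrors.asNonRetriable("addServerCache", t);
    //   }
}
{code}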

> Integrate Omid with Phoenix
> ---------------------------
>
>                 Key: PHOENIX-3623
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3623
>             Project: Phoenix
>          Issue Type: New Feature
>            Reporter: Ohad Shacham
>            Assignee: Ohad Shacham
>            Priority: Major
>
> The purpose of this Jira is to propose a work plan for connecting Omid to Phoenix.
> Each of the following tasks will be handled in a separate sub-Jira. Subtasks 4.* relate to augmenting Omid to support features required by Phoenix; their corresponding Jiras will therefore appear under Omid rather than under Phoenix.
> Each task is completed by a commit.
> Task 1: Add a transaction abstraction layer (TAL) - Currently, Tephra calls are embedded directly in the Phoenix code. To support both Omid and Tephra, we need an abstraction layer that can later be wired to both providers. The first task is to define such an interface (a rough, illustrative sketch appears after this description).
> Task 2: Implement TAL functionality for Tephra. 
> Task 3: Refactor Phoenix to use TAL instead of direct calls to Tephra.
> Task 4: Implement the Omid features required by Phoenix:
> Task 4.1: Add checkpoints to Omid. A checkpoint is a point in a transaction such that writes made after the checkpoint are not visible to the transaction itself. This feature is explained in [TEPHRA-96].
> Task 4.2: Add an option to mark a key as non-conflicting. The motivation is to reduce the size of the write set the transaction manager needs at commit time, as well as to reduce the conflict-detection work.
> Task 4.3: Add support for transactions that never abort. Such transactions only cause other in-flight transactions to abort, and abort themselves only on a transaction manager failure.
> These transactions are needed for ‘create index’; the scenario was discussed in [TEPHRA-157] and [PHOENIX-2478]. Augmenting Omid with this kind of transaction was also discussed in [OMID-56].
> Task 4.4: Add support for returning multiple versions in a scan. The use case is described in [TEPHRA-134].
> Task 4.5: Change Omid's timestamp mechanism to return real-time-based timestamps while preserving monotonicity.
> Task 5: Implement TAL functionality for Omid.
> Task 6: Implement performance tests and tune Omid for Phoenix use. This task requires an understanding of common Phoenix usage scenarios as well as defining the trade-off between throughput and latency.
> Could you please review the proposed work plan?
> Also, could you please let me know whether I missed any augmentation needed for Omid in order to support Phoenix operations?
> I opened a Jira, [OMID-82], that encapsulates all Omid-related development for Phoenix.
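
For illustration only, a minimal sketch of what such a transaction abstraction layer could look like follows; the interface and method names are hypothetical and not the actual TAL defined by the patch. A Tephra-backed provider would presumably delegate these calls to Tephra's transaction client, and an Omid-backed provider to Omid's transaction manager.
{code:java}
// Hypothetical sketch of a transaction abstraction layer (TAL) that both a
// Tephra-backed and an Omid-backed provider could implement. All names are
// illustrative; the real Phoenix TAL may differ.
import java.sql.SQLException;

public interface TransactionAbstractionLayer {

    /** Begin a new transaction and return its provider-assigned id (start timestamp). */
    long begin() throws SQLException;

    /** Commit; conflicts or provider errors surface as SQLException. */
    void commit() throws SQLException;

    /** Roll back the current transaction. */
    void abort() throws SQLException;

    /** Task 4.1: after a checkpoint, the transaction no longer sees its own later writes. */
    void checkpoint() throws SQLException;

    /** Task 4.2: exclude a row key from conflict detection to shrink the commit-time write set. */
    void markNonConflicting(byte[] rowKey);
}
{code}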



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)