Posted to dev@phoenix.apache.org by "Thomas D'Silva (JIRA)" <ji...@apache.org> on 2016/08/24 07:16:21 UTC

[jira] [Updated] (PHOENIX-3203) Upserting rows to a table with a mutable index using a tenant specific connection fails

     [ https://issues.apache.org/jira/browse/PHOENIX-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas D'Silva updated PHOENIX-3203:
------------------------------------
    Description: 
In ServerCachingEndpointImpl.addServerCache we look up the tenant cache using an ImmutableBytesPtr instead of an ImmutableBytesWritable:

{code}
ImmutableBytesPtr tenantId = null;
if (request.hasTenantId()) {
    tenantId = new ImmutableBytesPtr(request.getTenantId().toByteArray());
}
TenantCache tenantCache = GlobalCache.getTenantCache(this.env, tenantId);
{code}
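
The failure mode can be sketched with a self-contained example. The classes below are hypothetical stand-ins, not the real HBase/Phoenix ImmutableBytesWritable and ImmutableBytesPtr: two key types wrapping the same bytes but computing hashCode differently. A hash-based cache populated under one key type then misses when probed with the other, even though equals() would consider the keys equal:

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a bytes-wrapping key whose hashCode is
// derived directly from the wrapped bytes.
class BytesKey {
    final byte[] bytes;
    BytesKey(byte[] bytes) { this.bytes = bytes; }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
    @Override public boolean equals(Object o) {
        return o instanceof BytesKey && Arrays.equals(bytes, ((BytesKey) o).bytes);
    }
}

// Hypothetical stand-in for a second key type over the same bytes, whose
// (cached) hashCode is computed differently on purpose.
class BytesPtrKey extends BytesKey {
    private final int cachedHash;
    BytesPtrKey(byte[] bytes) {
        super(bytes);
        this.cachedHash = 31 * Arrays.hashCode(bytes) + 1; // deliberately different
    }
    @Override public int hashCode() { return cachedHash; }
}

public class CacheKeyMismatch {
    public static void main(String[] args) {
        Map<BytesKey, String> tenantCaches = new HashMap<>();
        byte[] tenantId = "tenant1".getBytes();
        tenantCaches.put(new BytesKey(tenantId), "tenant1-cache");
        // Same bytes, different key type -> different hashCode -> cache miss.
        System.out.println(tenantCaches.get(new BytesPtrKey(tenantId))); // null
        // Same key type as the put -> hit.
        System.out.println(tenantCaches.get(new BytesKey(tenantId)));    // tenant1-cache
    }
}
{code}

Because HashMap compares stored hash values before calling equals(), a key type whose hashCode disagrees with the one used at insertion time can never find the entry, which is the same shape of bug as mixing the two key types above.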

  was:
With the following exception

org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-2923123348037284635 region=T_1466804698521,,1466804698532.3d1ab071438dad421af3f78e8af3530d. Index update failed
	at org.apache.phoenix.execute.MutationState.send(MutationState.java:984)
	at org.apache.phoenix.execute.MutationState.send(MutationState.java:1317)
	at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1149)
	at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:524)
	at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:1)
	at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
	at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:521)
	at org.apache.phoenix.end2end.index.MutableIndexIT.testTenantSpecificConnection(MutableIndexIT.java:671)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-2923123348037284635 region=T_1466804698521,,1466804698532.3d1ab071438dad421af3f78e8af3530d. Index update failed
	at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:452)
	at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
	at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129)
	at org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
	at org.apache.phoenix.execute.MutationState.send(MutationState.java:963)
	... 43 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 4 actions: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-2923123348037284635 region=T_1466804698521,,1466804698532.3d1ab071438dad421af3f78e8af3530d. Index update failed
	at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:79)
	at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:53)
	at org.apache.phoenix.index.PhoenixIndexMetaData.getIndexMetaData(PhoenixIndexMetaData.java:81)
	at org.apache.phoenix.index.PhoenixIndexMetaData.<init>(PhoenixIndexMetaData.java:89)
	at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:53)
	at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:121)
	at org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
	at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:202)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1013)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1656)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1733)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1688)
	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1009)
	at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2570)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2350)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2305)
	at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2309)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4613)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3780)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3667)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31198)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.  key=-2923123348037284635 region=T_1466804698521,,1466804698532.3d1ab071438dad421af3f78e8af3530d.
	at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:452)
	at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
	at org.apache.phoenix.index.PhoenixIndexMetaData.getIndexMetaData(PhoenixIndexMetaData.java:80)
	... 23 more
: 4 times, 
	at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:205)
	at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:189)
	at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:1042)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2478)
	at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:888)
	at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:903)
	at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:878)
	at org.apache.phoenix.execute.MutationState.send(MutationState.java:951)
	... 43 more



To repro: 

{code}

@Test
public void testTenantSpecificConnection() throws Exception {
    String tableName = TestUtil.DEFAULT_DATA_TABLE_NAME + "_" + System.currentTimeMillis();
    String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        conn.setAutoCommit(false);
        // create the multi-tenant data table
        conn.createStatement().execute(
            "CREATE TABLE IF NOT EXISTS " + fullTableName +
            "(TENANT_ID CHAR(15) NOT NULL," +
            "TYPE VARCHAR(25) NOT NULL," +
            "ENTITY_ID CHAR(15) NOT NULL," +
            "CONSTRAINT PK_CONSTRAINT PRIMARY KEY (TENANT_ID, TYPE, ENTITY_ID)) MULTI_TENANT=TRUE "
            + (!tableDDLOptions.isEmpty() ? "," + tableDDLOptions : ""));
        // create a mutable index on the data table
        conn.createStatement().execute("CREATE INDEX IF NOT EXISTS IDX ON " + fullTableName + " (ENTITY_ID, TYPE)");

        // upsert rows through a tenant-specific connection
        String dml = "UPSERT INTO " + fullTableName + " (ENTITY_ID, TYPE) VALUES (?, ?)";
        props.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, "tenant1");
        try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
            for (int i = 0; i < 4; ++i) {
                PreparedStatement stmt = tenantConn.prepareStatement(dml);
                stmt.setString(1, "00000000000000" + String.valueOf(i));
                stmt.setString(2, String.valueOf(i));
                assertEquals(1, stmt.executeUpdate());
            }
            tenantConn.commit();
        }
    }
}

{code}


> Upserting rows to a table with a mutable index using a tenant specific connection fails
> ---------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-3203
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3203
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>            Reporter: Thomas D'Silva
>            Assignee: Thomas D'Silva
>             Fix For: 4.8.0
>
>
> In ServerCachingEndpointImpl.addServerCache we look up the tenant cache using an ImmutableBytesPtr instead of an ImmutableBytesWritable:
> {code}
> ImmutableBytesPtr tenantId = null;
> if (request.hasTenantId()) {
>     tenantId = new ImmutableBytesPtr(request.getTenantId().toByteArray());
> }
> TenantCache tenantCache = GlobalCache.getTenantCache(this.env, tenantId);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)