Posted to dev@phoenix.apache.org by "Geoffrey Jacoby (Jira)" <ji...@apache.org> on 2022/06/21 19:15:00 UTC

[jira] [Resolved] (PHOENIX-6725) ConcurrentMutationException when adding column to table/view

     [ https://issues.apache.org/jira/browse/PHOENIX-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Geoffrey Jacoby resolved PHOENIX-6725.
--------------------------------------
    Resolution: Fixed

Merged to master. Thanks [~lokiore]

> ConcurrentMutationException when adding column to table/view
> ------------------------------------------------------------
>
>                 Key: PHOENIX-6725
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6725
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.1.0, 5.1.1, 4.16.0, 4.16.1, 5.1.2
>            Reporter: Tanuj Khurana
>            Assignee: Lokesh Khurana
>            Priority: Major
>
> I have a single-threaded workflow, but occasionally I hit a ConcurrentMutationException when adding a column to a table/view:
> Stack trace:
> {code:java}
>  2022-05-04 16:41:24,598 WARN  [main] client.ConnectionManager$HConnectionImplementation: Checking master connection
> com.google.protobuf.ServiceException: java.io.IOException: Call to tkhurana-ltm.internal.salesforce.com:16000 failed on local exception: java.io.IOException: Operation timed out
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:340)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:95)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:588)
> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java)
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1551)
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2274)
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1823)
> at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:141)
> at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4552)
> at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:564)
> at org.apache.hadoop.hbase.client.HTable.getTableDescriptor(HTable.java:585)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:531)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.separateAndValidateProperties(ConnectionQueryServicesImpl.java:2769)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:2298)
> at org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:4146)
> at org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3772)
> at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1487)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:206)
> Caused by: java.io.IOException: Call to xxxxx failed on local exception: java.io.IOException: Operation timed out	
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)	
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:394)	
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)	
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415)	
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411)	
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)	
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)	
> at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.closeConn(BlockingRpcConnection.java:685)	
> at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:651)	
> at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.run(BlockingRpcConnection.java:343)	
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Operation timed out	
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)	
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)	
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)	
> at sun.nio.ch.IOUtil.read(IOUtil.java:197)	
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)	
> at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)	
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)	
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)	
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)	
> at java.io.FilterInputStream.read(FilterInputStream.java:133)	
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)	
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265)	
> at java.io.DataInputStream.readInt(DataInputStream.java:387)	
> at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:583)
> ... 2 more
> 2022-05-04 16:44:14,422 ERROR [main] query.ConnectionQueryServicesImpl: 46146@xxxx failed to acquire mutex for  tenantId : null schemaName : READONLYDB tableName : READ_ONLY_AUTH_SESSION columnName : null familyName : null
> org.apache.phoenix.schema.ConcurrentTableMutationException: ERROR 301 (23000): Concurrent modification to table. tableName=READONLYDB.READ_ONLY_AUTH_SESSION	
> at org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:4131)	
> at org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3772)	
> at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1487)	
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)	
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)	
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)	
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)	
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)	
> at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:206)
> {code}
>  
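> The trace above suggests the first ALTER attempt timed out at the HBase RPC layer while Phoenix still held its schema mutex, so the immediate retry failed with ERROR 301 (23000). As a client-side workaround, a minimal retry sketch over plain Phoenix JDBC is below; the JDBC URL, the added column name, and the retry/backoff parameters are hypothetical, only the table name and the 301 error code come from the log above:
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.SQLException;
> import java.sql.Statement;
>
> public class AddColumnWithRetry {
>     // ERROR 301 (23000): Concurrent modification to table -- the code seen in the log above
>     private static final int CONCURRENT_MUTATION_ERROR_CODE = 301;
>
>     public static void main(String[] args) throws Exception {
>         // Hypothetical ZooKeeper quorum; substitute your own cluster
>         try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
>             // NEW_COL is a hypothetical column name for illustration
>             String ddl = "ALTER TABLE READONLYDB.READ_ONLY_AUTH_SESSION ADD NEW_COL VARCHAR";
>             for (int attempt = 1; ; attempt++) {
>                 try (Statement stmt = conn.createStatement()) {
>                     stmt.executeUpdate(ddl);
>                     break; // column added successfully
>                 } catch (SQLException e) {
>                     if (e.getErrorCode() == CONCURRENT_MUTATION_ERROR_CODE && attempt < 3) {
>                         Thread.sleep(1000L * attempt); // back off, then retry the DDL
>                     } else {
>                         throw e; // some other failure, or retries exhausted
>                     }
>                 }
>             }
>         }
>     }
> }
> {code}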



--
This message was sent by Atlassian Jira
(v8.20.7#820007)