Posted to issues@hive.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2020/06/09 16:35:01 UTC

[jira] [Work logged] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when altering the same partition concurrently

     [ https://issues.apache.org/jira/browse/HIVE-16839?focusedWorklogId=443130&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-443130 ]

ASF GitHub Bot logged work on HIVE-16839:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Jun/20 16:34
            Start Date: 09/Jun/20 16:34
    Worklog Time Spent: 10m 
      Work Description: github-actions[bot] commented on pull request #484:
URL: https://github.com/apache/hive/pull/484#issuecomment-641144228


   This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the dev@hive.apache.org list if the patch is in need of reviews.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 443130)
    Remaining Estimate: 0h
            Time Spent: 10m

> Unbalanced calls to openTransaction/commitTransaction when altering the same partition concurrently
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-16839
>                 URL: https://issues.apache.org/jira/browse/HIVE-16839
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.13.1, 1.1.0, 3.0.0, 2.3.4
>            Reporter: Nemon Lou
>            Assignee: Guang Yang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0, 3.2.0
>
>         Attachments: HIVE-16839.01.patch, HIVE-16839.02.patch, HIVE-16839.03.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
> Preparation:
> {noformat}
>  hdfs dfs -mkdir -p /hzsrc/external/writing_dc/ltgsm/16e7a9b2-21a1-3f4f-8061-bc3395281627
>  1. CREATE EXTERNAL TABLE tb_ltgsm_external (id int) PARTITIONED BY (cp string, ld string);
> {noformat}
> In one Beeline session, run these two statements many times:
> {noformat}
>  2. ALTER TABLE tb_ltgsm_external ADD IF NOT EXISTS PARTITION (cp=2017060513,ld=2017060610);
>  3. ALTER TABLE tb_ltgsm_external PARTITION (cp=2017060513,ld=2017060610) SET LOCATION 'hdfs://hacluster/hzsrc/external/writing_dc/ltgsm/16e7a9b2-21a1-3f4f-8061-bc3395281627';
> {noformat}
> In another Beeline session, run this statement many times at the same time:
> {noformat}
>  4. ALTER TABLE tb_ltgsm_external DROP PARTITION (cp=2017060513,ld=2017060610);
> {noformat}
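> For reference, the same race can be driven from a single program instead of two Beeline terminals. The sketch below is a hypothetical reproduction driver, not part of the original report: the JDBC URL, empty credentials, and loop count are placeholders, and it assumes the Hive JDBC driver is on the classpath.
> {noformat}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.Statement;
>
> // Two concurrent "sessions" hammering the same partition, mirroring the Beeline loops above.
> public class Hive16839Repro {
>     private static final String URL = "jdbc:hive2://localhost:10000/default"; // placeholder
>
>     public static void main(String[] args) throws Exception {
>         Thread alterLoop = new Thread(() -> run(
>             "ALTER TABLE tb_ltgsm_external ADD IF NOT EXISTS PARTITION (cp=2017060513,ld=2017060610)",
>             "ALTER TABLE tb_ltgsm_external PARTITION (cp=2017060513,ld=2017060610) SET LOCATION "
>                 + "'hdfs://hacluster/hzsrc/external/writing_dc/ltgsm/16e7a9b2-21a1-3f4f-8061-bc3395281627'"));
>         Thread dropLoop = new Thread(() -> run(
>             "ALTER TABLE tb_ltgsm_external DROP PARTITION (cp=2017060513,ld=2017060610)"));
>         alterLoop.start();
>         dropLoop.start();
>         alterLoop.join();
>         dropLoop.join();
>     }
>
>     // Each session opens its own connection and repeats its statements,
>     // swallowing the expected failures so the race keeps running.
>     private static void run(String... statements) {
>         try (Connection conn = DriverManager.getConnection(URL, "", "");
>              Statement stmt = conn.createStatement()) {
>             for (int i = 0; i < 1000; i++) {
>                 for (String sql : statements) {
>                     try {
>                         stmt.execute(sql);
>                     } catch (Exception expected) {
>                         // concurrent ADD/ALTER/DROP conflicts are expected; keep looping
>                     }
>                 }
>             }
>         } catch (Exception e) {
>             e.printStackTrace();
>         }
>     }
> }
> {noformat}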
> MetaStore logs:
> {noformat}
> 2017-06-06 21:58:34,213 | ERROR | pool-6-thread-197 | Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDOObjectNotFoundException: No such database row
> FailedObject:49[OID]org.apache.hadoop.hive.metastore.model.MStorageDescriptor
> 	at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:475)
> 	at org.datanucleus.api.jdo.JDOAdapter.getApiExceptionForNucleusException(JDOAdapter.java:1158)
> 	at org.datanucleus.state.JDOStateManager.isLoaded(JDOStateManager.java:3231)
> 	at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoGetcd(MStorageDescriptor.java)
> 	at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.getCD(MStorageDescriptor.java:184)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1282)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1299)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.convertToPart(ObjectStore.java:1680)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:1586)
> 	at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
> 	at com.sun.proxy.$Proxy0.getPartition(Unknown Source)
> 	at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:538)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions(HiveMetaStore.java:3317)
> 	at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
> 	at com.sun.proxy.$Proxy12.alter_partitions(Unknown Source)
> 	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:9963)
> 	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:9947)
> 	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
> 	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> NestedThrowablesStackTrace:
> No such database row
> org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database row
> 	at org.datanucleus.store.rdbms.request.FetchRequest.execute(FetchRequest.java:357)
> 	at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.fetchObject(RDBMSPersistenceHandler.java:324)
> 	at org.datanucleus.state.AbstractStateManager.loadFieldsFromDatastore(AbstractStateManager.java:1120)
> 	at org.datanucleus.state.JDOStateManager.loadSpecifiedFields(JDOStateManager.java:2916)
> 	at org.datanucleus.state.JDOStateManager.isLoaded(JDOStateManager.java:3219)
> 	at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoGetcd(MStorageDescriptor.java)
> 	at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.getCD(MStorageDescriptor.java:184)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1282)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1299)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.convertToPart(ObjectStore.java:1680)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:1586)
> 	at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
> 	at com.sun.proxy.$Proxy0.getPartition(Unknown Source)
> 	at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:538)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions(HiveMetaStore.java:3317)
> 	at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
> 	at com.sun.proxy.$Proxy12.alter_partitions(Unknown Source)
> 	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:9963)
> 	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:9947)
> 	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
> 	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
>  | org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:155)
> 2017-06-06 21:58:36,232 | ERROR | pool-6-thread-197 | InvalidOperationException(message:alter is not possible)
> 	at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:563)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions(HiveMetaStore.java:3317)
> 	at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
> 	at com.sun.proxy.$Proxy12.alter_partitions(Unknown Source)
> 	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:9963)
> 	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:9947)
> 	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
> 	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
>  | org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:141)
> 2017-06-06 21:58:36,251 | INFO  | pool-6-thread-197 | 39: Shutting down the object store... | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:794)
> 2017-06-06 21:58:36,252 | ERROR | pool-6-thread-197 | Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDOUserException: Transaction is still active. You should always close your transactions correctly using commit() or rollback().
> FailedObject:org.datanucleus.api.jdo.JDOPersistenceManager@215e7d59
> 	at org.datanucleus.api.jdo.JDOPersistenceManager.internalClose(JDOPersistenceManager.java:177)
> 	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.releasePersistenceManager(JDOPersistenceManagerFactory.java:1221)
> 	at org.datanucleus.api.jdo.JDOPersistenceManager.close(JDOPersistenceManager.java:366)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.shutdown(ObjectStore.java:427)
> 	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
> 	at com.sun.proxy.$Proxy0.shutdown(Unknown Source)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.shutdown(HiveMetaStore.java:866)
> 	at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
> 	at com.sun.proxy.$Proxy12.shutdown(Unknown Source)
> 	at com.facebook.fb303.FacebookService$Processor$shutdown.getResult(FacebookService.java:1131)
> 	at com.facebook.fb303.FacebookService$Processor$shutdown.getResult(FacebookService.java:1117)
> 	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
> 	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
> 	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
>  | org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:155)
> {noformat}
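> Read together, the stack traces suggest the JDOObjectNotFoundException is raised inside HiveAlterHandler.alterPartitions after a metastore transaction has been opened, and that the failure path skips the matching commit/rollback: the JDO transaction is still active when the handler shuts down the ObjectStore (the "Transaction is still active" error above), and mismatched open/commit counts surface as the "Unbalanced calls to openTransaction/commitTransaction" message in the summary. The sketch below is a simplified illustration of that nesting-counter pattern and of a balanced error path; it is not Hive's actual ObjectStore or HiveAlterHandler code.
> {noformat}
> // Simplified model of a nested open/commit transaction counter.
> public class NestedTxSketch {
>     private int openCalls = 0;          // nesting depth, similar in spirit to ObjectStore's counter
>     private boolean txActive = false;   // stands in for the underlying JDO transaction
>
>     public void openTransaction() {
>         if (openCalls++ == 0) {
>             txActive = true;            // real code would call tx.begin() here
>         }
>     }
>
>     public void commitTransaction() {
>         if (openCalls <= 0) {
>             throw new IllegalStateException("Unbalanced calls to open/commit transaction");
>         }
>         if (--openCalls == 0) {
>             txActive = false;           // real code would call tx.commit() here
>         }
>     }
>
>     public void rollbackTransaction() {
>         openCalls = 0;
>         txActive = false;               // real code would call tx.rollback() here
>     }
>
>     // Buggy pattern: if fetchPartition throws (e.g. the partition was dropped concurrently),
>     // neither commit nor rollback runs, so txActive stays true and a later shutdown
>     // finds the transaction still open.
>     public void alterPartitionBuggy(Runnable fetchPartition) {
>         openTransaction();
>         fetchPartition.run();
>         commitTransaction();
>     }
>
>     // Balanced pattern: the failure path always closes what openTransaction opened.
>     public void alterPartitionBalanced(Runnable fetchPartition) {
>         boolean committed = false;
>         openTransaction();
>         try {
>             fetchPartition.run();
>             commitTransaction();
>             committed = true;
>         } finally {
>             if (!committed) {
>                 rollbackTransaction();
>             }
>         }
>     }
> }
> {noformat}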



--
This message was sent by Atlassian Jira
(v8.3.4#803005)