Posted to issues@hive.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2023/01/11 17:54:00 UTC

[jira] [Work logged] (HIVE-26935) Expose root cause of MetaException to client sides

     [ https://issues.apache.org/jira/browse/HIVE-26935?focusedWorklogId=838718&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-838718 ]

ASF GitHub Bot logged work on HIVE-26935:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 11/Jan/23 17:53
            Start Date: 11/Jan/23 17:53
    Worklog Time Spent: 10m 
      Work Description: wecharyu opened a new pull request, #3938:
URL: https://github.com/apache/hive/pull/3938

   ### What changes were proposed in this pull request?
   
   We are trying to expose the root cause of MetaException in its message:
   1. Refactor the MetaException message in RetryingHMSHandler to use the following format (a possible implementation is sketched below the list):
   ```sh
   MetaException(message:One or more instances could not be deleted
   Root cause: java.sql.SQLIntegrityConstraintViolationException: Cannot delete or update a parent row)
   ```
   2. Check whether the exception is retryable in the HMS server.
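   
   As a rough illustration of item 1 (a sketch under assumed names, not the actual patch; `withRootCause` is a hypothetical helper), the handler could fold the innermost cause into the message string before the exception crosses Thrift:
   ```java
   import org.apache.commons.lang3.exception.ExceptionUtils;
   import org.apache.hadoop.hive.metastore.api.MetaException;

   static MetaException withRootCause(Exception e) {
       // ExceptionUtils.getRootCause returns the innermost cause, or null if e has none
       Throwable root = ExceptionUtils.getRootCause(e);
       String msg = e.getMessage() + (root != null ? "\nRoot cause: " + root : "");
       // Thrift serializes only the declared message field, so the root cause
       // must ride along inside the string to survive the wire
       MetaException me = new MetaException(msg);
       me.initCause(e);  // the full chain is still available for server-side logging
       return me;
   }
   ```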
   
   
   ### Why are the changes needed?
   1. Exposes the root cause for user troubleshooting.
   2. Carrying the root cause in the message helps skip unnecessary retries on both the client and server sides.
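   
   A minimal sketch of the fail-fast idea in item 2 (hedged: `isRetryable` is an illustrative name, not the PR's code): an integrity-constraint violation is deterministic, so retrying the same operation can only fail the same way.
   ```java
   import java.sql.SQLIntegrityConstraintViolationException;

   static boolean isRetryable(Throwable t) {
       // Walk the cause chain looking for an unrecoverable SQL error.
       for (Throwable c = t; c != null; c = c.getCause()) {
           if (c instanceof SQLIntegrityConstraintViolationException) {
               return false;  // the constraint will still be violated on the next attempt
           }
       }
       return true;  // otherwise defer to the normal retry policy
   }
   ```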
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   Added a unit test.
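   
   A hypothetical shape for such a test (names invented; `withRootCause` is the helper sketched earlier, not the PR's actual code):
   ```java
   import static org.junit.Assert.assertTrue;

   import java.sql.SQLIntegrityConstraintViolationException;
   import org.apache.hadoop.hive.metastore.api.MetaException;
   import org.junit.Test;

   public class TestMetaExceptionMessage {
       @Test
       public void testRootCauseEmbeddedInMessage() {
           Exception root = new SQLIntegrityConstraintViolationException(
               "Cannot delete or update a parent row");
           Exception wrapped = new RuntimeException(
               "One or more instances could not be deleted", root);
           // withRootCause is the hypothetical helper from the sketch above
           MetaException me = withRootCause(wrapped);
           assertTrue(me.getMessage().contains(
               "Root cause: java.sql.SQLIntegrityConstraintViolationException"));
       }
   }
   ```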
   




Issue Time Tracking
-------------------

            Worklog Id:     (was: 838718)
    Remaining Estimate: 0h
            Time Spent: 10m

> Expose root cause of MetaException to client sides
> --------------------------------------------------
>
>                 Key: HIVE-26935
>                 URL: https://issues.apache.org/jira/browse/HIVE-26935
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 4.0.0-alpha-2
>            Reporter: Wechar
>            Assignee: Wechar
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> MetaException is generated by Thrift, and only its {{message}} field is transported to the client. We should therefore expose the root cause inside the message, which has the following advantages:
>  * It is friendlier for user troubleshooting.
>  * Some root causes are unrecoverable; exposing them lets both client and server skip unnecessary retries.
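> To make the loss concrete, here is a hedged illustration (not Hive source; {{dropAndInspect}} is a made-up helper) of what a client sees today: only the message string of the Thrift-generated MetaException crosses the wire, and {{getCause()}} is empty.
> {code:java}
> import org.apache.hadoop.hive.metastore.IMetaStoreClient;
> import org.apache.hadoop.hive.metastore.api.MetaException;
> import org.apache.thrift.TException;
>
> static void dropAndInspect(IMetaStoreClient client) throws TException {
>     try {
>         client.dropTable("default", "test_tbl");
>     } catch (MetaException e) {
>         // Only the declared message field was serialized by Thrift:
>         System.out.println(e.getMessage()); // "One or more instances could not be deleted"
>         System.out.println(e.getCause());   // null: the SQL root cause never left the server
>         throw e;
>     }
> }
> {code}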
> *How to Reproduce:*
>  - Step 1: Disable direct SQL in HMS for the test case.
>  - Step 2: Add an illegal {{PART_COL_STATS}} row for a partition.
>  - Step 3: Try to {{drop table}} with Spark.
> The exception in Hive metastore is:
> {code:sh}
> 2023-01-11T17:13:51,259 ERROR [Metastore-Handler-Pool: Thread-39]: metastore.ObjectStore (ObjectStore.java:run(4369)) - 
> javax.jdo.JDOUserException: One or more instances could not be deleted
>         at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:625) ~[datanucleus-api-jdo-5.2.8.jar:?]
>         at org.datanucleus.api.jdo.JDOQuery.deletePersistentInternal(JDOQuery.java:530) ~[datanucleus-api-jdo-5.2.8.jar:?]
>         at org.datanucleus.api.jdo.JDOQuery.deletePersistentAll(JDOQuery.java:499) ~[datanucleus-api-jdo-5.2.8.jar:?]
>         at org.apache.hadoop.hive.metastore.QueryWrapper.deletePersistentAll(QueryWrapper.java:108) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsNoTxn(ObjectStore.java:4207) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore.access$1000(ObjectStore.java:285) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore$7.run(ObjectStore.java:3086) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.Batchable.runBatched(Batchable.java:74) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsViaJdo(ObjectStore.java:3074) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore.access$400(ObjectStore.java:285) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:3058) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:3050) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:4362) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionsInternal(ObjectStore.java:3061) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.ObjectStore.dropPartitions(ObjectStore.java:3040) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_332]
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_332]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_332]
>         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_332]
>         at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at com.sun.proxy.$Proxy24.dropPartitions(Unknown Source) ~[?:?]
>         at org.apache.hadoop.hive.metastore.HMSHandler.dropPartitionsAndGetLocations(HMSHandler.java:3186) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.HMSHandler.drop_table_core(HMSHandler.java:2963) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.HMSHandler.drop_table_with_environment_context(HMSHandler.java:3211) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.HMSHandler.drop_table_with_environment_context(HMSHandler.java:3199) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_332]
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_332]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_332]
>         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_332]
>         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:146) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at com.sun.proxy.$Proxy32.drop_table_with_environment_context(Unknown Source) ~[?:?]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:19668) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:19647) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:111) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_332]
>         at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_332]
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878) ~[hadoop-common-3.3.2.jar:?]
>         at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:119) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:250) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_332]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_332]
>         at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_332]
> Caused by: org.datanucleus.exceptions.NucleusDataStoreException: Clear request failed : DELETE FROM `PARTITION_PARAMS` WHERE `PART_ID`=?
>         at org.datanucleus.store.rdbms.scostore.JoinMapStore.clearInternal(JoinMapStore.java:916) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.scostore.JoinMapStore.clear(JoinMapStore.java:447) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.types.wrappers.backed.Map.clear(Map.java:630) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.mapping.java.MapMapping.preDelete(MapMapping.java:298) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.request.DeleteRequest.execute(DeleteRequest.java:208) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteObjectFromTable(RDBMSPersistenceHandler.java:496) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteObject(RDBMSPersistenceHandler.java:468) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.state.StateManagerImpl.internalDeletePersistent(StateManagerImpl.java:1213) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.state.StateManagerImpl.deletePersistent(StateManagerImpl.java:5496) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjectInternal(ExecutionContextImpl.java:2336) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjectWork(ExecutionContextImpl.java:2258) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjects(ExecutionContextImpl.java:2150) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextThreadedImpl.deleteObjects(ExecutionContextThreadedImpl.java:264) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.Query.performDeletePersistentAll(Query.java:2264) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.AbstractJavaQuery.performDeletePersistentAll(AbstractJavaQuery.java:114) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.Query.deletePersistentAll(Query.java:2216) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.api.jdo.JDOQuery.deletePersistentInternal(JDOQuery.java:512) ~[datanucleus-api-jdo-5.2.8.jar:?]
>         ... 43 more
> Caused by: java.sql.BatchUpdateException: Cannot delete or update a parent row: a foreign key constraint fails ("hive"."PART_COL_STATS", CONSTRAINT "PART_COL_STATS_FK" FOREIGN KEY ("PART_ID") REFERENCES "PARTITIONS" ("PART_ID"))
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_332]
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_332]
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_332]
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_332]
>         at com.mysql.cj.util.Util.handleNewInstance(Util.java:192) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.util.Util.getInstance(Util.java:167) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.util.Util.getInstance(Util.java:174) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.exceptions.SQLError.createBatchUpdateException(SQLError.java:224) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchSerially(ClientPreparedStatement.java:853) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:435) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:795) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at org.apache.hive.com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:125) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hive.com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:366) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:675) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.SQLController.getStatementForUpdate(SQLController.java:208) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.SQLController.getStatementForUpdate(SQLController.java:179) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.scostore.JoinMapStore.clearInternal(JoinMapStore.java:897) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.scostore.JoinMapStore.clear(JoinMapStore.java:447) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.types.wrappers.backed.Map.clear(Map.java:630) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.mapping.java.MapMapping.preDelete(MapMapping.java:298) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.request.DeleteRequest.execute(DeleteRequest.java:208) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteObjectFromTable(RDBMSPersistenceHandler.java:496) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteObject(RDBMSPersistenceHandler.java:468) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.state.StateManagerImpl.internalDeletePersistent(StateManagerImpl.java:1213) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.state.StateManagerImpl.deletePersistent(StateManagerImpl.java:5496) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjectInternal(ExecutionContextImpl.java:2336) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjectWork(ExecutionContextImpl.java:2258) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjects(ExecutionContextImpl.java:2150) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextThreadedImpl.deleteObjects(ExecutionContextThreadedImpl.java:264) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.Query.performDeletePersistentAll(Query.java:2264) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.AbstractJavaQuery.performDeletePersistentAll(AbstractJavaQuery.java:114) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.Query.deletePersistentAll(Query.java:2216) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.api.jdo.JDOQuery.deletePersistentInternal(JDOQuery.java:512) ~[datanucleus-api-jdo-5.2.8.jar:?]
>         ... 43 more
> Caused by: java.sql.SQLIntegrityConstraintViolationException: Cannot delete or update a parent row: a foreign key constraint fails ("hive"."PART_COL_STATS", CONSTRAINT "PART_COL_STATS_FK" FOREIGN KEY ("PART_ID") REFERENCES "PARTITIONS" ("PART_ID"))
>         at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:117) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1098) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchSerially(ClientPreparedStatement.java:832) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.ClientPreparedStatement.executeBatchInternal(ClientPreparedStatement.java:435) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at com.mysql.cj.jdbc.StatementImpl.executeBatch(StatementImpl.java:795) ~[mysql-connector-java-8.0.28.jar:8.0.28]
>         at org.apache.hive.com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:125) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hive.com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:366) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:675) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.SQLController.getStatementForUpdate(SQLController.java:208) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.SQLController.getStatementForUpdate(SQLController.java:179) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.scostore.JoinMapStore.clearInternal(JoinMapStore.java:897) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.scostore.JoinMapStore.clear(JoinMapStore.java:447) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.types.wrappers.backed.Map.clear(Map.java:630) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.mapping.java.MapMapping.preDelete(MapMapping.java:298) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.request.DeleteRequest.execute(DeleteRequest.java:208) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteObjectFromTable(RDBMSPersistenceHandler.java:496) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteObject(RDBMSPersistenceHandler.java:468) ~[datanucleus-rdbms-5.2.10.jar:?]
>         at org.datanucleus.state.StateManagerImpl.internalDeletePersistent(StateManagerImpl.java:1213) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.state.StateManagerImpl.deletePersistent(StateManagerImpl.java:5496) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjectInternal(ExecutionContextImpl.java:2336) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjectWork(ExecutionContextImpl.java:2258) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextImpl.deleteObjects(ExecutionContextImpl.java:2150) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.ExecutionContextThreadedImpl.deleteObjects(ExecutionContextThreadedImpl.java:264) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.Query.performDeletePersistentAll(Query.java:2264) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.AbstractJavaQuery.performDeletePersistentAll(AbstractJavaQuery.java:114) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.store.query.Query.deletePersistentAll(Query.java:2216) ~[datanucleus-core-5.2.10.jar:?]
>         at org.datanucleus.api.jdo.JDOQuery.deletePersistentInternal(JDOQuery.java:512) ~[datanucleus-api-jdo-5.2.8.jar:?]
>         ... 43 more
> {code}
> The exception from Spark client is:
> {code:sh}
> spark-sql> drop table test_tbl;
> Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:One or more instances could not be deleted)
> org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:One or more instances could not be deleted)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:110)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.dropTable(HiveExternalCatalog.scala:523)
>         at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.dropTable(ExternalCatalogWithListener.scala:104)
>         at org.apache.spark.sql.catalyst.catalog.SessionCatalog.dropTable(SessionCatalog.scala:782)
>         at org.apache.spark.sql.execution.command.DropTableCommand.run(ddl.scala:243)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
>         at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
>         at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
>         at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
>         at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
>         at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
>         at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
>         at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
>         at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
>         at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
>         at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
>         at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
>         at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
>         at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
>         at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
>         at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
>         at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
>         at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
>         at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
>         at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
>         at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
>         at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
>         at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
>         at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
>         at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
>         at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
>         at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
>         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
>         at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:384)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:504)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:498)
>         at scala.collection.Iterator.foreach(Iterator.scala:943)
>         at scala.collection.Iterator.foreach$(Iterator.scala:943)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
>         at scala.collection.IterableLike.foreach(IterableLike.scala:74)
>         at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:498)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:286)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
>         at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
>         at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
>         at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
>         at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
>         at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:One or more instances could not be deleted)
>         at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1207)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.spark.sql.hive.client.Shim_v0_14.dropTable(HiveShim.scala:1326)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$dropTable$1(HiveClientImpl.scala:573)
>         at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:298)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:229)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:228)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:278)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.dropTable(HiveClientImpl.scala:573)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$dropTable$1(HiveExternalCatalog.scala:525)
>         at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:101)
>         ... 60 more
> Caused by: MetaException(message:One or more instances could not be deleted)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_table_with_environment_context_result$drop_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:48279)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_table_with_environment_context_result$drop_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:48256)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_table_with_environment_context_result.read(ThriftHiveMetastore.java:48198)
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_drop_table_with_environment_context(ThriftHiveMetastore.java:1378)
>         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.drop_table_with_environment_context(ThriftHiveMetastore.java:1362)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:2402)
>         at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.drop_table_with_environment_context(SessionHiveMetaStoreClient.java:114)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1093)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1029)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
>         at com.sun.proxy.$Proxy41.dropTable(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2327)
>         at com.sun.proxy.$Proxy41.dropTable(Unknown Source)
>         at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1201)
>         ... 75 more
> {code}
> Clearly, the root cause {{java.sql.SQLIntegrityConstraintViolationException}} is lost on the client side.


