Posted to issues@spark.apache.org by "Sandeep Katta (Jira)" <ji...@apache.org> on 2019/09/22 04:07:00 UTC

[jira] [Resolved] (SPARK-29180) drop database throws Exception

     [ https://issues.apache.org/jira/browse/SPARK-29180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Katta resolved SPARK-29180.
-----------------------------------
    Resolution: Invalid

User is required to run hive-txn-schema-2.3.0.mysql.sql against the metastore database after upgrading to Hive 2.3.6.
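
For example, assuming the metastore schema lives in the MySQL database named sparksql (the database named in the error below) and the script comes from the Hive 2.3.6 distribution (the path shown is illustrative and depends on the installation), the missing transactional tables such as TXN_COMPONENTS can be created with the mysql client:

    -- switch to the Hive metastore database ('sparksql' is taken from the error below)
    mysql> USE sparksql;
    -- source the transaction schema script shipped with Hive 2.3.6;
    -- adjust the path to your Hive installation layout
    mysql> SOURCE /opt/hive/scripts/metastore/upgrade/mysql/hive-txn-schema-2.3.0.mysql.sql;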

> drop database throws Exception 
> -------------------------------
>
>                 Key: SPARK-29180
>                 URL: https://issues.apache.org/jira/browse/SPARK-29180
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: ABHISHEK KUMAR GUPTA
>            Priority: Minor
>         Attachments: DROP_DATABASE_Exception.png, image-2019-09-22-09-32-57-532.png
>
>
> drop database throws an Exception, but the operation actually succeeds: the second SHOW DATABASES below no longer lists test1.
>  
> 0: jdbc:hive2://10.18.19.208:23040/default> show databases;
> +-----------------+
> |  databaseName   |
> +-----------------+
> | db1             |
> | db2             |
> | default         |
> | func            |
> | gloablelimit    |
> | jointesthll     |
> | sparkdb__       |
> | temp_func_test  |
> | test1           |
> +-----------------+
> 9 rows selected (0.131 seconds)
> 0: jdbc:hive2://10.18.19.208:23040/default> drop database test1 cascade;
> *Error: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Unable to clean up com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'sparksql.TXN_COMPONENTS' doesn't exist*
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>         at com.mysql.jdbc.Util.handleNewInstance(Util.java:408)
>         at com.mysql.jdbc.Util.getInstance(Util.java:383)
>         at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1062)
>         at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4208)
>         at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4140)
>         at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2597)
>         at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2758)
>         at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2820)
>         at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1759)
>         at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1679)
>         at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497)
>         at org.apache.hadoop.hive.metastore.txn.TxnHandler.cleanupRecords(TxnHandler.java:1888)
>         at org.apache.hadoop.hive.metastore.AcidEventListener.onDropDatabase(AcidEventListener.java:51)
>         at org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier$13.notify(MetaStoreListenerNotifier.java:69)
>         at org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier.notifyEvent(MetaStoreListenerNotifier.java:167)
>         at org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier.notifyEvent(MetaStoreListenerNotifier.java:197)
>         at org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier.notifyEvent(MetaStoreListenerNotifier.java:235)
>         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_database_core(HiveMetaStore.java:1139)
>         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_database(HiveMetaStore.java:1175)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
>         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
>         at com.sun.proxy.$Proxy31.drop_database(Unknown Source)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:868)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
>         at com.sun.proxy.$Proxy32.dropDatabase(Unknown Source)
>         at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:484)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$dropDatabase$1(HiveClientImpl.scala:361)
>         at org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$1572/262756737.apply$mcV$sp(Unknown Source)
>         at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:311)
>         at org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$955/98415362.apply(Unknown Source)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:245)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:244)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:294)
>         at org.apache.spark.sql.hive.client.HiveClientImpl.dropDatabase(HiveClientImpl.scala:361)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$dropDatabase$1(HiveExternalCatalog.scala:197)
>         at org.apache.spark.sql.hive.HiveExternalCatalog$$Lambda$1571/1997332307.apply$mcV$sp(Unknown Source)
>         at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
>         at org.apache.spark.sql.hive.HiveExternalCatalog.dropDatabase(HiveExternalCatalog.scala:197)
>         at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.dropDatabase(ExternalCatalogWithListener.scala:53)
>         at org.apache.spark.sql.catalyst.catalog.SessionCatalog.dropDatabase(SessionCatalog.scala:227)
>         at org.apache.spark.sql.execution.command.DropDatabaseCommand.run(ddl.scala:107)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>         at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
>         at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:225)
>         at org.apache.spark.sql.Dataset$$Lambda$1346/1296913900.apply(Unknown Source)
>         at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3443)
>         at org.apache.spark.sql.Dataset$$Lambda$1347/307195511.apply(Unknown Source)
>         at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$4(SQLExecution.scala:100)
>         at org.apache.spark.sql.execution.SQLExecution$$$Lambda$1354/1380756700.apply(Unknown Source)
>         at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
>         at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLEx
>         at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3439)
>         at org.apache.spark.sql.Dataset.<init>(Dataset.scala:225)
>         at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:95)
>         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:647)
>         at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:676)
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation
>         at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInforma
>         at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:51
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor
>         at java.lang.Thread.run(Thread.java:745)
> ); (state=,code=0)
> 0: jdbc:hive2://10.18.19.208:23040/default> show databases;
> +-----------------+
> |  databaseName   |
> +-----------------+
> | db1             |
> | db2             |
> | default         |
> | func            |
> | gloablelimit    |
> | jointesthll     |
> | sparkdb__       |
> | temp_func_test  |
> +-----------------+
> 8 rows selected (0.047 seconds)
> 0: jdbc:hive2://10.18.19.208:23040/default>



