Posted to dev@kyuubi.apache.org by GitBox <gi...@apache.org> on 2022/03/03 14:21:58 UTC

[GitHub] [incubator-kyuubi] TangYan-1 opened a new issue #2005: [Bug] In Kyuubi Spark SQL, the built-in spark_catalog doesn't work when using Iceberg

TangYan-1 opened a new issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005


   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the [issues](https://github.com/apache/incubator-kyuubi/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### Describe the bug
   
    ```sql
    set spark.sql.catalog.spark_catalog = org.apache.iceberg.spark.SparkSessionCatalog;
    set spark.sql.catalog.spark_catalog.type = hive;
    set spark.sql.catalog.spark_catalog.uri = thrift://hivemetastore_host:9083;
    create table spark_catalog.default.testtable(key int) using iceberg;
    ```
   
    The queries above succeed in a Spark 3 shell job, but fail with the exception below when run through Kyuubi Beeline.
    ```
    Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Version information not found in metastore.
            at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:8066) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:8043) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_322]
            at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
            at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at com.sun.proxy.$Proxy43.verifySchema(Unknown Source) ~[?:?]
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:655) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:648) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:717) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:420) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7036) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:254) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_322]
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_322]
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_322]
            at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_322]
            at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_322]
            at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
            at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
            at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
            at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
            at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
    ```
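    
    For reference, the same catalog configuration can also be supplied when the Spark engine launches (for example in `spark-defaults.conf` or via `--conf`), instead of through runtime `SET` statements. This is only a sketch; the metastore host is a placeholder, and whether launch-time configuration avoids the schema-verification error above depends on the deployment:
    
    ```properties
    # Iceberg session catalog wrapping the built-in spark_catalog
    spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog
    spark.sql.catalog.spark_catalog.type=hive
    # placeholder host; replace with the real Hive Metastore address
    spark.sql.catalog.spark_catalog.uri=thrift://hivemetastore_host:9083
    ```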
   
   ### Affects Version(s)
   
   1.4.1-incubating
   
   ### Kyuubi Server Log Output
   
    ```log
   14:14:42.253 [SparkSQLSessionManager-exec-pool: Thread-93] ERROR org.apache.kyuubi.engine.spark.operation.ExecuteStatement - Error operating EXECUTE_STATEMENT: org.apache.iceberg.hive.RuntimeMetaException: Failed to connect to Hive Metastore
   	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:72)
   	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:35)
   	at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
   	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
   	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
   	at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:76)
   	at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:188)
   	at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:95)
   	at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:78)
   	at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:42)
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2344)
   	at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2342)
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2325)
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
   	at org.apache.iceberg.CachingCatalog.loadTable(CachingCatalog.java:161)
   	at org.apache.iceberg.spark.SparkCatalog.load(SparkCatalog.java:488)
   	at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:135)
   	at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:92)
   	at org.apache.iceberg.spark.SparkSessionCatalog.loadTable(SparkSessionCatalog.java:118)
   	at org.apache.spark.sql.connector.catalog.TableCatalog.tableExists(TableCatalog.java:119)
   	at org.apache.spark.sql.execution.datasources.v2.CreateTableExec.run(CreateTableExec.scala:39)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:40)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:40)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:46)
   	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
   	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
   	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
   	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
   	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
   	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
   	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
   	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610)
   	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:100)
   	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
   	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.withLocalProperties(ExecuteStatement.scala:159)
   	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.org$apache$kyuubi$engine$spark$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:94)
   	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:127)
   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   	at java.lang.Thread.run(Thread.java:750)
   Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
   	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1775)
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101)
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65)
   	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77)
   	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196)
   	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55)
   	... 50 more
   Caused by: java.lang.reflect.InvocationTargetException
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
   	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773)
   	... 62 more
   Caused by: MetaException(message:Version information not found in metastore. )
   	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:8066)
   	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:8043)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
   	at com.sun.proxy.$Proxy43.verifySchema(Unknown Source)
   	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:655)
   	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:648)
   	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:717)
   	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:420)
   	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
   	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
   	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7036)
   	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:254)
   	... 67 more
   
   org.apache.iceberg.hive.RuntimeMetaException: Failed to connect to Hive Metastore
   	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:72) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:35) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:76) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:188) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:95) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:78) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:42) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2344) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) ~[?:1.8.0_322]
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2342) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2325) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.CachingCatalog.loadTable(CachingCatalog.java:161) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.spark.SparkCatalog.load(SparkCatalog.java:488) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:135) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:92) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.spark.SparkSessionCatalog.loadTable(SparkSessionCatalog.java:118) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.spark.sql.connector.catalog.TableCatalog.tableExists(TableCatalog.java:119) ~[spark-catalyst_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.execution.datasources.v2.CreateTableExec.run(CreateTableExec.scala:39) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:40) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:40) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:46) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610) ~[spark-sql_2.12-3.1.1.jar:3.1.1]
   	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:100) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) [scala-library-2.12.10.jar:?]
   	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.withLocalProperties(ExecuteStatement.scala:159) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.org$apache$kyuubi$engine$spark$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:94) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:127) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_322]
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_322]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
   	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
   	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1775) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_322]
   	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
   	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	... 50 more
   Caused by: java.lang.reflect.InvocationTargetException
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_322]
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_322]
   	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_322]
   	at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_322]
   	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_322]
   	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
   	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	... 50 more
   Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Version information not found in metastore. 
   	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:8066) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:8043) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_322]
   	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
   	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at com.sun.proxy.$Proxy43.verifySchema(Unknown Source) ~[?:?]
   	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:655) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:648) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:717) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:420) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:7036) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:254) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_322]
   	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_322]
   	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_322]
   	at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_322]
   	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1773) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:101) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:94) ~[hive-metastore-2.1.1-cdh6.3.0.jar:2.1.1-cdh6.3.0]
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_322]
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_322]
   	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
   	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:65) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:77) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:196) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:55) ~[iceberg-spark-runtime-3.1_2.12-0.13.1.jar:?]
   	... 50 more
   14:16:10.565 [SparkThriftBinaryFrontendServiceHandler-Pool: Thread-85] ERROR org.apache.kyuubi.engine.spark.SparkThriftBinaryFrontendService - Error closing operation: 
   org.apache.kyuubi.KyuubiSQLException: Invalid OperationHandle [type=EXECUTE_STATEMENT, identifier: d2569705-5367-4533-851e-19ee9f920dfa]
   	at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.operation.OperationManager.getOperation(OperationManager.scala:81) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.service.AbstractBackendService.closeOperation(AbstractBackendService.scala:148) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.service.ThriftBinaryFrontendService.CloseOperation(ThriftBinaryFrontendService.scala:450) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1797) [hive-service-rpc-3.1.2.jar:3.1.2]
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1782) [hive-service-rpc-3.1.2.jar:3.1.2]
   	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) [libthrift-0.12.0.jar:0.12.0]
   	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [libthrift-0.12.0.jar:0.12.0]
   	at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310) [libthrift-0.12.0.jar:0.12.0]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
   	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   14:16:10.575 [SparkThriftBinaryFrontendServiceHandler-Pool: Thread-85] ERROR org.apache.kyuubi.engine.spark.SparkThriftBinaryFrontendService - Error closing operation: 
   org.apache.kyuubi.KyuubiSQLException: Invalid OperationHandle [type=EXECUTE_STATEMENT, identifier: 6f202cfd-ba4b-4b74-add8-39724d4a4041]
   	at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.operation.OperationManager.getOperation(OperationManager.scala:81) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.service.AbstractBackendService.closeOperation(AbstractBackendService.scala:148) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.service.ThriftBinaryFrontendService.CloseOperation(ThriftBinaryFrontendService.scala:450) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1797) [hive-service-rpc-3.1.2.jar:3.1.2]
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1782) [hive-service-rpc-3.1.2.jar:3.1.2]
   	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) [libthrift-0.12.0.jar:0.12.0]
   	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [libthrift-0.12.0.jar:0.12.0]
   	at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310) [libthrift-0.12.0.jar:0.12.0]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
   	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   14:16:10.577 [SparkThriftBinaryFrontendServiceHandler-Pool: Thread-85] ERROR org.apache.kyuubi.engine.spark.SparkThriftBinaryFrontendService - Error closing operation: 
   org.apache.kyuubi.KyuubiSQLException: Invalid OperationHandle [type=EXECUTE_STATEMENT, identifier: b35a382f-82a7-4cea-9187-b675cfebcae7]
   	at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.operation.OperationManager.getOperation(OperationManager.scala:81) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.service.AbstractBackendService.closeOperation(AbstractBackendService.scala:148) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.service.ThriftBinaryFrontendService.CloseOperation(ThriftBinaryFrontendService.scala:450) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1797) [hive-service-rpc-3.1.2.jar:3.1.2]
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseOperation.getResult(TCLIService.java:1782) [hive-service-rpc-3.1.2.jar:3.1.2]
   	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) [libthrift-0.12.0.jar:0.12.0]
   	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [libthrift-0.12.0.jar:0.12.0]
   	at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310) [libthrift-0.12.0.jar:0.12.0]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
   	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   14:18:10.608 [SparkThriftBinaryFrontendServiceHandler-Pool: Thread-85] ERROR org.apache.kyuubi.engine.spark.SparkThriftBinaryFrontendService - Error closing session: 
   org.apache.kyuubi.KyuubiSQLException: Invalid SessionHandle [76dafc6d-0bbd-4d80-a49a-76ffff5c9ce9]
   	at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.session.SessionManager.closeSession(SessionManager.scala:90) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.engine.spark.session.SparkSQLSessionManager.closeSession(SparkSQLSessionManager.scala:99) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.service.AbstractBackendService.closeSession(AbstractBackendService.scala:49) ~[kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.kyuubi.service.ThriftBinaryFrontendService.CloseSession(ThriftBinaryFrontendService.scala:221) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseSession.getResult(TCLIService.java:1517) [hive-service-rpc-3.1.2.jar:3.1.2]
   	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$CloseSession.getResult(TCLIService.java:1502) [hive-service-rpc-3.1.2.jar:3.1.2]
   	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) [libthrift-0.12.0.jar:0.12.0]
   	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [libthrift-0.12.0.jar:0.12.0]
   	at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36) [kyuubi-spark-sql-engine_2.12-1.4.1-incubating.jar:1.4.1-incubating]
   	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310) [libthrift-0.12.0.jar:0.12.0]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
   	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
   ```
   
   
   ### Kyuubi Engine Log Output
   
   _No response_
   
   ### Kyuubi Server Configurations
   
   _No response_
   
   ### Kyuubi Engine Configurations
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscribe@kyuubi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-kyuubi] pan3793 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058118121


   > when I use another string as the catalog name, not the built-in spark_catalog, the query can succeed.
   
   Please add `kyuubi.engine.single.spark.session=true` to your `kyuubi-defaults.conf`, then restart Kyuubi and see if it works.
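
   For reference, a minimal sketch of how that line might appear in `kyuubi-defaults.conf` (adjust to your deployment):
   ```
   # Reuse a single shared SparkSession for all connections,
   # instead of creating a new one per connection (the default)
   kyuubi.engine.single.spark.session=true
   ```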





[GitHub] [incubator-kyuubi] pan3793 edited a comment on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 edited a comment on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058115302


   From the log, I think you added the following configurations in your `kyuubi-defaults.conf` or `spark-defaults.conf`:
   ```
   spark.sql.hive.metastore.version=2.1.1
   spark.sql.hive.metastore.jars=/opt/cloudera/parcels/CDH/lib/hive/lib/*
   ```
   These settings make Spark use the Hive 2.1.1-cdh6 client jars to communicate with the HMS, which is generally reasonable. However, you can try removing these two configurations so that Spark falls back to its built-in Hive 2.3 jars; Iceberg should work with those.
   
   PS: The Hive metastore client communicates with the HMS over the Thrift protocol. I don't know whether a 2.3 client is compatible with a 2.1 server (and 2.1.1-cdh differs somewhat from the Apache version), so please be careful trying this in your production environment.
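
   Concretely, a hedged sketch of the suggested change in `spark-defaults.conf` (or `kyuubi-defaults.conf`) is simply to comment out the two entries:
   ```
   # Commenting out these entries makes Spark fall back to its
   # built-in Hive 2.3 client jars when talking to the HMS
   # spark.sql.hive.metastore.version=2.1.1
   # spark.sql.hive.metastore.jars=/opt/cloudera/parcels/CDH/lib/hive/lib/*
   ```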





[GitHub] [incubator-kyuubi] pan3793 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058104602


   Did you put any Spark/Hive configurations in `$KYUUBI_HOME/conf/kyuubi-defaults.conf`?





[GitHub] [incubator-kyuubi] github-actions[bot] commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058091468


   Hello @TangYan-1,
   Thanks for finding the time to report the issue!
   We really appreciate the community's efforts to improve Apache Kyuubi (Incubating).





[GitHub] [incubator-kyuubi] yaooqinn commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
yaooqinn commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058113349


   Try pre-setting `hive.metastore.schema.verification` to `false`, or use the same Hive client version as the Hive metastore server.
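
   As a sketch of the first option, the property could be passed to the Hive client through Spark's `spark.hadoop.` prefix (an assumption; the exact placement may vary by deployment):
   ```
   # Forward the Hive conf through Spark so the metastore client
   # skips the schema/version verification check on startup
   spark.hadoop.hive.metastore.schema.verification=false
   ```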





[GitHub] [incubator-kyuubi] pan3793 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058164875


   I suspect that in `spark-shell` the Iceberg Hive client uses the built-in Hive 2.3 jars. I asked you to run some Hive table queries **BEFORE** your `set ...` and `create ...` Iceberg table statements to check whether creating the Spark Hive client in an isolated classloader affects Iceberg's Hive client creation.





[GitHub] [incubator-kyuubi] pan3793 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058120148


   In `spark-shell` or `spark-sql`, try querying other tables before creating the Iceberg table, and see what happens.





[GitHub] [incubator-kyuubi] TangYan-1 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
TangYan-1 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058682066


   > In `spark-shell` or `spark-sql`, try querying other tables before creating the Iceberg table, and see what happens.
   
   I tried; querying other tables works well.





[GitHub] [incubator-kyuubi] TangYan-1 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
TangYan-1 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058123380


   I don't think it's due to a session conflict, since it's my own testing environment. I know `spark_catalog` is the Spark SQL default catalog; the issue is that creating an Iceberg table through the default catalog fails in Kyuubi, while the same query succeeds in spark3-shell. And in Kyuubi, if I don't use the default `spark_catalog`, it succeeds. Is there anything special about Kyuubi here?
   





[GitHub] [incubator-kyuubi] pan3793 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058132609


   Some Hive-specific behaviors in Kyuubi:
   1) By default, `kyuubi.engine.single.spark.session=false`, which means every new connection uses a new `SparkSession`.
   2) Kyuubi runs a default init SQL, `SHOW DATABASES`, which causes the Hive client to be initialized before you run your first SQL.





[GitHub] [incubator-kyuubi] TangYan-1 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
TangYan-1 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058133832


   > In `spark-shell` or `spark-sql`, try querying other tables before creating the Iceberg table, and see what happens.
   
   In spark-shell, after setting the `spark_catalog` Iceberg class and type, I can query Hive tables; but when I set them in Kyuubi, I cannot even query non-Iceberg tables, due to the same error.





[GitHub] [incubator-kyuubi] TangYan-1 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
TangYan-1 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058114999


   But the same queries succeed via spark3-shell in the same cluster, where I'm using spark-runtime-3.1_2.12-0.13.1.jar downloaded from the Iceberg website. Oddly enough, when I use another string as the catalog name instead of the built-in `spark_catalog`, the query succeeds:
   set spark.sql.catalog.aaa = org.apache.iceberg.spark.SparkSessionCatalog;
   set spark.sql.catalog.aaa.type = hive;
   set spark.sql.catalog.aaa.uri = thrift://hivemestore_host:9083;
   create table aaa.default.testtable(key int) using iceberg;





[GitHub] [incubator-kyuubi] pan3793 edited a comment on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 edited a comment on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058120148


   In `spark-shell` or `spark-sql`, try querying other tables before creating the Iceberg table, and see what happens.





[GitHub] [incubator-kyuubi] pan3793 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058106992


   The error looks reasonable; it has been reported that the open-source Iceberg Hive catalog is based on Hive 2.3, which does not work with the Hive 2.1.1-cdh6 jars.





[GitHub] [incubator-kyuubi] pan3793 commented on issue #2005: [Bug] in Kyuubi spark sql, the built_in spark_catalog can't work when using iceberg.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #2005:
URL: https://github.com/apache/incubator-kyuubi/issues/2005#issuecomment-1058115302


   From the log, I think you added the following configurations in your `kyuubi-defaults.conf` or `spark-defaults.conf`:
   ```
   spark.sql.hive.metastore.version=2.1.1
   spark.sql.hive.metastore.jars=/opt/cloudera/parcels/CDH/lib/hive/lib/*
   ```
   These settings make Spark use the Hive 2.1.1-cdh6 client jars to communicate with the HMS, which is generally reasonable. However, you can try removing these two configurations so that Spark falls back to its built-in Hive 2.3 jars; Iceberg should work with those.
   
   PS: The Hive metastore client communicates with the HMS over the Thrift protocol. I don't know whether a 2.3 client is compatible with a 2.1 server (and 2.1.1-cdh differs somewhat from the Apache version), so please be careful trying this in your production environment.

