Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2021/04/13 07:23:21 UTC

[GitHub] [iceberg] qq564567484 opened a new issue #2468: master branch - flink sql create hive catalog error

qq564567484 opened a new issue #2468:
URL: https://github.com/apache/iceberg/issues/2468


   I built a JAR file from the master branch.
   When I create a Hive catalog in the Flink SQL client, a problem occurs.
   
   **The SQL is:**
   ```
   drop catalog if exists iceberg_catalog;
   CREATE CATALOG iceberg_catalog WITH (
     'type'='iceberg',
     'catalog-type'='hive',
     'uri'='thrift://xxx1:9083,thrift://xxx2:9083',
     'clients'='5',
     'property-version'='1',
     'warehouse'='hdfs://hacluster/user/hive/warehouse'
   );
   ```
   
   **error info:**
   ```
   java.lang.IllegalArgumentException: Cannot initialize Catalog, org.apache.iceberg.hive.HiveCatalog does not implement Catalog.
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:176)
   	at org.apache.iceberg.flink.CatalogLoader$HiveCatalogLoader.loadCatalog(CatalogLoader.java:112)
   	at org.apache.iceberg.flink.FlinkCatalog.<init>(FlinkCatalog.java:110)
   	at org.apache.iceberg.flink.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:130)
   	at org.apache.iceberg.flink.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:117)
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1085)
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:1019)
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:690)
   	at org.apache.zeppelin.flink.Flink111Shims.executeSql(Flink111Shims.java:374)
   	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCreateCatalog(FlinkSqlInterrpeter.java:391)
   	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:244)
   	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
   	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.internalInterpret(FlinkSqlInterrpeter.java:111)
   	at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47)
   	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
   	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:808)
   	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:700)
   	at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
   	at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132)
   	at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:46)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   	at java.lang.Thread.run(Thread.java:748)
   Caused by: java.lang.ClassCastException: org.apache.iceberg.hive.HiveCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:172)
   	... 22 more
   
   ```
   
   But when I use iceberg-flink-runtime-0.11.1.jar, it seems OK.
   Could you tell me how to fix this?



[GitHub] [iceberg] caneGuy commented on issue #2468: master branch - flink sql create hive catalog error

caneGuy commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-897706224


   This may be caused by:
   ```
   try {
     ctor = DynConstructors.builder(Catalog.class).impl(impl).buildChecked();
   } catch (NoSuchMethodException e) {
     throw new IllegalArgumentException(String.format(
         "Cannot initialize Catalog implementation %s: %s", impl, e.getMessage()), e);
   }
   ```
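   Judging from the nested stacktrace above ("Caused by: java.lang.ClassCastException ... at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:172)"), the actual failure is the cast to `Catalog` a few lines after this constructor lookup. A paraphrased sketch of that step, reconstructed from the error messages rather than copied verbatim from CatalogUtil:
   ```
   Catalog catalog;
   try {
     // ctor was built against Catalog.class above; newInstance() returns the impl,
     // and the implicit cast to Catalog fails when the impl class was defined by a
     // different class loader than the Catalog interface.
     catalog = ctor.newInstance();
   } catch (ClassCastException e) {
     throw new IllegalArgumentException(String.format(
         "Cannot initialize Catalog, %s does not implement Catalog.", impl), e);
   }
   ```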



[GitHub] [iceberg] openinx commented on issue #2468: master branch - flink sql create hive catalog error

openinx commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-818543961


   That sounds unreasonable, because `org.apache.iceberg.hive.HiveCatalog` clearly implements the `org.apache.iceberg.catalog.Catalog` interface on the master branch; see https://github.com/apache/iceberg/blob/988a33cb58981c3fabb221f6b49ed9dd176c9abd/hive-metastore/src/main/java/org/apache/iceberg/hive/HiveCatalog.java#L60.
   
   Is there anything wrong?



[GitHub] [iceberg] asnowfox commented on issue #2468: master branch - flink sql create hive catalog error

asnowfox commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-927508700


   In my Flink cluster, it runs like this:
   ```
   Flink SQL> CREATE CATALOG flow_catalog WITH (
   >   'type'='iceberg',
   >   'catalog-type'='hadoop',
   >   'warehouse'='hdfs://namenode-206-10:8020/warehouse/flows',
   >   'property-version'='1'
   > );
   2021-09-27 11:56:43,219 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
   [INFO] Catalog has been created.
   
   Flink SQL> use catalog flow_catalog;
   
   Flink SQL> CREATE DATABASE iceberg_db;
   [ERROR] Could not execute SQL statement. Reason:
   org.apache.iceberg.exceptions.AlreadyExistsException: Namespace already exists: iceberg_db
   
   Flink SQL> drop database iceberg_db;
   [ERROR] Could not execute SQL statement. Reason:
   org.apache.iceberg.exceptions.NamespaceNotEmptyException: Namespace iceberg_db is not empty.
   
   Flink SQL> use iceberg_db;
   
   Flink SQL> show tables;
   test
   
   Flink SQL> drop table test;
   [INFO] Table has been removed.
   
   Flink SQL> drop database iceberg_db;
   [INFO] Database has been removed.
   
   Flink SQL> CREATE DATABASE iceberg_db;
   [INFO] Database has been created.
   
   Flink SQL> USE iceberg_db;
   
   Flink SQL> CREATE TABLE test (
   >     id BIGINT COMMENT 'unique id',
   >     data STRING
   > );
   [INFO] Table has been created.
   
   Flink SQL> INSERT INTO test VALUES (1, 'a');
   [INFO] Submitting SQL update statement to the cluster...
   [INFO] Table update statement has been successfully submitted to the cluster:
   Job ID: 68bf513689c4920e2af3d49f4231945f
   
   
   Flink SQL> select * from test;
   [INFO] Result retrieval cancelled.
   
   Flink SQL> 
   > SET execution.type = batch;
   [INFO] Session property has been set.
   
   Flink SQL> select * from test;
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   
   Flink SQL>  set execution.type = streaming;
   [INFO] Session property has been set.
   
   Flink SQL> select * from test;
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   
   Flink SQL> 
   ```



[GitHub] [iceberg] asnowfox edited a comment on issue #2468: master branch - flink sql create hive catalog error

asnowfox edited a comment on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-906226578


   I think I found the cause of this problem. In DynConstructors.java, which was borrowed from parquet-common, the ClassLoader in Builder defaults to the current thread's context class loader; it looks like this:
   ```
   public static class Builder {
       private final Class<?> baseClass;
       private ClassLoader loader = Thread.currentThread().getContextClassLoader();
       private Ctor ctor = null;
       private Map<String, Throwable> problems = new HashMap<String, Throwable>();
   
       public Builder(Class<?> baseClass) {
         this.baseClass = baseClass;
       }
   
       public Builder() {
         this.baseClass = null;
       }
   ......
   ```
   So at runtime, when the dynamic class is loaded by the current thread's context class loader but the base class/interface was loaded by a different loader, the cast throws a ClassCastException.
   
   Proposed solution:
   ```
   public static class Builder {
       private final Class<?> baseClass;
       private ClassLoader loader = Thread.currentThread().getContextClassLoader();
       private Ctor ctor = null;
       private Map<String, Throwable> problems = new HashMap<String, Throwable>();
   
       public Builder(Class<?> baseClass) {
         this.baseClass = baseClass;
         this.loader = this.baseClass.getClassLoader();
       }
   
       public Builder() {
         this.baseClass = null;
       }
   ......
   ```
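   For illustration, here is a minimal standalone sketch (the jar path is only a placeholder) of why a class can appear to "not implement" an interface it clearly implements in source: the same fully-qualified name defined by two non-delegating loaders yields two distinct `Class` objects.
   ```
   import java.net.URL;
   import java.net.URLClassLoader;
   
   public class ClassLoaderCastDemo {
     public static void main(String[] args) throws Exception {
       URL jar = new URL("file:///tmp/iceberg-flink-runtime.jar");  // hypothetical jar path
   
       // Two isolated loaders (parent = null); each defines its own copy of the classes.
       ClassLoader loaderA = new URLClassLoader(new URL[] {jar}, null);
       ClassLoader loaderB = new URLClassLoader(new URL[] {jar}, null);
   
       Class<?> catalogFromA = loaderA.loadClass("org.apache.iceberg.catalog.Catalog");
       Class<?> hiveCatalogFromB = loaderB.loadClass("org.apache.iceberg.hive.HiveCatalog");
   
       // Same names, different defining loaders => not assignable, so a cast throws.
       System.out.println(catalogFromA.isAssignableFrom(hiveCatalogFromB));  // false
     }
   }
   ```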
   



[GitHub] [iceberg] iQiuyu-0821 edited a comment on issue #2468: master branch - flink sql create hive catalog error

iQiuyu-0821 edited a comment on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-860508692


   I have the same problem.
   
   **runtime env**
   
   flink: 1.12.1
   iceberg-flink-runtime: 0.12.0, built from master
   
   **catalog info**
   
   ```
   catalogs: 
       - name: iceberg
         type: iceberg
         catalog-type: hadoop
         warehouse: hdfs://localhost:9000/flink-iceberg/warehouse
         property-version: 1
         clients: 5
   ```
   
   **execute sql**
   
   ```
   Flink SQL> set execution.type = streaming;
   [INFO] Session property has been set.
   
   Flink SQL> set table.dynamic-table-options.enabled = true;
   [INFO] Session property has been set.
   
   Flink SQL> select * from iceberg.iceberg_db.t1 /*+ OPTIONS('streaming'='true', 'monitor-interval'='1s')*/;
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   ```
   
   **error info:**
   
   ```
   2021-06-14 16:35:57,838 INFO  org.apache.iceberg.BaseMetastoreCatalog                      [] - Table loaded by catalog: iceberg.iceberg_db.t1
   2021-06-14 16:35:59,910 WARN  org.apache.flink.table.client.cli.CliClient                  [] - Could not execute SQL statement.
   org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL query.
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:548) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:374) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:648) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:323) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_171]
   	at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:214) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:144) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:115) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   Caused by: java.lang.IllegalArgumentException: Cannot initialize Catalog, org.apache.iceberg.hadoop.HadoopCatalog does not implement Catalog.
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:186) ~[?:?]
   	at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79) ~[?:?]
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163) ~[?:?]
   	at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:91) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlan(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.Iterator$class.foreach(Iterator.scala:891) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	... 8 more
   Caused by: java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:182) ~[?:?]
   	at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79) ~[?:?]
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163) ~[?:?]
   	at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:91) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlan(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.Iterator$class.foreach(Iterator.scala:891) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	... 8 more
   ```



[GitHub] [iceberg] RussellSpitzer commented on issue #2468: master branch - flink sql create hive catalog error

RussellSpitzer commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-897718055


   That does sound a lot like the runtime classpath has multiple implementations of HiveCatalog or HadoopCatalog. Are you sure that there aren't multiple iceberg-flink runtime jars?
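   A quick way to check (a sketch to run with the same classpath as the SQL client): list every jar that bundles its own copy of the `Catalog` class. More than one URL printed means duplicate copies on the classpath.
   ```
   import java.net.URL;
   import java.util.Enumeration;
   
   public class DuplicateCatalogCheck {
     public static void main(String[] args) throws Exception {
       Enumeration<URL> copies = Thread.currentThread().getContextClassLoader()
           .getResources("org/apache/iceberg/catalog/Catalog.class");
       while (copies.hasMoreElements()) {
         System.out.println(copies.nextElement());  // one line per jar containing the class
       }
     }
   }
   ```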



[GitHub] [iceberg] asnowfox commented on issue #2468: master branch - flink sql create hive catalog error

asnowfox commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-900375672


   I also have the same error.
   I am running a Flink cluster of version 1.12.2, so I recompiled Iceberg against flink-1.12.2 and it compiled fine.
   I use the Flink SQL client to test Iceberg.
   The following commands work fine:
   ```
   CREATE CATALOG flow_catalog WITH (
     'type'='iceberg',
     'catalog-type'='hadoop',
     'warehouse'='hdfs://namenode-206-10:8020/warehouse/flows',
     'property-version'='1'
   );
   use catalog flow_catalog;
   CREATE DATABASE iceberg_db;
   USE iceberg_db;
   CREATE TABLE test (
       id BIGINT COMMENT 'unique id',
       data STRING
   );
   INSERT INTO test VALUES (1, 'a');
   select * from test;
   ```
   But when I change **execution.type to batch**, I get the same exception:
   ```
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:183)
   	at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79)
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108)
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178)
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204)
   	at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110)
   	at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49)
   	at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163)
   	at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:94)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:44)
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan(ExecNode.scala:59)
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan$(ExecNode.scala:57)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:44)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToTransformation(BatchExecLegacySink.scala:129)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:95)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:48)
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan(ExecNode.scala:59)
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan$(ExecNode.scala:57)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlan(BatchExecLegacySink.scala:48)
   	at org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:86)
   	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
   	at scala.collection.Iterator.foreach(Iterator.scala:937)
   	at scala.collection.Iterator.foreach$(Iterator.scala:937)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:70)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
   	at scala.collection.TraversableLike.map(TraversableLike.scala:233)
   	at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
   	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
   	at org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:85)
   	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:162)
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329)
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321)
   	at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328)
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287)
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256)
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282)
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542)
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:374)
   	at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:648)
   	at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:323)
   	at java.util.Optional.ifPresent(Optional.java:159)
   	at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:214)
   	at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:144)
   	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:115)
   	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
   ```




[GitHub] [iceberg] rdblue commented on issue #2468: master branch - flink sql create hive catalog error

rdblue commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-927340358


   I agree with Russell. The problem here is that there are two copies of `Catalog` in the classpath and two different loaders. It looks like delegation between loaders is not happening correctly.
   
   @openinx or @stevenzwu, can you explain the classloader approach that Flink is taking? Why are there multiple classloaders here and why do they both have copies of `Catalog`?
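   A hypothetical diagnostic (not existing Iceberg code) that would confirm this: resolve both sides through the context class loader and print their defining loaders right where the cast fails.
   ```
   public class LoaderDiagnostic {
     public static void main(String[] args) throws Exception {
       ClassLoader ctx = Thread.currentThread().getContextClassLoader();
       Class<?> iface = Class.forName("org.apache.iceberg.catalog.Catalog", false, ctx);
       Class<?> impl = Class.forName("org.apache.iceberg.hive.HiveCatalog", false, ctx);
       System.out.println("Catalog loader:     " + iface.getClassLoader());
       System.out.println("HiveCatalog loader: " + impl.getClassLoader());
       // false here means two copies of Catalog are visible through different loaders.
       System.out.println("assignable: " + iface.isAssignableFrom(impl));
     }
   }
   ```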




[GitHub] [iceberg] erictan90 commented on issue #2468: master branch - flink sql create hive catalog error

erictan90 commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-903623533


   I'm having the same problem with flink-1.12.1 and runtime-0.12.0;
   switching back to flink-1.11.4 and runtime-0.11.1 works fine.



[GitHub] [iceberg] openinx commented on issue #2468: master branch - flink sql create hive catalog error

openinx commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-927451886


   @qq564567484 @iQiuyu-0821 @asnowfox @erictan90, I tried to reproduce this issue on my host with the following commands, but failed to get the same stacktrace you reported (it works correctly):
   
   ```
   #
   # This adds multiple iceberg-flink runtime jars and multiple iceberg catalog jars to the same classpath.
   # 
   ./bin/sql-client.sh embedded -j /Users/openinx/software/apache-iceberg/flink-runtime/build/libs/iceberg-flink-runtime-0.5.0-1598-g838cc65.jar \
   -j /Users/openinx/software/apache-iceberg/flink-runtime/build/libs/iceberg-flink-runtime-0.5.0-1610-gc492ec0.jar \
   -j /Users/openinx/software/apache-iceberg/flink/build/libs/iceberg-flink-0.5.0-1534-g868d7a2.jar \
   -j /Users/openinx/software/apache-iceberg/core/build/libs/iceberg-core-0.5.0-1534-g868d7a2.jar  \
   shell
   
   
   Flink SQL> CREATE TABLE iceberg_table (
   >   id  BIGINT,
   >   data STRING
   > ) WITH (
   >   'connector'='iceberg',
   >   'catalog-type'='hadoop',
   >   'catalog-name'='hadoop_catalog',
   >   'warehouse'='file:///Users/openinx/test/iceberg-warehouse'
   > );
   [INFO] Table has been created.
   
   Flink SQL> select * from iceberg_table;
   +----+------+
   | id | data |
   +----+------+
   |  1 |    d |
   |  1 |    b |
   |  1 |    a |
   |  1 |    a |
   |  1 |    c |
   +----+------+
   5 rows in set
   
   Flink SQL> insert into iceberg_table values (1, 'e');
   [INFO] Submitting SQL update statement to the cluster...
   [INFO] Table update statement has been successfully submitted to the cluster:
   Job ID: a223cbe4e5f32688bfc2577f2161b6ac
   
   Flink SQL> set execution.type = streaming;
   [INFO] Session property has been set.
   
   Flink SQL> insert into iceberg_table values (1, 'f');
   [INFO] Submitting SQL update statement to the cluster...
   [INFO] Table update statement has been successfully submitted to the cluster:
   Job ID: bbd88410d6bc8450b3beadcfe9659137
   
   Flink SQL> select * from iceberg_table;
   +----+------+
   | id | data |
   +----+------+
   |  1 |    d |
   |  1 |    b |
   |  1 |    e |
   |  1 |    a |
   |  1 |    f |
   |  1 |    a |
   |  1 |    c |
   +----+------+
   7 rows in set
   ```
   
   Could you show me the exact steps to reproduce this class loader issue? I'd be glad to dig into why it breaks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org


[GitHub] [iceberg] iQiuyu-0821 commented on issue #2468: master branch - flink sql create hive catalog error

Posted by GitBox <gi...@apache.org>.
iQiuyu-0821 commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-860508692


   I have the same problem.
   
   **runtime env**
   flink: 1.12.1
   iceberg-flink-runtime: 0.12.0 builder from master
   **catalog info**
   `
   catalogs: 
       - name: iceberg
         type: iceberg
         catalog-type: hadoop
         warehouse: hdfs://localhost:9000/flink-iceberg/warehouse
         property-version: 1
         clients: 5
   `
   
   **execute sql**
   `
   Flink SQL> set execution.type = streaming;
   [INFO] Session property has been set.
   
   Flink SQL> set table.dynamic-table-options.enabled = true;
   [INFO] Session property has been set.
   
   Flink SQL> select * from iceberg.iceberg_db.t1 /*+ OPTIONS('streaming'='true', 'monitor-interval'='1s')*/;
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   
   Flink SQL> 
   `
   **error info:**
   `2021-06-14 16:35:57,838 INFO  org.apache.iceberg.BaseMetastoreCatalog                      [] - Table loaded by catalog: iceberg.iceberg_db.t1
   2021-06-14 16:35:59,910 WARN  org.apache.flink.table.client.cli.CliClient                  [] - Could not execute SQL statement.
   org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL query.
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:548) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:374) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:648) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:323) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_171]
   	at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:214) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:144) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:115) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   Caused by: java.lang.IllegalArgumentException: Cannot initialize Catalog, org.apache.iceberg.hadoop.HadoopCatalog does not implement Catalog.
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:186) ~[?:?]
   	at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79) ~[?:?]
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163) ~[?:?]
   	at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:91) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlan(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.Iterator$class.foreach(Iterator.scala:891) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	... 8 more
   Caused by: java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:182) ~[?:?]
   	at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79) ~[?:?]
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163) ~[?:?]
   	at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:91) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlan(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.Iterator$class.foreach(Iterator.scala:891) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	... 8 more
   `


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org


[GitHub] [iceberg] asnowfox edited a comment on issue #2468: master branch - flink sql create hive catalog error

Posted by GitBox <gi...@apache.org>.
asnowfox edited a comment on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-927508700


   in my flink cluster. it runs like this:
   ```
   Flink SQL> CREATE CATALOG flow_catalog WITH (
   >   'type'='iceberg',
   >   'catalog-type'='hadoop',
   >   'warehouse'='hdfs://namenode-206-10:8020/warehouse/flows',
   >   'property-version'='1'
   > );
   2021-09-27 11:56:43,219 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
   [INFO] Catalog has been created.
   
   Flink SQL> use catalog flow_catalog;
   
   Flink SQL> CREATE DATABASE iceberg_db;
   [ERROR] Could not execute SQL statement. Reason:
   org.apache.iceberg.exceptions.AlreadyExistsException: Namespace already exists: iceberg_db
   
   Flink SQL> drop database iceberg_db;
   [ERROR] Could not execute SQL statement. Reason:
   org.apache.iceberg.exceptions.NamespaceNotEmptyException: Namespace iceberg_db is not empty.
   
   Flink SQL> use iceberg_db;
   
   Flink SQL> show tables;
   test
   
   Flink SQL> drop table test;
   [INFO] Table has been removed.
   
   Flink SQL> drop database iceberg_db;
   [INFO] Database has been removed.
   
   Flink SQL> CREATE DATABASE iceberg_db;
   [INFO] Database has been created.
   
   Flink SQL> USE iceberg_db;
   
   Flink SQL> CREATE TABLE test (
   >     id BIGINT COMMENT 'unique id',
   >     data STRING
   > );
   [INFO] Table has been created.
   
   Flink SQL> INSERT INTO test VALUES (1, 'a');
   [INFO] Submitting SQL update statement to the cluster...
   [INFO] Table update statement has been successfully submitted to the cluster:
   Job ID: 68bf513689c4920e2af3d49f4231945f
   
   
   Flink SQL> select * from test;
   [INFO] Result retrieval cancelled.
   
   Flink SQL> 
   > SET execution.type = batch;
   [INFO] Session property has been set.
   
   Flink SQL> select * from test;
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   
   Flink SQL>  set execution.type = streaming;
   [INFO] Session property has been set.
   
   Flink SQL> select * from test;
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   
   Flink SQL> 
   ```
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org


[GitHub] [iceberg] asnowfox commented on issue #2468: master branch - flink sql create hive catalog error

Posted by GitBox <gi...@apache.org>.
asnowfox commented on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-906226578


   I think I found the reason of this problem. in DynConstructors.java which borrowed from parquet-common, the ClassLoader in Builder is a loader of currentThread, looks like this
   ```
   public static class Builder {
       private final Class<?> baseClass;
       private ClassLoader loader = Thread.currentThread().getContextClassLoader();
       private Ctor ctor = null;
       private Map<String, Throwable> problems = new HashMap<String, Throwable>();
   
       public Builder(Class<?> baseClass) {
         this.baseClass = baseClass;
       }
   
       public Builder() {
         this.baseClass = null;
       }
   ......
   ```
   So,  when during runtime, the dynamic class is loaded by current thread, and the base class/interface is not  loaded by current thread. it will have the cast Exception.
   
   Proposed solution:
   ```
   public static class Builder {
       private final Class<?> baseClass;
       private ClassLoader loader = Thread.currentThread().getContextClassLoader();
       private Ctor ctor = null;
       private Map<String, Throwable> problems = new HashMap<String, Throwable>();
   
       public Builder(Class<?> baseClass) {
         this.baseClass = baseClass;
         // use the ClassLoader that defined the base class, so the
         // dynamically loaded implementation can be cast to it
         this.loader = this.baseClass.getClassLoader();
       }
   
       public Builder() {
         this.baseClass = null;
       }
   ......
   ```
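
   As a stop-gap on the caller side, something like the following sketch should also avoid the mismatch (hypothetical helper, not from this thread; the loadCatalog signature is the 0.12-era one visible in the stack traces): align the context ClassLoader with the loader that defined the Catalog interface before loading.
   
   ```
   import java.util.Map;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.iceberg.CatalogUtil;
   import org.apache.iceberg.catalog.Catalog;
   
   public class CatalogLoadWorkaround {
   
     // Load a catalog with the context ClassLoader pointed at the loader
     // that defined the Catalog interface, then restore the old loader.
     public static Catalog loadWithAlignedLoader(
         String impl, String name, Map<String, String> props, Configuration conf) {
       Thread thread = Thread.currentThread();
       ClassLoader previous = thread.getContextClassLoader();
       thread.setContextClassLoader(Catalog.class.getClassLoader());
       try {
         return CatalogUtil.loadCatalog(impl, name, props, conf);
       } finally {
         thread.setContextClassLoader(previous);
       }
     }
   }
   ```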
   
   




[GitHub] [iceberg] asnowfox edited a comment on issue #2468: master branch - flink sql create hive catalog error

Posted by GitBox <gi...@apache.org>.
asnowfox edited a comment on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-900375672


   I also hit the same error.
   I am running a Flink cluster of version 1.12.2, so I recompiled Iceberg against flink-1.12.2 and it compiled without problems.
   I then used the Flink SQL client to test Iceberg.
   The following statements work fine:
   ```
   CREATE CATALOG flow_catalog WITH (
     'type'='iceberg',
     'catalog-type'='hadoop',
     'warehouse'='hdfs://namenode-206-10:8020/warehouse/flows',
     'property-version'='1'
   );
   use catalog flow_catalog;
   CREATE DATABASE iceberg_db;
   USE iceberg_db;
   CREATE TABLE test (
       id BIGINT COMMENT 'unique id',
       data STRING
   );
   INSERT INTO test VALUES (1, 'a');
   select * from test;
   ```
   But when I change **execution.type to batch**, I get the same exception. (Note the `ExecutionContext.wrapClassLoader` frame near the bottom of the trace: the SQL client swaps the context ClassLoader there, which would be consistent with the ClassLoader mismatch described above.)
   ```
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:183)
   	at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79)
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108)
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178)
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204)
   	at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110)
   	at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49)
   	at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163)
   	at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:94)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:44)
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan(ExecNode.scala:59)
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan$(ExecNode.scala:57)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:44)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToTransformation(BatchExecLegacySink.scala:129)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:95)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:48)
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan(ExecNode.scala:59)
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan$(ExecNode.scala:57)
   	at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlan(BatchExecLegacySink.scala:48)
   	at org.apache.flink.table.planner.delegation.BatchPlanner.$anonfun$translateToPlan$1(BatchPlanner.scala:86)
   	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
   	at scala.collection.Iterator.foreach(Iterator.scala:937)
   	at scala.collection.Iterator.foreach$(Iterator.scala:937)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:70)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
   	at scala.collection.TraversableLike.map(TraversableLike.scala:233)
   	at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
   	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
   	at org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:85)
   	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:162)
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329)
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321)
   	at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328)
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287)
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256)
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282)
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542)
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:374)
   	at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:648)
   	at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:323)
   	at java.util.Optional.ifPresent(Optional.java:159)
   	at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:214)
   	at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:144)
   	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:115)
   	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
   ```
   
   



[GitHub] [iceberg] iQiuyu-0821 edited a comment on issue #2468: master branch - flink sql create hive catalog error

Posted by GitBox <gi...@apache.org>.
iQiuyu-0821 edited a comment on issue #2468:
URL: https://github.com/apache/iceberg/issues/2468#issuecomment-860508692


   I have the same problem.
   
   **runtime env:**
   
   flink: 1.12.1
   iceberg-flink-runtime: 0.12.0, built from master
   
   **catalog info:**
   
   ```
   catalogs: 
       - name: iceberg
         type: iceberg
         catalog-type: hadoop
         warehouse: hdfs://localhost:9000/flink-iceberg/warehouse
         property-version: 1
         clients: 5
   ```
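
   (For anyone reproducing this outside the SQL client: the hadoop catalog above corresponds roughly to the following on the Java side; a sketch using the 0.12-era HadoopCatalog constructor, with the warehouse path from the YAML.)
   
   ```
   import org.apache.hadoop.conf.Configuration;
   import org.apache.iceberg.hadoop.HadoopCatalog;
   
   public class HadoopCatalogSketch {
     public static void main(String[] args) {
       // Configuration picks up HDFS settings (core-site.xml / hdfs-site.xml)
       // from the classpath; the warehouse path matches the YAML above.
       HadoopCatalog catalog = new HadoopCatalog(
           new Configuration(), "hdfs://localhost:9000/flink-iceberg/warehouse");
       System.out.println(catalog);
     }
   }
   ```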
   
   **execute sql:**
   
   ```
   Flink SQL> set execution.type = streaming;
   [INFO] Session property has been set.
   
   Flink SQL> set table.dynamic-table-options.enabled = true;
   [INFO] Session property has been set.
   
   Flink SQL> select * from iceberg.iceberg_db.t1 /*+ OPTIONS('streaming'='true', 'monitor-interval'='1s')*/;
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   ```
   
   **error info:**
   
   ```
   2021-06-14 16:35:57,838 INFO  org.apache.iceberg.BaseMetastoreCatalog                      [] - Table loaded by catalog: iceberg.iceberg_db.t1
   2021-06-14 16:35:59,910 WARN  org.apache.flink.table.client.cli.CliClient                  [] - Could not execute SQL statement.
   org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL query.
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:548) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:374) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:648) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:323) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_171]
   	at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:214) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:144) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:115) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201) [flink-sql-client_2.11-1.12.1.jar:1.12.1]
   Caused by: java.lang.IllegalArgumentException: Cannot initialize Catalog, org.apache.iceberg.hadoop.HadoopCatalog does not implement Catalog.
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:186) ~[?:?]
   	at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79) ~[?:?]
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163) ~[?:?]
   	at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:91) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlan(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.Iterator$class.foreach(Iterator.scala:891) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	... 8 more
   Caused by: java.lang.ClassCastException: org.apache.iceberg.hadoop.HadoopCatalog cannot be cast to org.apache.iceberg.catalog.Catalog
   	at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:182) ~[?:?]
   	at org.apache.iceberg.flink.CatalogLoader$HadoopCatalogLoader.loadCatalog(CatalogLoader.java:79) ~[?:?]
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.buildFormat(FlinkSource.java:178) ~[?:?]
   	at org.apache.iceberg.flink.source.FlinkSource$Builder.build(FlinkSource.java:204) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.createDataStream(IcebergTableSource.java:110) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource.access$000(IcebergTableSource.java:49) ~[?:?]
   	at org.apache.iceberg.flink.IcebergTableSource$1.produceDataStream(IcebergTableSource.java:163) ~[?:?]
   	at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:91) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlan(StreamExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.Iterator$class.foreach(Iterator.scala:891) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-dist_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:328) ~[flink-table-blink_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:287) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:256) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:282) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:542) ~[flink-sql-client_2.11-1.12.1.jar:1.12.1]
   	... 8 more
   ```
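
   A quick check to confirm the mismatch the trace points at is to print the defining loaders of both classes; a sketch (class names taken from the trace above, both must be on the classpath):
   
   ```
   public class LoaderDiagnostic {
     public static void main(String[] args) throws Exception {
       Class<?> iface = org.apache.iceberg.catalog.Catalog.class;
       Class<?> impl = Class.forName(
           "org.apache.iceberg.hadoop.HadoopCatalog",
           false,
           Thread.currentThread().getContextClassLoader());
       // If these print different loaders, the cast in CatalogUtil.loadCatalog
       // is doomed: same class name, different defining ClassLoader.
       System.out.println("Catalog interface loader: " + iface.getClassLoader());
       System.out.println("HadoopCatalog loader:     " + impl.getClassLoader());
     }
   }
   ```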

