Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2021/05/28 02:05:24 UTC
[GitHub] [iceberg] hashmapybx opened a new issue #2647: flink+iceberg start on flink sql client, there are some errors.
hashmapybx opened a new issue #2647:
URL: https://github.com/apache/iceberg/issues/2647
Flink SQL> CREATE CATALOG hive_catalog WITH (
> 'type'='iceberg',
> 'catalog-type'='hive',
> 'uri'='thrift://localhost:9083',
> 'clients'='5',
> 'property-version'='1',
> 'warehouse'='hdfs://localhost:9000/user/hive/warehouse'
> );
2021-05-27 21:59:18,940 INFO org.apache.hadoop.hive.conf.HiveConf [] - Found configuration file null
[INFO] Catalog has been created.
Flink SQL> use catalog hive_catalog;
Flink SQL> show tables;
[INFO] Result was empty.
Flink SQL> show databases;
default
iceberg_db
Flink SQL> use iceberg_db;
Flink SQL> show tables;
iceberg_001
sample
sourcetable
stu
Flink SQL> select * from sourcetable;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:362)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not instantiate JobManager.
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:427)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
... 6 more
Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Stack map does not match the one at exception handler 70
Exception Details:
Location:
org/apache/iceberg/hive/HiveCatalog.loadNamespaceMetadata(Lorg/apache/iceberg/catalog/Namespace;)Ljava/util/Map; @70: astore_2
Reason:
Type 'org/apache/hadoop/hive/metastore/api/NoSuchObjectException' (current frame, stack[0]) is not assignable to 'org/apache/thrift/TException' (stack map, stack[0])
Current Frame:
bci: @27
flags: { }
locals: { 'org/apache/iceberg/hive/HiveCatalog', 'org/apache/iceberg/catalog/Namespace' }
stack: { 'org/apache/hadoop/hive/metastore/api/NoSuchObjectException' }
Stackmap Frame:
bci: @70
flags: { }
locals: { 'org/apache/iceberg/hive/HiveCatalog', 'org/apache/iceberg/catalog/Namespace' }
stack: { 'org/apache/thrift/TException' }
Bytecode:
0x0000000: 2a2b b700 c59a 0016 bb01 2c59 1301 2e04
0x0000010: bd01 3059 032b 53b7 0133 bf2a b400 3e2b
0x0000020: ba02 8e00 00b6 00e8 c002 904d 2a2c b702
0x0000030: 944e b201 2213 0296 2b2d b902 5d01 00b9
0x0000040: 012a 0400 2db0 4dbb 012c 592c 1301 2e04
0x0000050: bd01 3059 032b 53b7 0281 bf4d bb01 3559
0x0000060: bb01 3759 b701 3813 0283 b601 3e2b b601
0x0000070: 4113 0208 b601 3eb6 0144 2cb7 0147 bf4d
0x0000080: b800 46b6 014a bb01 3559 bb01 3759 b701
0x0000090: 3813 0285 b601 3e2b b601 4113 0208 b601
0x00000a0: 3eb6 0144 2cb7 0147 bf
Exception Handler Table:
bci [27, 69] => handler: 70
bci [27, 69] => handler: 70
bci [27, 69] => handler: 91
bci [27, 69] => handler: 127
Stackmap Table:
same_frame(@27)
same_locals_1_stack_item_frame(@70,Object[#191])
same_locals_1_stack_item_frame(@91,Object[#191])
same_locals_1_stack_item_frame(@127,Object[#193])
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:272)
at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:814)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:228)
at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:270)
at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:244)
at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:231)
at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:119)
at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:103)
at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:290)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:278)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:140)
at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:417)
... 7 more
Caused by: java.lang.VerifyError: Stack map does not match the one at exception handler 70
Exception Details:
Location:
org/apache/iceberg/hive/HiveCatalog.loadNamespaceMetadata(Lorg/apache/iceberg/catalog/Namespace;)Ljava/util/Map; @70: astore_2
Reason:
Type 'org/apache/hadoop/hive/metastore/api/NoSuchObjectException' (current frame, stack[0]) is not assignable to 'org/apache/thrift/TException' (stack map, stack[0])
Current Frame:
bci: @27
flags: { }
locals: { 'org/apache/iceberg/hive/HiveCatalog', 'org/apache/iceberg/catalog/Namespace' }
stack: { 'org/apache/hadoop/hive/metastore/api/NoSuchObjectException' }
Stackmap Frame:
bci: @70
flags: { }
locals: { 'org/apache/iceberg/hive/HiveCatalog', 'org/apache/iceberg/catalog/Namespace' }
stack: { 'org/apache/thrift/TException' }
Bytecode:
0x0000000: 2a2b b700 c59a 0016 bb01 2c59 1301 2e04
0x0000010: bd01 3059 032b 53b7 0133 bf2a b400 3e2b
0x0000020: ba02 8e00 00b6 00e8 c002 904d 2a2c b702
0x0000030: 944e b201 2213 0296 2b2d b902 5d01 00b9
0x0000040: 012a 0400 2db0 4dbb 012c 592c 1301 2e04
0x0000050: bd01 3059 032b 53b7 0281 bf4d bb01 3559
0x0000060: bb01 3759 b701 3813 0283 b601 3e2b b601
0x0000070: 4113 0208 b601 3eb6 0144 2cb7 0147 bf4d
0x0000080: b800 46b6 014a bb01 3559 bb01 3759 b701
0x0000090: 3813 0285 b601 3e2b b601 4113 0208 b601
0x00000a0: 3eb6 0144 2cb7 0147 bf
Exception Handler Table:
bci [27, 69] => handler: 70
bci [27, 69] => handler: 70
bci [27, 69] => handler: 91
bci [27, 69] => handler: 127
Stackmap Table:
same_frame(@27)
same_locals_1_stack_item_frame(@70,Object[#191])
same_locals_1_stack_item_frame(@91,Object[#191])
same_locals_1_stack_item_frame(@127,Object[#193])
at org.apache.iceberg.flink.CatalogLoader$HiveCatalogLoader.loadCatalog(CatalogLoader.java:112)
at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108)
at org.apache.iceberg.flink.source.FlinkInputFormat.createInputSplits(FlinkInputFormat.java:76)
at org.apache.iceberg.flink.source.FlinkInputFormat.createInputSplits(FlinkInputFormat.java:40)
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:258)
... 21 more
End of exception on server side>]
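For readers hitting the same trace: the VerifyError above appears to mean that, on this classpath, Hive's NoSuchObjectException is not a subtype of org.apache.thrift.TException the way it was when HiveCatalog was compiled (typically a shaded/unshaded thrift or mismatched Hive jar). Below is a minimal, self-contained sketch of the catch-handler pattern involved; all class names are hypothetical, not the Iceberg source.

```java
public class VerifyErrorSketch {
    static class Parent extends Exception {}
    // In a mismatched jar, a class like Child might no longer extend Parent,
    // invalidating the stack map frames javac emitted for the handlers below.
    static class Child extends Parent {}

    static void call() throws Parent {
        throw new Child();
    }

    static String demo() {
        try {
            call();
        } catch (Child e) {    // handler typed against the compile-time hierarchy
            return "caught Child";
        } catch (Parent e) {   // stack map here expects a Parent on the stack
            return "caught Parent";
        }
        return "no exception";
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

javac records stack map frames describing the exception type at each handler; if the runtime class hierarchy diverges from the compile-time one, the verifier rejects the whole class with a "not assignable" message like the one in the trace above.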
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org
[GitHub] [iceberg] miaowenting commented on issue #2647: flink+iceberg start on flink sql client, there are some errors.
miaowenting commented on issue #2647:
URL: https://github.com/apache/iceberg/issues/2647#issuecomment-851938781
Open your sql-client.sh and add -noverify after $JVM_ARGS in the two places it appears, to skip bytecode verification. You can try that.
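For illustration only (the exact launcher lines differ across Flink versions), the change amounts to appending -noverify wherever sql-client.sh passes $JVM_ARGS to the JVM:

```shell
# Workaround, not a fix: -noverify disables JVM bytecode verification,
# which hides the VerifyError instead of resolving the jar mismatch.
JVM_ARGS="-Xmx512m"                 # placeholder; the real script sets more flags
JVM_ARGS="$JVM_ARGS -noverify"      # append in both places $JVM_ARGS is used
echo "$JVM_ARGS"
```

Note that this only suppresses verification; the underlying classpath mismatch remains.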
[GitHub] [iceberg] hashmapybx closed issue #2647: flink+iceberg start on flink sql client, there are some errors.
hashmapybx closed issue #2647:
URL: https://github.com/apache/iceberg/issues/2647
[GitHub] [iceberg] malloy-J commented on issue #2647: flink+iceberg start on flink sql client, there are some errors.
malloy-J commented on issue #2647:
URL: https://github.com/apache/iceberg/issues/2647#issuecomment-853023823
How did you fix this issue? I get the same error; please give me some advice.
[GitHub] [iceberg] openinx commented on issue #2647: flink+iceberg start on flink sql client, there are some errors.
openinx commented on issue #2647:
URL: https://github.com/apache/iceberg/issues/2647#issuecomment-851958766
I think the root cause is that we did not use matching versions of Apache Iceberg, Apache Flink, and Apache Hive. For example, if you choose hive-2.3.6 and flink-1.11.3, you will need to build your own iceberg-flink-runtime-xx jar against hive-2.3.6 and flink-1.11.3, and then use that generated jar to run the Flink jobs.
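As a sketch of that suggestion, building a matching runtime jar from source looks roughly like the commands below; the release tag and Gradle module name are assumptions and vary between Iceberg releases, so check the repo for the exact names.

```shell
# Build an iceberg-flink-runtime jar against the Flink/Hive versions you run.
git clone https://github.com/apache/iceberg.git
cd iceberg
git checkout apache-iceberg-0.11.1        # tag name is an assumption
./gradlew :iceberg-flink-runtime:shadowJar
# The built jar should appear under flink-runtime/build/libs/ (path may vary);
# copy it into Flink's lib/ directory, replacing the downloaded runtime jar.
```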
[GitHub] [iceberg] liubo1022126 commented on issue #2647: flink+iceberg start on flink sql client, there are some errors.
liubo1022126 commented on issue #2647:
URL: https://github.com/apache/iceberg/issues/2647#issuecomment-851497990
Me too.
[GitHub] [iceberg] hashmapybx commented on issue #2647: flink+iceberg start on flink sql client, there are some errors.
hashmapybx commented on issue #2647:
URL: https://github.com/apache/iceberg/issues/2647#issuecomment-851961500
> I think the root cause is that we did not use matching versions of Apache Iceberg, Apache Flink, and Apache Hive. For example, if you choose hive-2.3.6 and flink-1.11.3, you will need to build your own iceberg-flink-runtime-xx jar against hive-2.3.6 and flink-1.11.3, and then use that generated jar to run the Flink jobs.
[GitHub] [iceberg] hashmapybx commented on issue #2647: flink+iceberg start on flink sql client, there are some errors.
hashmapybx commented on issue #2647:
URL: https://github.com/apache/iceberg/issues/2647#issuecomment-850058902
My development environment is:
apache-hive-2.3.6-bin
flink-1.11.3
iceberg-flink-runtime-0.11.1.jar
flink-sql-connector-hive-2.3.6_2.11-1.11.0.jar