Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2020/12/31 10:28:04 UTC

[GitHub] [iceberg] masonone opened a new issue #2015: I got this error using Iceberg; I want to know what caused it.

masonone opened a new issue #2015:
URL: https://github.com/apache/iceberg/issues/2015


   ```
   Fail to run sql command: SELECT * FROM sample
   org.apache.flink.table.api.TableException: Failed to execute sql
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:749)
   	at org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:570)
   	at org.apache.zeppelin.flink.Flink111Shims.collectToList(Flink111Shims.java:174)
   	at org.apache.zeppelin.flink.FlinkZeppelinContext.showData(FlinkZeppelinContext.scala:115)
   	at org.apache.zeppelin.interpreter.ZeppelinContext.showData(ZeppelinContext.java:67)
   	at org.apache.zeppelin.flink.FlinkBatchSqlInterpreter.callInnerSelect(FlinkBatchSqlInterpreter.java:60)
   	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callSelect(FlinkSqlInterrpeter.java:494)
   	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:265)
   	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:159)
   	at org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:124)
   	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
   	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:776)
   	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
   	at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
   	at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
   	at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   	at java.lang.Thread.run(Thread.java:748)
   Caused by: org.apache.flink.util.FlinkException: Failed to execute job 'collect'.
   	at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1829)
   	at org.apache.flink.api.java.ScalaShellStreamEnvironment.executeAsync(ScalaShellStreamEnvironment.java:75)
   	at org.apache.flink.table.planner.delegation.ExecutorBase.executeAsync(ExecutorBase.java:57)
   	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:738)
   	... 18 more
   Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
   	at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:366)
   	at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
   	at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
   	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
   	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
   	at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:292)
   	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
   	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
   	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
   	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
   	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
   	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
   	... 3 more
   Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
   org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
   	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:362)
   	at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
   	at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
   	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
   	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
   	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
   	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
   	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
   	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
   	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
   Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not instantiate JobManager.
   	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:427)
   	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
   	... 6 more
   Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Stack map does not match the one at exception handler 69
   Exception Details:
     Location:
       org/apache/iceberg/hive/HiveCatalog.loadNamespaceMetadata(Lorg/apache/iceberg/catalog/Namespace;)Ljava/util/Map; @69: astore_2
     Reason:
       Type 'org/apache/hadoop/hive/metastore/api/NoSuchObjectException' (current frame, stack[0]) is not assignable to 'org/apache/thrift/TException' (stack map, stack[0])
     Current Frame:
       bci: @26
       flags: { }
       locals: { 'org/apache/iceberg/hive/HiveCatalog', 'org/apache/iceberg/catalog/Namespace' }
       stack: { 'org/apache/hadoop/hive/metastore/api/NoSuchObjectException' }
     Stackmap Frame:
       bci: @69
       flags: { }
       locals: { 'org/apache/iceberg/hive/HiveCatalog', 'org/apache/iceberg/catalog/Namespace' }
       stack: { 'org/apache/thrift/TException' }
     Bytecode:
       0x0000000: 2a2b b700 759a 0015 bb00 c759 12c9 04bd
       0x0000010: 00cb 5903 2b53 b700 cebf 2ab4 0038 2bba
       0x0000020: 0236 0000 b600 9ac0 0238 4d2a 2cb7 023c
       0x0000030: 4eb2 00bd 1302 3e2b 2db9 0204 0100 b900
       0x0000040: c504 002d b04d bb00 c759 2c12 c904 bd00
       0x0000050: cb59 032b 53b7 0229 bf4d bb00 d059 bb00
       0x0000060: d259 b700 d313 022b b600 d92b b600 dc13
       0x0000070: 01a4 b600 d9b6 00e0 2cb7 00e3 bf4d b800
       0x0000080: 40b6 00e6 bb00 d059 bb00 d259 b700 d313
       0x0000090: 022d b600 d92b b600 dc13 01a4 b600 d9b6
       0x00000a0: 00e0 2cb7 00e3 bf                      
     Exception Handler Table:
       bci [26, 68] => handler: 69
       bci [26, 68] => handler: 69
       bci [26, 68] => handler: 89
       bci [26, 68] => handler: 125
     Stackmap Table:
       same_frame(@26)
       same_locals_1_stack_item_frame(@69,Object[#111])
       same_locals_1_stack_item_frame(@89,Object[#111])
       same_locals_1_stack_item_frame(@125,Object[#113])
   
   	at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:272)
   	at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:814)
   	at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:228)
   	at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:270)
   	at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:244)
   	at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:231)
   	at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:119)
   	at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:103)
   	at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:290)
   	at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:278)
   	at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
   	at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
   	at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:140)
   	at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
   	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:417)
   	... 7 more
   Caused by: java.lang.VerifyError: Stack map does not match the one at exception handler 69
   Exception Details:
     Location:
       org/apache/iceberg/hive/HiveCatalog.loadNamespaceMetadata(Lorg/apache/iceberg/catalog/Namespace;)Ljava/util/Map; @69: astore_2
     Reason:
       Type 'org/apache/hadoop/hive/metastore/api/NoSuchObjectException' (current frame, stack[0]) is not assignable to 'org/apache/thrift/TException' (stack map, stack[0])
     Current Frame:
       bci: @26
       flags: { }
       locals: { 'org/apache/iceberg/hive/HiveCatalog', 'org/apache/iceberg/catalog/Namespace' }
       stack: { 'org/apache/hadoop/hive/metastore/api/NoSuchObjectException' }
     Stackmap Frame:
       bci: @69
       flags: { }
       locals: { 'org/apache/iceberg/hive/HiveCatalog', 'org/apache/iceberg/catalog/Namespace' }
       stack: { 'org/apache/thrift/TException' }
     Bytecode:
       0x0000000: 2a2b b700 759a 0015 bb00 c759 12c9 04bd
       0x0000010: 00cb 5903 2b53 b700 cebf 2ab4 0038 2bba
       0x0000020: 0236 0000 b600 9ac0 0238 4d2a 2cb7 023c
       0x0000030: 4eb2 00bd 1302 3e2b 2db9 0204 0100 b900
       0x0000040: c504 002d b04d bb00 c759 2c12 c904 bd00
       0x0000050: cb59 032b 53b7 0229 bf4d bb00 d059 bb00
       0x0000060: d259 b700 d313 022b b600 d92b b600 dc13
       0x0000070: 01a4 b600 d9b6 00e0 2cb7 00e3 bf4d b800
       0x0000080: 40b6 00e6 bb00 d059 bb00 d259 b700 d313
       0x0000090: 022d b600 d92b b600 dc13 01a4 b600 d9b6
       0x00000a0: 00e0 2cb7 00e3 bf                      
     Exception Handler Table:
       bci [26, 68] => handler: 69
       bci [26, 68] => handler: 69
       bci [26, 68] => handler: 89
       bci [26, 68] => handler: 125
     Stackmap Table:
       same_frame(@26)
       same_locals_1_stack_item_frame(@69,Object[#111])
       same_locals_1_stack_item_frame(@89,Object[#111])
       same_locals_1_stack_item_frame(@125,Object[#113])
   
   	at org.apache.iceberg.flink.CatalogLoader$HiveCatalogLoader.loadCatalog(CatalogLoader.java:95)
   	at org.apache.iceberg.flink.TableLoader$CatalogTableLoader.open(TableLoader.java:108)
   	at org.apache.iceberg.flink.source.FlinkInputFormat.createInputSplits(FlinkInputFormat.java:75)
   	at org.apache.iceberg.flink.source.FlinkInputFormat.createInputSplits(FlinkInputFormat.java:40)
   	at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:258)
   	... 21 more
   
   End of exception on server side>]
   	at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:390)
   	at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:374)
   	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966)
   	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
   	... 4 more
   ```
   Versions:
   - Flink 1.11.3
   - Hive 3.1.2
   - iceberg-flink-runtime-0.10.0
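
   With these versions, the failure appears when the `SELECT * FROM sample` shown in the trace is submitted. For readers reproducing this outside Zeppelin, the sketch below shows the kind of Flink setup that exercises the same code path; the catalog name, metastore URI, and warehouse path are placeholders, not values from this report.

   ```java
   // Hypothetical reproduction sketch (Flink 1.11-style Table API): register an Iceberg
   // Hive catalog via Flink SQL and query a table in batch mode.
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;

   public class IcebergHiveQuery {
     public static void main(String[] args) {
       TableEnvironment tEnv = TableEnvironment.create(
           EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

       // Placeholder connection details; 'type'='iceberg' + 'catalog-type'='hive' is the
       // documented way to point Flink at an Iceberg Hive catalog.
       tEnv.executeSql(
           "CREATE CATALOG hive_catalog WITH ("
               + " 'type'='iceberg',"
               + " 'catalog-type'='hive',"
               + " 'uri'='thrift://metastore-host:9083',"
               + " 'warehouse'='hdfs://namenode:8020/warehouse'"
               + ")");
       tEnv.executeSql("USE CATALOG hive_catalog");

       // Same query as in the report; per the trace, the VerifyError is raised while the
       // JobManager loads HiveCatalog to create input splits for this scan.
       tEnv.executeSql("SELECT * FROM sample").print();
     }
   }
   ```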
   
   Description:
   I got this error while using Iceberg and would like to know what caused it.
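
   For anyone hitting the same trace: the `VerifyError` says that, with the classes actually loaded at runtime, `org.apache.hadoop.hive.metastore.api.NoSuchObjectException` is not assignable to `org.apache.thrift.TException`, while the stack map compiled into `HiveCatalog.loadNamespaceMetadata` expects it to be. That kind of mismatch usually means the Hive metastore and Thrift classes on the Flink classpath come from different versions than the ones Iceberg was built against. The sketch below is a hypothetical diagnostic, not part of Iceberg; it only uses the two class names from the trace and can be run with the same classpath as the job to see which jars they resolve to.

   ```java
   // Hypothetical diagnostic: report where the two classes named in the VerifyError are
   // loaded from and whether their runtime hierarchy matches what the verifier expected.
   public class HiveThriftClasspathCheck {
     public static void main(String[] args) throws Exception {
       Class<?> nso = Class.forName("org.apache.hadoop.hive.metastore.api.NoSuchObjectException");
       Class<?> tex = Class.forName("org.apache.thrift.TException");

       // Conflicting hive-metastore / libthrift jars show up as different locations here.
       System.out.println("NoSuchObjectException <- "
           + nso.getProtectionDomain().getCodeSource().getLocation());
       System.out.println("TException            <- "
           + tex.getProtectionDomain().getCodeSource().getLocation());

       // The VerifyError above implies this prints false at runtime, even though it was
       // true on the classpath iceberg-hive was compiled against.
       System.out.println("assignable: " + tex.isAssignableFrom(nso));
     }
   }
   ```

   If the two classes resolve to jars built against different Hive/Thrift versions, getting a single consistent set of Hive and Thrift jars on the classpath is the usual fix; the comments below point to #2057 for the same symptom.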
   




[GitHub] [iceberg] masonone commented on issue #2015: I got this error using Iceberg; I want to know what caused it.

Posted by GitBox <gi...@apache.org>.
masonone commented on issue #2015:
URL: https://github.com/apache/iceberg/issues/2015#issuecomment-760580927


   > It looks the same as this one:
   > #2057
   
   OK, thanks.




[GitHub] [iceberg] dixingxing0 commented on issue #2015: I got this error using Iceberg; I want to know what caused it.

Posted by GitBox <gi...@apache.org>.
dixingxing0 commented on issue #2015:
URL: https://github.com/apache/iceberg/issues/2015#issuecomment-758602457


   It looks the same as this one:
   https://github.com/apache/iceberg/issues/2057




[GitHub] [iceberg] masonone closed issue #2015: I got this error using Iceberg; I want to know what caused it.

Posted by GitBox <gi...@apache.org>.
masonone closed issue #2015:
URL: https://github.com/apache/iceberg/issues/2015


   

