Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2021/01/22 08:06:54 UTC

[GitHub] [iceberg] openinx opened a new issue #2133: Flink: it will throw exception when executing 'CREATE TABLE IF NOT EXITS'

openinx opened a new issue #2133:
URL: https://github.com/apache/iceberg/issues/2133


   ```java
   Failed to submit statement `USE CATALOG iceberg_catalog;
   USE iceberg_db;
   
   CREATE TABLE IF NOT EXISTS lineitem (
     l_orderkey bigint,
     l_partkey bigint,
     l_suppkey bigint,
     l_linenumber bigint,
     l_quantity double,
     l_extendedprice double,
     l_discount double,
     l_tax double,
     l_returnflag string,
     l_linestatus string,
     l_shipdate string,
     l_commitdate string,
     l_receiptdate string,
     l_shipinstruct string,
     l_shipmode string,
     l_comment string)
   WITH (
     'write.format.default' = 'parquet'
   );
   ` to server
     at com.ververica.flink.table.gateway.client.SessionClient.submitStatement(SessionClient.java:169)
     at com.ververica.flink.table.gateway.client.SqlGatewayClient.runInternal(SqlGatewayClient.java:123)
     at com.ververica.flink.table.gateway.client.SqlGatewayClient.run(SqlGatewayClient.java:111)
     at com.ververica.flink.table.gateway.client.SqlGatewayClient.parseParameters(SqlGatewayClient.java:75)
     at com.ververica.flink.table.gateway.client.SqlGatewayClient.main(SqlGatewayClient.java:53)
   Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
   org.apache.flink.table.api.ValidationException: Could not execute CreateTable in path `iceberg_catalog`.`iceberg_db`.`lineitem`
     at org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:793)
     at org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:631)
     at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:768)
     at com.ververica.flink.table.gateway.operation.FlinkOperationWrapper.lambda$execute$0(FlinkOperationWrapper.java:47)
     at com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:130)
     at com.ververica.flink.table.gateway.operation.FlinkOperationWrapper.execute(FlinkOperationWrapper.java:47)
     at com.ververica.flink.table.gateway.operation.executor.MultipleSinksOperationExecutor.execute(MultipleSinksOperationExecutor.java:72)
     at com.ververica.flink.table.gateway.rest.session.Session.lambda$runStatement$1(Session.java:115)
     at com.ververica.flink.table.gateway.utils.EnvironmentUtil.lambda$wrapWithHadoopUsernameIfNeeded$0(EnvironmentUtil.java:57)
     at com.ververica.flink.table.gateway.utils.EnvironmentUtil.wrapWithHadoopUsernameIfNeeded(EnvironmentUtil.java:65)
     at com.ververica.flink.table.gateway.utils.EnvironmentUtil.wrapWithHadoopUsernameIfNeeded(EnvironmentUtil.java:56)
     at com.ververica.flink.table.gateway.rest.session.Session.runStatement(Session.java:114)
     at com.ververica.flink.table.gateway.rest.handler.StatementExecuteHandler.handleRequest(StatementExecuteHandler.java:83)
     at com.ververica.flink.table.gateway.rest.handler.AbstractRestHandler.respondToRequest(AbstractRestHandler.java:85)
     at com.ververica.flink.table.gateway.rest.handler.AbstractHandler.channelRead0(AbstractHandler.java:184)
     at com.ververica.flink.table.gateway.rest.handler.AbstractHandler.channelRead0(AbstractHandler.java:76)
     at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
     at org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:110)
     at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:89)
     at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:54)
     at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
     at org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
     at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:177)
     at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:69)
     at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
     at org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
     at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
     at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
     at org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
     at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
     at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
     at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
     at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
     at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
     at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
     at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
     at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
     at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
     at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
     at java.lang.Thread.run(Thread.java:882)
   Caused by: org.apache.flink.table.catalog.exceptions.TableAlreadyExistException: Table (or view) iceberg_db.lineitem already exists in Catalog iceberg_catalog.
     at org.apache.iceberg.flink.FlinkCatalog.createTable(FlinkCatalog.java:379)
     at org.apache.flink.table.catalog.CatalogManager.lambda$createTable$10(CatalogManager.java:632)
     at org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:791)
     ... 55 more
   Caused by: org.apache.iceberg.exceptions.AlreadyExistsException: Table already exists: iceberg_db.lineitem
     at org.apache.iceberg.BaseMetastoreCatalog$BaseMetastoreCatalogTableBuilder.create(BaseMetastoreCatalog.java:207)
     at org.apache.iceberg.CachingCatalog$CachingTableBuilder.lambda$create$0(CachingCatalog.java:212)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2337)
     at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2335)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2318)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:111)
     at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:54)
     at org.apache.iceberg.CachingCatalog$CachingTableBuilder.create(CachingCatalog.java:210)
     at org.apache.iceberg.CachingCatalog.createTable(CachingCatalog.java:106)
     at org.apache.iceberg.flink.FlinkCatalog.createTable(FlinkCatalog.java:372)
     ... 57 more
   
   End of exception on server side>]
     at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
     at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
     at com.ververica.flink.table.gateway.client.SessionClient.submitStatement(SessionClient.java:167)
     ... 4 more
   Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
   org.apache.flink.table.api.ValidationException: Could not execute CreateTable in path `iceberg_catalog`.`iceberg_db`.`lineitem`
     at org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:793)
     at org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:631)
   ```


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org


[GitHub] [iceberg] zhangjun0x01 commented on issue #2133: Flink: it will throw exception when executing 'CREATE TABLE IF NOT EXITS'

Posted by GitBox <gi...@apache.org>.
zhangjun0x01 commented on issue #2133:
URL: https://github.com/apache/iceberg/issues/2133#issuecomment-765230938


   Currently, Flink SQL syntax does not support `CREATE TABLE IF NOT EXISTS` ([doc](https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/sql/create.html#create-table)). I think this is a Flink issue; we should open a Flink ticket to improve it.




[GitHub] [iceberg] rdblue commented on issue #2133: Flink: it will throw exception when executing 'CREATE TABLE IF NOT EXITS'

Posted by GitBox <gi...@apache.org>.
rdblue commented on issue #2133:
URL: https://github.com/apache/iceberg/issues/2133#issuecomment-765691828


   Merged #2135.




[GitHub] [iceberg] rdblue closed issue #2133: Flink: it will throw exception when executing 'CREATE TABLE IF NOT EXITS'

Posted by GitBox <gi...@apache.org>.
rdblue closed issue #2133:
URL: https://github.com/apache/iceberg/issues/2133


   




[GitHub] [iceberg] openinx commented on issue #2133: Flink: it will throw exception when executing 'CREATE TABLE IF NOT EXITS'

Posted by GitBox <gi...@apache.org>.
openinx commented on issue #2133:
URL: https://github.com/apache/iceberg/issues/2133#issuecomment-765291885


   @zhangjun0x01, Flink 1.11.x does not support `CREATE TABLE IF NOT EXISTS` (see [here](https://github.com/apache/flink/blob/release-1.11/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl#L723)), but Flink 1.12.x does support it (see [here](https://github.com/apache/flink/blob/release-1.12/flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl#L802)). However, after upgrading Flink from 1.11.0 to 1.12.x, `CREATE TABLE IF NOT EXISTS` still does not work, because we ignore the `ignoreIfExists` flag in [FlinkCatalog#createTable](https://github.com/apache/iceberg/pull/2135/files#diff-0ae3022739b9b55183c9aa972fadb7cdb9cb3956468f14559b3c57c5900f3953R383). This is a bug in the Flink Iceberg catalog integration, so I'd prefer to fix it before the next release.
   
   I've created a PR to handle this: https://github.com/apache/iceberg/pull/2135.
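   
   To illustrate the bug pattern being described (not the actual Iceberg source): a catalog `createTable` that accepts an `ignoreIfExists` flag must swallow the underlying "already exists" error when the flag is set, instead of letting it propagate. The sketch below uses hypothetical, simplified types (`Catalog`, `AlreadyExistsException`) to show both the buggy and the fixed behavior.
   
   ```java
   import java.util.HashSet;
   import java.util.Set;
   
   // Hypothetical, simplified model of the fix described above: honor
   // ignoreIfExists (i.e. CREATE TABLE IF NOT EXISTS) instead of always
   // propagating the underlying "already exists" error.
   public class IgnoreIfExistsSketch {
   
     static class AlreadyExistsException extends RuntimeException {
       AlreadyExistsException(String msg) { super(msg); }
     }
   
     static class Catalog {
       private final Set<String> tables = new HashSet<>();
   
       // Buggy shape: the flag is accepted but never consulted, so the
       // exception from the underlying catalog always reaches the caller.
       void createTableBuggy(String name, boolean ignoreIfExists) {
         if (!tables.add(name)) {
           throw new AlreadyExistsException("Table already exists: " + name);
         }
       }
   
       // Fixed shape: when the table already exists and ignoreIfExists is
       // set, return quietly; otherwise rethrow.
       void createTable(String name, boolean ignoreIfExists) {
         try {
           createTableBuggy(name, false);
         } catch (AlreadyExistsException e) {
           if (!ignoreIfExists) {
             throw e;
           }
         }
       }
     }
   
     public static void main(String[] args) {
       Catalog catalog = new Catalog();
       catalog.createTable("lineitem", true);
       catalog.createTable("lineitem", true); // no-op, no exception
       System.out.println("ok");
     }
   }
   ```
   
   With the fixed method, repeating `CREATE TABLE IF NOT EXISTS lineitem (...)` is a no-op rather than a `ValidationException`.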

