Posted to issues@flink.apache.org by "Zhenghua Gao (JIRA)" <ji...@apache.org> on 2019/06/27 05:26:00 UTC

[jira] [Updated] (FLINK-12999) Can't generate valid execution plan for "SELECT uuid() FROM (VALUES(1,2,3)) T(a, b, c)"

     [ https://issues.apache.org/jira/browse/FLINK-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhenghua Gao updated FLINK-12999:
---------------------------------
    Summary: Can't generate valid execution plan for "SELECT uuid() FROM (VALUES(1,2,3)) T(a, b, c)"  (was: Can't generate valid execution plan for "SELECT uuid() FROM VALUES(1,2,3) T(a, b, c)")

> Can't generate valid execution plan for "SELECT uuid() FROM (VALUES(1,2,3)) T(a, b, c)"
> ---------------------------------------------------------------------------------------
>
>                 Key: FLINK-12999
>                 URL: https://issues.apache.org/jira/browse/FLINK-12999
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.9.0
>            Reporter: Zhenghua Gao
>            Assignee: godfrey he
>            Priority: Major
>
> The error message is:
> org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:
> LogicalSink(fields=[EXPR$0])
> +- LogicalProject(EXPR$0=[UUID()])
>  +- LogicalValues(tuples=[[{ 1, 2, 3 }]])
> This exception indicates that the query uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL features.
>  at org.apache.flink.table.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:72)
>  at org.apache.flink.table.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>  at org.apache.flink.table.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>  at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>  at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>  at org.apache.flink.table.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>  at org.apache.flink.table.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:82)
>  at org.apache.flink.table.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:51)
>  at org.apache.flink.table.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:39)
>  at org.apache.flink.table.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:39)
>  at scala.collection.immutable.List.foreach(List.scala:392)
>  at org.apache.flink.table.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:39)
>  at org.apache.flink.table.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:65)
>  at org.apache.flink.table.api.TableEnvironment.optimize(TableEnvironment.scala:251)
>  at org.apache.flink.table.api.TableEnvironment.compileToExecNodePlan(TableEnvironment.scala:200)
>  at org.apache.flink.table.api.TableEnvironment.compile(TableEnvironment.scala:184)
>  at org.apache.flink.table.api.TableEnvironment.generateStreamGraph(TableEnvironment.scala:155)
>  at org.apache.flink.table.api.BatchTableEnvironment.execute(BatchTableEnvironment.scala:93)
>  at org.apache.flink.table.api.TableEnvironment.execute(TableEnvironment.scala:136)
>  at org.apache.flink.table.runtime.utils.BatchTableEnvUtil$.collect(BatchTableEnvUtil.scala:55)
>  at org.apache.flink.table.runtime.utils.TableUtil$.collectSink(TableUtil.scala:60)
>  at org.apache.flink.table.runtime.utils.TableUtil$.collect(TableUtil.scala:41)
>  at org.apache.flink.table.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:308)
>  at org.apache.flink.table.runtime.utils.BatchTestBase.check(BatchTestBase.scala:164)
>  at org.apache.flink.table.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:103)
>  at org.apache.flink.table.runtime.batch.sql.ValuesITCase.test(ValuesITCase.scala:38)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>  at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>  at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>  at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>  at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=LOGICAL, FlinkRelDistributionTraitDef=any, sort=[].
> Missing conversion is LogicalProject[convention: NONE -> LOGICAL]
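> A minimal reproduction sketch in Scala, assuming the blink-planner test scaffolding that appears in the stack trace (BatchTestBase and its checkResult helper); the package and class names are taken from the trace, while the exact checkResult signature and the empty expected-result argument are assumptions, since planning fails before any rows are produced:
>
>   package org.apache.flink.table.runtime.batch.sql
>
>   import org.apache.flink.table.runtime.utils.BatchTestBase
>   import org.junit.Test
>
>   class ValuesITCase extends BatchTestBase {
>
>     @Test
>     def test(): Unit = {
>       // Fails at plan time: per the Calcite message above, no rule converts
>       // the LogicalProject carrying the non-deterministic UUID() call from
>       // convention NONE to LOGICAL, so CannotPlanException surfaces here.
>       checkResult("SELECT uuid() FROM (VALUES(1,2,3)) T(a, b, c)", Seq())
>     }
>   }
>
> Note that no table sources or sinks are involved: the query reads only from an inline VALUES clause, and the exception is thrown from FlinkVolcanoProgram.optimize, so the failure occurs entirely inside the optimizer rather than during execution.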



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)