Posted to issues@flink.apache.org by "Xuefu Zhang (JIRA)" <ji...@apache.org> on 2019/07/08 17:49:00 UTC

[jira] [Closed] (FLINK-13152) Unable to run query when using HiveCatalog and DataSet api

     [ https://issues.apache.org/jira/browse/FLINK-13152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuefu Zhang closed FLINK-13152.
-------------------------------
    Resolution: Won't Fix

> Unable to run query when using HiveCatalog and DataSet api
> ----------------------------------------------------------
>
>                 Key: FLINK-13152
>                 URL: https://issues.apache.org/jira/browse/FLINK-13152
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: 1.9.0
>            Reporter: Jeff Zhang
>            Priority: Major
>
> {code:java}
> ERROR [2019-07-08 22:09:22,200] ({ParallelScheduler-Worker-1} FlinkSqlInterrpeter.java[runSqlList]:107) - Fail to run sql:select * from a
> org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:
> FlinkLogicalTableSourceScan(table=[[hive, default, a]], fields=[tt], source=[HiveTableSource(tt)])
> This exception indicates that the query uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL features.
> at org.apache.flink.table.plan.Optimizer.runVolcanoPlanner(Optimizer.scala:245)
> at org.apache.flink.table.plan.Optimizer.optimizePhysicalPlan(Optimizer.scala:170)
> at org.apache.flink.table.plan.BatchOptimizer.optimize(BatchOptimizer.scala:57)
> at org.apache.flink.table.api.internal.BatchTableEnvImpl.translate(BatchTableEnvImpl.scala:258)
> at org.apache.flink.table.api.scala.internal.BatchTableEnvironmentImpl.toDataSet(BatchTableEnvironmentImpl.scala:66){code}
> This is the exception I hit; the code below reproduces the issue.
> I suspect this is because I am using the DataSet API and HiveCatalog together.
> {code:java}
> def showTable(table: Table): String = {
>   val columnNames: Array[String] = table.getSchema.getFieldNames
>   val dsRow: DataSet[Row] = btenv.toDataSet[Row](table)
>   val rows = dsRow.first(maxResult).collect()
>   // Join the header and the collected rows into a printable string,
>   // so the function actually returns the declared String.
>   (columnNames.mkString("\t") +: rows.map(_.toString)).mkString("\n")
> }{code}
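> Since the issue was closed as Won't Fix, the supported path in Flink 1.9 is to query Hive tables through the Blink planner in batch mode rather than converting to a DataSet. A minimal sketch follows; it is not from the original report, and the catalog name, database, Hive conf dir, and Hive version are placeholder assumptions for the reporter's actual setup:
> {code:java}
> import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}
> import org.apache.flink.table.catalog.hive.HiveCatalog
> 
> // The Blink planner's batch mode can plan a HiveTableSource scan,
> // unlike the DataSet-based BatchTableEnvironment used above.
> val settings = EnvironmentSettings.newInstance()
>   .useBlinkPlanner()
>   .inBatchMode()
>   .build()
> val tEnv = TableEnvironment.create(settings)
> 
> // Placeholder catalog configuration -- adjust to the real deployment.
> val hiveCatalog = new HiveCatalog("hive", "default", "/path/to/hive-conf", "2.3.4")
> tEnv.registerCatalog("hive", hiveCatalog)
> tEnv.useCatalog("hive")
> 
> // The query from the report now goes through the Blink batch planner.
> val result = tEnv.sqlQuery("select * from a"){code}
> This sketch requires a running Hive metastore and the flink-connector-hive dependency on the classpath, so it cannot be executed standalone.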
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)