Posted to user@spark.apache.org by Daniel Haviv <da...@gmail.com> on 2017/02/08 13:13:36 UTC

[Spark-SQL] Hive support is required to select over the following tables

Hi,
I'm using Spark 2.1.0 on Zeppelin.

I can successfully create a table, but when I try to select from it I fail:
spark.sql("create table foo (name string)")
res0: org.apache.spark.sql.DataFrame = []

spark.sql("select * from foo")

org.apache.spark.sql.AnalysisException:
Hive support is required to select over the following tables:
`default`.`foo`
;;
'Project [*]
+- 'SubqueryAlias foo
+- 'SimpleCatalogRelation default, CatalogTable(
Table: `default`.`foo`
Created: Wed Feb 08 12:52:08 UTC 2017
Last Access: Wed Dec 31 23:59:59 UTC 1969
Type: MANAGED
Schema: [StructField(name,StringType,true)]
Provider: hive
Storage(Location: hdfs:/user/spark/warehouse/foo, InputFormat:
org.apache.hadoop.mapred.TextInputFormat, OutputFormat:
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat))


This is a change in behavior from 2.0.2; any idea why?

Thank you,
Daniel

Re: [Spark-SQL] Hive support is required to select over the following tables

Posted by Egor Pahomov <pa...@gmail.com>.
Just guessing here, but have you built your Spark with "-Phive"? By the
way, which version of Zeppelin?
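For context: in Spark 2.1, tables with `Provider: hive` can only be read by a session whose catalog has Hive support enabled. Assuming Spark itself was built with -Phive, a session-configuration sketch like the following would avoid the AnalysisException (in Zeppelin the session is created by the Spark interpreter, so the equivalent there is typically the interpreter setting, e.g. zeppelin.spark.useHiveContext or spark.sql.catalogImplementation=hive, depending on the Zeppelin version):

```scala
import org.apache.spark.sql.SparkSession

// Build a session with the Hive catalog enabled. Without
// enableHiveSupport(), Spark 2.1 falls back to the in-memory catalog
// and refuses to select from Hive-provider tables such as
// `default`.`foo`, which is exactly the error above.
val spark = SparkSession.builder()
  .appName("hive-enabled-session") // hypothetical app name
  .enableHiveSupport()             // requires Spark built with -Phive
  .getOrCreate()

spark.sql("create table foo (name string)")
spark.sql("select * from foo").show()
```

This is only a sketch of the usual workaround, not a confirmed fix for this particular Zeppelin setup.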



-- 

Sincerely yours,
Egor Pakhomov