Posted to issues@spark.apache.org by "Babulal (JIRA)" <ji...@apache.org> on 2015/09/28 10:17:08 UTC
[jira] [Updated] (SPARK-10754) table and column names are case sensitive when a JSON DataFrame is registered as a tempTable using JavaSparkContext
[ https://issues.apache.org/jira/browse/SPARK-10754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Babulal updated SPARK-10754:
----------------------------
Description:
Create a DataFrame using the JSON data source:
SparkConf conf = new SparkConf().setMaster("spark://xyz:7077").setAppName("Spark Table");
JavaSparkContext javacontext = new JavaSparkContext(conf);
SQLContext sqlContext = new SQLContext(javacontext);
DataFrame df = sqlContext.jsonFile("/user/root/examples/src/main/resources/people.json");
df.registerTempTable("sparktable");
Run the query:
sqlContext.sql("select * from sparktable").show(); // this will PASS
sqlContext.sql("select * from sparkTable").show(); // this will FAIL
java.lang.RuntimeException: Table Not Found: sparkTable
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:115)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:115)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:58)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.lookupRelation(Catalog.scala:115)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:233)
was:
Create a DataFrame using the JSON data source:
SparkConf conf = new SparkConf().setMaster("spark://xyz:7077").setAppName("Spark Table");
JavaSparkContext javacontext = new JavaSparkContext(conf);
SQLContext sqlContext = new SQLContext(javacontext);
DataFrame df = sqlContext.jsonFile("/user/root/examples/src/main/resources/people.json");
df.registerTempTable("sparktable");
Run the query:
sqlContext.sql("select * from sparktable").show(); // this will PASS
sqlContext.sql("select * from sparkTable").show(); // this will FAIL
java.lang.RuntimeException: Table Not Found: sparkTable
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:115)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:115)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:58)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.lookupRelation(Catalog.scala:115)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:233)
Note: the job is triggered via spark-submit.
The same code works with the Scala SparkContext.
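The stack trace points at SimpleCatalog resolving the table through a plain map getOrElse, which suggests the temp-table registry compares names with exact case. A minimal, self-contained Java sketch (hypothetical, not Spark's actual catalog code) of the difference between a case-sensitive and a case-insensitive name lookup:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class TableLookupDemo {
    public static void main(String[] args) {
        // Case-sensitive registry: a plain HashMap keyed on the name exactly
        // as registered. This models what the stack trace suggests happens.
        Map<String, String> sensitive = new HashMap<>();
        sensitive.put("sparktable", "plan-for-sparktable");
        System.out.println(sensitive.get("sparktable")); // resolves
        System.out.println(sensitive.get("sparkTable")); // null -> "Table Not Found"

        // Case-insensitive registry: same data, but keys compare ignoring
        // case, so either spelling resolves to the registered table.
        Map<String, String> insensitive = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        insensitive.putAll(sensitive);
        System.out.println(insensitive.get("sparkTable")); // resolves
    }
}
```

As a practical workaround, querying with the exact name used in registerTempTable ("sparktable") avoids the failure; the spark.sql.caseSensitive option, where available in the running Spark version, also controls SQL identifier resolution.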
> table and column names are case sensitive when a JSON DataFrame is registered as a tempTable using JavaSparkContext
> --------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-10754
> URL: https://issues.apache.org/jira/browse/SPARK-10754
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.3.0, 1.3.1, 1.4.1
> Environment: Linux ,Hadoop Version 1.3
> Reporter: Babulal
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org