Posted to issues@spark.apache.org by "David Sabater (JIRA)" <ji...@apache.org> on 2015/07/24 18:31:04 UTC
[jira] [Comment Edited] (SPARK-6898) Special chars in column names is broken
[ https://issues.apache.org/jira/browse/SPARK-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640704#comment-14640704 ]
David Sabater edited comment on SPARK-6898 at 7/24/15 4:30 PM:
---------------------------------------------------------------
I hope I am missing something here, but I am hitting this issue on my current cluster (running a compiled version of Spark 1.4.0 with Hive support).
(I also tried the 1.4.0 and 1.4.1 binaries locally and got the same result.)
{noformat}
sqlContext.jsonRDD(sc.makeRDD("""{"a": {"c.b": 1}, "b.$q": [{"a@!.q": 1}], "q.w": {"w.i&": [1]}}""" :: Nil)).registerTempTable("t")
sqlContext.sql("SELECT `key?number1`, `key.number2` FROM records")
sqlContext.sql("SELECT a.`c.b`, `b.$q`[0].`a@!.q`, `q.w`.`w.i&`[0] FROM t")
Either SELECT throws the same error:
scala> sqlContext.sql("SELECT a.`c.b`, `b.$q`[0].`a@!.q`, `q.w`.`w.i&`[0] FROM t")
15/07/24 17:23:12 INFO ParseDriver: Parsing command: SELECT a.`c.b`, `b.$q`[0].`a@!.q`, `q.w`.`w.i&`[0] FROM t
15/07/24 17:23:12 INFO ParseDriver: Parse Completed
org.apache.spark.sql.AnalysisException: cannot resolve 'b.$q' given input columns a, b.$q, q.w; line 1 pos 16
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:63)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:52)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:286)
{noformat}
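For context, Spark SQL uses backticks to quote identifiers, which is why names such as b.$q must be written in backticks in the SELECT statements above. As a hedged sketch (this helper is hypothetical, not a Spark API), quoting an arbitrary column name for use in a SQL string can be done like this, doubling any embedded backtick:

```scala
// Hypothetical helper, not part of the Spark API: wrap an arbitrary
// column name in backticks so Spark SQL's parser treats it as a single
// identifier. An embedded backtick is escaped by doubling it.
object Identifiers {
  def quote(name: String): String =
    "`" + name.replace("`", "``") + "`"
}

object Demo extends App {
  println(Identifiers.quote("b.$q"))  // prints `b.$q`
  println(Identifiers.quote("w.i&"))  // prints `w.i&`
}
```

Note that quoting only satisfies the parser; as the stack trace above shows, the analyzer in 1.4.x still fails to resolve the quoted attribute, which is exactly the bug being reported.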
> Special chars in column names is broken
> ---------------------------------------
>
> Key: SPARK-6898
> URL: https://issues.apache.org/jira/browse/SPARK-6898
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Reporter: Wenchen Fan
> Assignee: Wenchen Fan
> Fix For: 1.4.0
>
>
> This function was added a long time ago, but it is not complete: it will fail if the column name contains ".".
> {code}
> test("SPARK-3483 Special chars in column names") {
>   val data = sparkContext.parallelize(
>     Seq("""{"key?number1": "value1", "key.number2": "value2"}"""))
>   jsonRDD(data).registerTempTable("records")
>   sql("SELECT `key?number1`, `key.number2` FROM records")
> }
> {code}
> This test will fail.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org