Posted to issues@spark.apache.org by "Takeshi Yamamuro (JIRA)" <ji...@apache.org> on 2018/07/03 08:38:00 UTC

[jira] [Comment Edited] (SPARK-24681) Cannot create a view from a table when a nested column name contains ':'

    [ https://issues.apache.org/jira/browse/SPARK-24681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16531016#comment-16531016 ] 

Takeshi Yamamuro edited comment on SPARK-24681 at 7/3/18 8:37 AM:
------------------------------------------------------------------

I've looked over the related code and I think we cannot use `:` in Hive metastore column names: [https://github.com/apache/hive/blob/release-1.2.1/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoUtils.java#L239]
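
For context, the parser behind that link treats ':' as the separator between a struct field's name and its type, so a ':' inside a column name makes the resulting type string ambiguous. A minimal standalone sketch of that behaviour (just an illustration, assuming hive-serde 1.2.1 on the classpath; the exact exception type and message may differ):
{code:scala}
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils

object HiveTypeStringCheck {
  def main(args: Array[String]): Unit = {
    // Well-formed struct type string: ':' separates the field name from its type.
    println(TypeInfoUtils.getTypeInfoFromTypeString("struct<nested:int>"))

    // The string produced for a column literally named "nested:column".
    // The extra ':' makes it ambiguous, so parsing is expected to fail here.
    println(TypeInfoUtils.getTypeInfoFromTypeString("struct<nested:column:int>"))
  }
}
{code}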

The current master only checks that column names don't include commas (you fixed this a year ago): [https://github.com/apache/spark/blob/a7c8f0c8cb144a026ea21e8780107e363ceacb8d/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala#L141]
IMHO we need to check ':' and ';' here, too. WDYT?

Or, should we accept ':' in column names instead?
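
Just to make the proposal concrete, here is a rough, standalone sketch of such a check (not the actual private method in HiveExternalCatalog; the rejected character set simply follows the suggestion above, and the names and error wording are only illustrative):
{code:scala}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

object ColumnNameCheck {
  // Characters proposed above for rejection: ',' (already checked in master),
  // plus ':' and ';'.
  private val invalidChars = Seq(",", ":", ";")

  def verifyColumnNames(schema: StructType): Unit = {
    schema.fields.foreach { field =>
      invalidChars.find(c => field.name.contains(c)).foreach { c =>
        throw new IllegalArgumentException(
          s"Cannot persist column '${field.name}' to the Hive metastore: " +
            s"column names must not contain '$c'.")
      }
    }
  }

  def main(args: Array[String]): Unit = {
    // The schema from the reproduction below triggers the check.
    val schema = StructType(Seq(StructField("nested:column", IntegerType)))
    verifyColumnNames(schema) // throws: name contains ':'
  }
}
{code}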

 


was (Author: maropu):
I've looked over the related code and I think we cannot use `:` in Hive metastore column names: https://github.com/apache/hive/blob/release-1.2.1/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoUtils.java#L239

The current master only checks that column names don't include commas (you fixed this a year ago): https://github.com/apache/spark/blob/a7c8f0c8cb144a026ea21e8780107e363ceacb8d/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala#L141
So, IMHO we need to check ':' and ';' here, too. WDYT? Or, should we accept ':' in column names instead?


 

> Cannot create a view from a table when a nested column name contains ':'
> ------------------------------------------------------------------------
>
>                 Key: SPARK-24681
>                 URL: https://issues.apache.org/jira/browse/SPARK-24681
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0, 2.3.0, 2.4.0
>            Reporter: Adrian Ionescu
>            Priority: Major
>
> Here's a patch that reproduces the issue: 
> {code:java}
> diff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveParquetSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveParquetSuite.scala 
> index 09c1547..29bb3db 100644 
> --- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveParquetSuite.scala 
> +++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveParquetSuite.scala 
> @@ -19,6 +19,7 @@ package org.apache.spark.sql.hive 
>  
> import org.apache.spark.sql.{QueryTest, Row} 
> import org.apache.spark.sql.execution.datasources.parquet.ParquetTest 
> +import org.apache.spark.sql.functions.{lit, struct} 
> import org.apache.spark.sql.hive.test.TestHiveSingleton 
>  
> case class Cases(lower: String, UPPER: String) 
> @@ -76,4 +77,21 @@ class HiveParquetSuite extends QueryTest with ParquetTest with TestHiveSingleton 
>       } 
>     } 
>   } 
> + 
> +  test("column names including ':' characters") { 
> +    withTempPath { path => 
> +      withTable("test_table") { 
> +        spark.range(0) 
> +          .select(struct(lit(0).as("nested:column")).as("toplevel:column")) 
> +          .write.format("parquet") 
> +          .option("path", path.getCanonicalPath) 
> +          .saveAsTable("test_table") 
> + 
> +        sql("CREATE VIEW test_view_1 AS SELECT `toplevel:column`.* FROM test_table") 
> +        sql("CREATE VIEW test_view_2 AS SELECT * FROM test_table") 
> + 
> +      } 
> +    } 
> +  } 
> }{code}
> The first "CREATE VIEW" statement succeeds, but the second one fails with:
> {code:java}
> org.apache.spark.SparkException: Cannot recognize hive type string: struct<nested:column:int>
> {code}


