Posted to issues@spark.apache.org by "Song Jun (JIRA)" <ji...@apache.org> on 2017/02/14 15:10:42 UTC

[jira] [Closed] (SPARK-19484) continue work to create a table with an empty schema

     [ https://issues.apache.org/jira/browse/SPARK-19484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Song Jun closed SPARK-19484.
----------------------------
    Resolution: Won't Fix

This has been addressed in https://github.com/apache/spark/pull/16787

> continue work to create a table with an empty schema
> ----------------------------------------------------
>
>                 Key: SPARK-19484
>                 URL: https://issues.apache.org/jira/browse/SPARK-19484
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Song Jun
>            Priority: Minor
>
> After SPARK-19279, we can no longer create a Hive table with an empty schema,
> so we should tighten up the condition used when creating a Hive table in
> https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L835
> That is, if a CatalogTable t has an empty schema and `spark.sql.schema.numParts` is either absent or 0, we should not add the default `col` schema; if we did, a table with an empty schema would be created, which is not what we expect.
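For illustration, a minimal sketch of the tightened guard described above. This is not the actual code in HiveClientImpl.toHiveTable; the helper name shouldAddDefaultColSchema and the literal property key "spark.sql.schema.numParts" are assumptions used here only to make the condition concrete.

    import org.apache.spark.sql.catalyst.catalog.CatalogTable

    // Hypothetical helper: Hive rejects tables with no columns, so Spark has
    // historically filled in a dummy `col` column. The proposal is to do that
    // only when the real schema is stored in table properties
    // (spark.sql.schema.numParts present and > 0); otherwise the table truly
    // has an empty schema and no placeholder column should be added.
    def shouldAddDefaultColSchema(table: CatalogTable): Boolean = {
      val numParts =
        table.properties.get("spark.sql.schema.numParts").map(_.toInt)
      table.schema.isEmpty && numParts.exists(_ > 0)
    }

Under this sketch, a CatalogTable with an empty schema and no (or zero) numParts property would not silently get the `col` placeholder, so the empty-schema table would not be created.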



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org