Posted to issues@spark.apache.org by "Ryan Blue (JIRA)" <ji...@apache.org> on 2016/04/07 18:44:25 UTC

[jira] [Created] (SPARK-14459) SQL partitioning must match existing tables, but is not checked.

Ryan Blue created SPARK-14459:
---------------------------------

             Summary: SQL partitioning must match existing tables, but is not checked.
                 Key: SPARK-14459
                 URL: https://issues.apache.org/jira/browse/SPARK-14459
             Project: Spark
          Issue Type: Bug
          Components: SQL
            Reporter: Ryan Blue


Writing into partitioned Hive tables has unexpected results because the table's partitioning is not detected and applied during the analysis phase. 

For example, if I have two tables, {{source}} and {{partitioned}}, with the same column types:

{code}
-- an unpartitioned source and a partitioned target with the same columns
CREATE TABLE source (id bigint, data string, part string);
CREATE TABLE partitioned (id bigint, data string) PARTITIONED BY (part string);

// copy from source to partitioned (Scala)
sqlContext.table("source").write.insertInto("partitioned")
{code}

Copying from {{source}} to {{partitioned}} succeeds, but inserts 0 rows. The copy works if I explicitly partition the write with {{...write.partitionBy("part").insertInto(...)}} (spelled out below). This work-around isn't obvious, and it is error-prone because the columns passed to {{partitionBy}} must match the table's partition columns exactly, yet nothing validates that they do.
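For reference, here is the work-around spelled out, using the same tables as above; the {{count()}} calls are only there to make the symptom visible:

{code}
// without partitionBy, the insert "succeeds" but writes 0 rows
sqlContext.table("source").write.insertInto("partitioned")
sqlContext.table("partitioned").count()  // 0

// work-around: repeat the table's partitioning explicitly
sqlContext.table("source").write.partitionBy("part").insertInto("partitioned")
sqlContext.table("partitioned").count()  // matches source's row count
{code}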

I think that when the target relation is resolved during analysis, the table's partitioning should be checked against the write, and applied automatically if it isn't set.
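To illustrate the kind of check I mean, here is a rough sketch. This is a hypothetical helper, not existing Spark code; in practice the analyzer would compare the incoming plan's output columns against the partition columns recorded in the metastore:

{code}
// Hypothetical check: the incoming data's trailing columns must match the
// target table's partition columns, in order.
def checkPartitioning(dataColumns: Seq[String], partitionColumns: Seq[String]): Unit = {
  val trailing = dataColumns.takeRight(partitionColumns.size)
  require(trailing == partitionColumns,
    s"Data columns [${dataColumns.mkString(", ")}] must end with the table's " +
    s"partition columns [${partitionColumns.mkString(", ")}]")
}

// e.g. checkPartitioning(Seq("id", "data", "part"), Seq("part")) passes;
//      checkPartitioning(Seq("id", "part", "data"), Seq("part")) fails fast.
{code}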


