Posted to issues@kudu.apache.org by "Mladen Kovacevic (JIRA)" <ji...@apache.org> on 2016/10/04 01:06:20 UTC

[jira] [Assigned] (KUDU-1676) Spark DDL needs elegant way to specify range partitioning

     [ https://issues.apache.org/jira/browse/KUDU-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mladen Kovacevic reassigned KUDU-1676:
--------------------------------------

    Assignee: Mladen Kovacevic

> Spark DDL needs elegant way to specify range partitioning
> ---------------------------------------------------------
>
>                 Key: KUDU-1676
>                 URL: https://issues.apache.org/jira/browse/KUDU-1676
>             Project: Kudu
>          Issue Type: New Feature
>          Components: spark
>    Affects Versions: 1.0.0
>            Reporter: Mladen Kovacevic
>            Assignee: Mladen Kovacevic
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> To define range partition splits, you need a PartialRow object.
> A PartialRow is easy to create when you have the Schema object. But since a table schema in Spark is defined with a StructType rather than a Schema, it is cumbersome to build a new Schema object that exactly duplicates the StructType, only to obtain a PartialRow, set the range partition split values on it, and pass it to the addSplitRow() call on your CreateTableOptions.
> We need an elegant way for the Spark API to handle specifying range partition attributes without having to drop into the Java API from Spark.
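For illustration, the cumbersome workaround described above might look roughly like the following. This is a hedged sketch against the Kudu Java client API (org.apache.kudu.*) as of the 1.0 era; the column names, types, and split value are invented examples, and the snippet assumes the kudu-client jar and a Kudu cluster, so it is not runnable standalone.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.kudu.ColumnSchema;
import org.apache.kudu.Schema;
import org.apache.kudu.Type;
import org.apache.kudu.client.CreateTableOptions;
import org.apache.kudu.client.PartialRow;

public class RangeSplitSketch {
    public static void main(String[] args) {
        // The table schema already exists as a Spark StructType, but the
        // Kudu client needs its own Schema, so every column must be
        // re-declared by hand (example columns only).
        List<ColumnSchema> columns = new ArrayList<>();
        columns.add(new ColumnSchema.ColumnSchemaBuilder("id", Type.INT64)
                .key(true)
                .build());
        columns.add(new ColumnSchema.ColumnSchemaBuilder("name", Type.STRING)
                .build());
        Schema schema = new Schema(columns);

        // Only now can a PartialRow be created to describe a range split.
        PartialRow split = schema.newPartialRow();
        split.addLong("id", 1000L); // example split point

        CreateTableOptions options = new CreateTableOptions()
                .setRangePartitionColumns(Collections.singletonList("id"))
                .addSplitRow(split);
        // options would then be passed to KuduClient.createTable(...).
    }
}
```

The duplication between the StructType and the hand-built Schema is exactly what this issue asks the Spark integration to eliminate.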



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)