Posted to issues@kudu.apache.org by "Mladen Kovacevic (JIRA)" <ji...@apache.org> on 2016/10/04 01:00:43 UTC
[jira] [Created] (KUDU-1676) Spark DDL needs elegant way to specify range partitioning
Mladen Kovacevic created KUDU-1676:
--------------------------------------
Summary: Spark DDL needs elegant way to specify range partitioning
Key: KUDU-1676
URL: https://issues.apache.org/jira/browse/KUDU-1676
Project: Kudu
Issue Type: New Feature
Components: spark
Affects Versions: 1.0.0
Reporter: Mladen Kovacevic
To define partition column splits, you need a PartialRow object.
These are easy to create when you have the Schema object. But since your table schema in Spark is defined with a StructType rather than a Schema, it is cumbersome to define a new Schema object that exactly duplicates the StructType version, only to obtain a PartialRow, set the range partition split values on it, and then pass it to an addSplitRow() call on your CreateTableOptions.
We need an elegant way for the Spark API to handle specifying range partition attributes without having to drop into the Java API from Spark.
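To illustrate the cumbersome workflow described above, here is a minimal sketch of what a user currently has to write with the Kudu Java client. It assumes kudu-client on the classpath and a running Kudu cluster; the column names ("id", "name") and the split value are hypothetical, and the Schema here duplicates column definitions that would already exist as a Spark StructType:

```java
import java.util.Arrays;
import org.apache.kudu.ColumnSchema;
import org.apache.kudu.Schema;
import org.apache.kudu.Type;
import org.apache.kudu.client.CreateTableOptions;
import org.apache.kudu.client.PartialRow;

public class RangeSplitSketch {
    public static void main(String[] args) {
        // Re-declare the table schema with the Java API, even though the
        // same columns are already described by a Spark StructType.
        Schema schema = new Schema(Arrays.asList(
            new ColumnSchema.ColumnSchemaBuilder("id", Type.INT64)
                .key(true).build(),
            new ColumnSchema.ColumnSchemaBuilder("name", Type.STRING)
                .build()));

        CreateTableOptions options = new CreateTableOptions()
            .setRangePartitionColumns(Arrays.asList("id"));

        // A PartialRow can only be obtained from a Schema object,
        // which is why the duplicate Schema above is needed at all.
        PartialRow split = schema.newPartialRow();
        split.addLong("id", 1000L);  // hypothetical split point
        options.addSplitRow(split);

        // options would then be passed to KuduClient.createTable(...).
    }
}
```

The duplication is the pain point: every column already present in the StructType must be re-declared as a ColumnSchema solely so that schema.newPartialRow() can be called.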
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)