Posted to issues@hbase.apache.org by "Ted Malaska (JIRA)" <ji...@apache.org> on 2015/08/03 21:58:06 UTC

[jira] [Updated] (HBASE-14150) Add BulkLoad functionality to HBase-Spark Module

     [ https://issues.apache.org/jira/browse/HBASE-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Malaska updated HBASE-14150:
--------------------------------
    Attachment: HBASE-14150.1.patch

First draft of BulkLoad with Spark.

This patch includes:
1. HBaseContext implementation
2. RDD implicit implementation
3. Unit test
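To route each record to the right HFile, a bulk load has to know which region will own each row key. A common way to do that is a binary search over the table's sorted region start keys; the sketch below shows that idea in plain Java (the class and method names are illustrative, not the patch's actual API):

```java
import java.util.Arrays;

// Sketch of how a bulk load can route a row key to the region that will
// own it: binary-search the sorted region start keys. Names here are
// illustrative only, not the patch's actual API.
public class RegionPartitionerSketch {

    // Sorted start keys of the table's regions (the first region starts at "").
    private final String[] startKeys;

    public RegionPartitionerSketch(String[] startKeys) {
        this.startKeys = startKeys;
    }

    // Return the index of the region whose key range contains rowKey.
    public int getPartition(String rowKey) {
        int idx = Arrays.binarySearch(startKeys, rowKey);
        // Exact match -> that region; otherwise binarySearch returns
        // -(insertionPoint) - 1, and the owning region is the one whose
        // start key is the last key <= rowKey, i.e. insertionPoint - 1.
        return idx >= 0 ? idx : -idx - 2;
    }

    public static void main(String[] args) {
        RegionPartitionerSketch p =
            new RegionPartitionerSketch(new String[] {"", "g", "p"});
        System.out.println(p.getPartition("apple"));  // 0
        System.out.println(p.getPartition("grape"));  // 1
        System.out.println(p.getPartition("zebra"));  // 2
    }
}
```

With the partition known, each partition's data can be sorted and written out as one or more HFiles for that region.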

> Add BulkLoad functionality to HBase-Spark Module
> ------------------------------------------------
>
>                 Key: HBASE-14150
>                 URL: https://issues.apache.org/jira/browse/HBASE-14150
>             Project: HBase
>          Issue Type: New Feature
>          Components: spark
>            Reporter: Ted Malaska
>            Assignee: Ted Malaska
>         Attachments: HBASE-14150.1.patch
>
>
> Build on the work done in HBASE-13992 by adding functionality to do a bulk load from a given RDD.
> This will do the following:
> 1. Figure out the number of regions, then sort and partition the data correctly so it can be written out to HFiles
> 2. Also, unlike the MR bulk load, I would like the columns to be sorted in the shuffle stage and not in the memory of the reducer.  This will allow this design to support super-wide records without going out of memory.
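Point 2 above amounts to sorting on a composite key of (row key, column qualifier) during the shuffle, so the write side can stream cells out in HFile order rather than buffering an entire wide row in memory. A minimal, self-contained illustration of that composite ordering (plain Java, no Spark; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Illustration of sorting cells on a composite (rowKey, qualifier) key,
// the ordering an HFile expects: rows first, then columns within a row.
// With this ordering done in the shuffle, a wide row can be streamed out
// cell by cell instead of being held whole in memory.
public class ShuffleSortSketch {

    // Sort (rowKey, qualifier) pairs into HFile order and render them
    // as "rowKey:qualifier" strings.
    public static List<String> hfileOrder(List<String[]> cells) {
        List<String[]> copy = new ArrayList<>(cells);
        copy.sort(Comparator.comparing((String[] c) -> c[0])   // row key
                            .thenComparing(c -> c[1]));        // qualifier
        List<String> out = new ArrayList<>();
        for (String[] c : copy) {
            out.add(c[0] + ":" + c[1]);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> cells = Arrays.asList(
            new String[] {"row2", "cq1"},
            new String[] {"row1", "cq9"},
            new String[] {"row1", "cq0"});
        System.out.println(hfileOrder(cells));  // [row1:cq0, row1:cq9, row2:cq1]
    }
}
```

In the real patch this ordering would be applied by the shuffle itself (e.g. a repartition-and-sort over composite keys), so no single task ever needs a full row in memory.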



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)