Posted to issues@hbase.apache.org by "Nick Dimiduk (JIRA)" <ji...@apache.org> on 2013/12/03 00:44:36 UTC

[jira] [Updated] (HBASE-9931) Optional setBatch for CopyTable to copy large rows in batches

     [ https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nick Dimiduk updated HBASE-9931:
--------------------------------

    Attachment: HBASE-9931.01.patch

Yes, you're right. Take 2.

> Optional setBatch for CopyTable to copy large rows in batches
> -------------------------------------------------------------
>
>                 Key: HBASE-9931
>                 URL: https://issues.apache.org/jira/browse/HBASE-9931
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce
>            Reporter: Dave Latham
>         Attachments: HBASE-9931.00.patch, HBASE-9931.01.patch
>
>
> We've had CopyTable jobs fail because a small number of rows are wide enough that they don't fit into memory.  If we could specify the batch size for CopyTable scans, it should be able to break those large rows up into multiple iterations and save heap.
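
The batching behaviour the description asks for can be sketched outside of HBase: with a batch size set, a scan returns a wide row to the client as several partial results of at most `batch` cells each, instead of materializing the whole row at once. The class and method names below are illustrative only (this is a standalone simulation, not the HBase `Scan` API):

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of scan batching: one wide "row" is handed back
// in partial chunks of at most `batch` cells, so no single iteration
// has to hold the entire row in memory.
public class BatchSketch {

    // Split one wide row's cells into chunks of at most `batch` cells.
    static List<List<String>> batchRow(List<String> cells, int batch) {
        List<List<String>> partials = new ArrayList<>();
        for (int i = 0; i < cells.size(); i += batch) {
            // Copy each sub-range so the chunk is independent of the source list.
            partials.add(new ArrayList<>(
                    cells.subList(i, Math.min(i + batch, cells.size()))));
        }
        return partials;
    }

    public static void main(String[] args) {
        List<String> wideRow = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            wideRow.add("cell-" + i);
        }
        // 10 cells with batch=4 -> partials of sizes 4, 4, 2.
        List<List<String>> partials = batchRow(wideRow, 4);
        System.out.println(partials.size() + " partial results");
    }
}
```

In the actual patch the equivalent knob would be the scan's batch setting, exposed as a CopyTable option, so the mapper processes each large row in bounded-size pieces.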



--
This message was sent by Atlassian JIRA
(v6.1#6144)