Posted to issues@hbase.apache.org by "Nick Dimiduk (JIRA)" <ji...@apache.org> on 2014/05/04 22:44:18 UTC

[jira] [Commented] (HBASE-11111) Bulk load of very wide rows can go OOM

    [ https://issues.apache.org/jira/browse/HBASE-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989134#comment-13989134 ] 

Nick Dimiduk commented on HBASE-11111:
--------------------------------------

Where does the failure happen? Is it in generating the HFiles or in LoadIncrementalHFiles? I filed HBASE-7743 some time back to address the former. A failure in the latter is news to me!

> Bulk load of very wide rows can go OOM
> --------------------------------------
>
>                 Key: HBASE-11111
>                 URL: https://issues.apache.org/jira/browse/HBASE-11111
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Jean-Marc Spaggiari
>
> When doing a bulk load of very wide rows (2M columns), the application will stop with an OOME.
> We should have an option to use the local disk as temporary storage for the sorted rows, while warning the user about the performance degradation.
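The spill-to-disk option described above amounts to an external sort: buffer cells up to a memory threshold, write each sorted batch as a run file on local disk, then merge the runs in order. This is not HBase's actual implementation, just a minimal illustration of the idea; the class name, threshold, and use of plain strings in place of HBase `Cell`s are all hypothetical.

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Hypothetical sketch of the proposed spill-to-disk sort: instead of holding
// all 2M columns of a row in memory, sort small batches, spill each batch to
// a temp file on local disk, then k-way-merge the sorted runs.
public class SpillSort {
    static final int BATCH = 4; // tiny threshold for illustration; real code would use bytes

    public static List<String> sortWithSpill(List<String> cells) throws IOException {
        List<File> runs = new ArrayList<>();
        List<String> buf = new ArrayList<>();
        for (String c : cells) {
            buf.add(c);
            if (buf.size() >= BATCH) runs.add(spill(buf)); // spill a sorted run, clears buf
        }
        if (!buf.isEmpty()) runs.add(spill(buf));
        return merge(runs);
    }

    // Sort the in-memory batch, write it to a temp file, and clear the buffer.
    static File spill(List<String> buf) throws IOException {
        Collections.sort(buf);
        File f = File.createTempFile("spill-run", ".txt");
        f.deleteOnExit();
        Files.write(f.toPath(), buf);
        buf.clear();
        return f;
    }

    // k-way merge: a priority queue holds the head line of each run,
    // tagged with the index of the reader it came from.
    static List<String> merge(List<File> runs) throws IOException {
        PriorityQueue<String[]> pq = new PriorityQueue<>((a, b) -> a[0].compareTo(b[0]));
        List<BufferedReader> readers = new ArrayList<>();
        for (int i = 0; i < runs.size(); i++) {
            BufferedReader r = new BufferedReader(new FileReader(runs.get(i)));
            readers.add(r);
            String line = r.readLine();
            if (line != null) pq.add(new String[]{line, String.valueOf(i)});
        }
        List<String> out = new ArrayList<>();
        while (!pq.isEmpty()) {
            String[] top = pq.poll();
            out.add(top[0]);
            String next = readers.get(Integer.parseInt(top[1])).readLine();
            if (next != null) pq.add(new String[]{next, top[1]});
        }
        for (BufferedReader r : readers) r.close();
        return out;
    }
}
```

At merge time only one line per run is held in memory, which is what keeps the peak heap usage bounded regardless of row width; the cost, as the report notes, is the extra disk I/O.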



--
This message was sent by Atlassian JIRA
(v6.2#6252)