Posted to dev@phoenix.apache.org by "maghamravikiran (JIRA)" <ji...@apache.org> on 2016/02/03 23:21:39 UTC

[jira] [Assigned] (PHOENIX-2649) GC/OOM during BulkLoad

     [ https://issues.apache.org/jira/browse/PHOENIX-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

maghamravikiran reassigned PHOENIX-2649:
----------------------------------------

    Assignee: maghamravikiran

> GC/OOM during BulkLoad
> ----------------------
>
>                 Key: PHOENIX-2649
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2649
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>         Environment: Mac OS, Hadoop 2.7.2, HBase 1.1.2
>            Reporter: Sergey Soldatov
>            Assignee: maghamravikiran
>            Priority: Critical
>         Attachments: PHOENIX-2649-1.patch, PHOENIX-2649.patch
>
>
> Phoenix fails to complete a bulk load of 40 MB of CSV data, hitting a GC heap error during the Reduce phase. The problem is in the comparator for TableRowkeyPair: it expects the serialized value to have been written with zero-compressed encoding, but at least in my case it was written the regular way. So, when trying to obtain the lengths of the table name and row key, it always gets zero and reports that the byte arrays are equal. As a result, the reducer receives all the data produced by the mappers in a single reduce call and fails with OOM.
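
The following is a minimal, self-contained sketch of the encoding mismatch described above, not the actual Phoenix comparator code; the class name and variables are made up for illustration. It assumes the writer emits the key length as a fixed 4-byte int while the reader decodes it as a zero-compressed vint via Hadoop's WritableUtils. For small lengths the leading byte of a big-endian int is 0x00, which decodes as the vint value 0, so the comparator effectively sees two zero-length keys and reports them equal.

// VIntMismatchDemo.java -- hypothetical reproduction of the length-decoding
// mismatch (illustrative only, not Phoenix source).
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.io.WritableUtils;

public class VIntMismatchDemo {
    public static void main(String[] args) throws IOException {
        byte[] rowKey = "row-0001".getBytes(StandardCharsets.UTF_8);

        // Writer side: length written the "regular way" as a fixed 4-byte int.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        out.writeInt(rowKey.length);   // serializes as 0x00 0x00 0x00 0x08
        out.write(rowKey);
        out.flush();

        // Comparator side: expects a zero-compressed (vint) length prefix.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(baos.toByteArray()));
        int decodedLength = WritableUtils.readVInt(in); // leading 0x00 decodes to 0

        System.out.println("actual length  = " + rowKey.length);   // 8
        System.out.println("decoded length = " + decodedLength);   // 0
        // With a decoded length of 0, the comparator compares two empty byte
        // ranges, returns 0, and every key is treated as equal -- so the
        // shuffle funnels all mapper output into a single reduce call.
    }
}

Under that assumption, the fix would be to make the key's write() method and its raw comparator agree on a single encoding for the length prefixes (either both zero-compressed vints or both fixed-length ints).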



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)