Posted to dev@phoenix.apache.org by "Sergey Soldatov (JIRA)" <ji...@apache.org> on 2016/02/02 23:47:39 UTC

[jira] [Created] (PHOENIX-2649) GC/OOM during BulkLoad

Sergey Soldatov created PHOENIX-2649:
----------------------------------------

             Summary: GC/OOM during BulkLoad
                 Key: PHOENIX-2649
                 URL: https://issues.apache.org/jira/browse/PHOENIX-2649
             Project: Phoenix
          Issue Type: Bug
    Affects Versions: 4.7.0
         Environment: Mac OS, Hadoop 2.7.2, HBase 1.1.2
            Reporter: Sergey Soldatov
            Priority: Critical


Phoenix fails to complete a bulk load of 40 MB of CSV data, hitting a GC heap error during the Reduce phase. The problem is in the comparator for TableRowkeyPair: it expects the serialized value to have been written with zero-compressed encoding, but at least in my case it was written in the regular way. So when it tries to obtain the lengths of the table name and row key, it always gets zero and reports that those byte arrays are equal. As a result, the reducer receives all the data produced by the mappers in a single reduce call and fails with OOM.
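The mismatch can be sketched with a standalone demo (hypothetical code, not the actual TableRowkeyPair comparator): a length written with plain DataOutput.writeInt starts with a 0x00 byte for any small value, and a reader expecting Hadoop-style zero-compressed VInt encoding decodes that leading byte as length 0.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical demo of the encoding mismatch described above.
public class VIntMismatchDemo {
    // Minimal decoder following Hadoop's zero-compressed VInt format:
    // a first byte >= -112 is the value itself (multi-byte case elided).
    static int readVIntFirstByte(byte[] buf) {
        byte first = buf[0];
        if (first >= -112) {
            return first; // single-byte value
        }
        throw new UnsupportedOperationException("multi-byte vint not needed here");
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(13); // "regular" 4-byte big-endian length
        byte[] serialized = bos.toByteArray(); // {0x00, 0x00, 0x00, 0x0D}

        // A VInt-expecting reader sees the leading 0x00 and decodes length 0,
        // so every table-name/row-key prefix compares as equal.
        int decoded = readVIntFirstByte(serialized);
        System.out.println("decoded length = " + decoded); // prints 0, not 13
    }
}
```

With every key comparing equal, the shuffle collapses all mapper output onto one reduce call, which matches the OOM seen here.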



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)