Posted to dev@sqoop.apache.org by "Sai Karthik Ganguru (JIRA)" <ji...@apache.org> on 2014/11/04 01:04:34 UTC

[jira] [Comment Edited] (SQOOP-1125) Out of memory errors when number of records to import < 0.5 * splitSize

    [ https://issues.apache.org/jira/browse/SQOOP-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14195415#comment-14195415 ] 

Sai Karthik Ganguru edited comment on SQOOP-1125 at 11/4/14 12:04 AM:
----------------------------------------------------------------------

[~jarcec] I have merged both the patches, and hopefully the above issues are resolved as well.


was (Author: saikarthik):
[~jarcec] I have merged both the patches and hopefully the above issues got resolved.

> Out of memory errors when number of records to import < 0.5 * splitSize
> -----------------------------------------------------------------------
>
>                 Key: SQOOP-1125
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1125
>             Project: Sqoop
>          Issue Type: Bug
>    Affects Versions: 1.4.3
>            Reporter: Dave Kincaid
>            Assignee: Sai Karthik Ganguru
>            Priority: Critical
>              Labels: newbie
>         Attachments: sqoop-1125.patch
>
>
> We are getting out of memory errors during import if the number of records to import is less than 0.5 * splitSize (and the resulting split size is a nonterminating decimal).
> For example, if numSplits = 3, minVal = 100, and maxVal = 101, then in BigDecimalSplitter.split() an extraordinary number of tiny values will be added to the splits List and memory will eventually run out.
> I also noticed that there are no tests for BigDecimalSplitter.
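
For illustration, here is a minimal, self-contained sketch of how a split loop can blow up for the numSplits = 3, minVal = 100, maxVal = 101 case described above. The tryDivide fallback and the MIN_INCREMENT floor below are assumptions made for this sketch, not the actual Sqoop source; the real loop has no iteration cap, so it keeps allocating split points until the heap is exhausted.

    import java.math.BigDecimal;
    import java.math.RoundingMode;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch only, not the BigDecimalSplitter source.
    public class SplitSizeBlowupDemo {

        // Hypothetical stand-in for a minimum step size (~4.9e-320).
        private static final BigDecimal MIN_INCREMENT =
                new BigDecimal(10000 * Double.MIN_VALUE);

        // Assumed division helper: falls back to rounding at the numerator's
        // scale when the quotient has a nonterminating decimal expansion.
        static BigDecimal tryDivide(BigDecimal numerator, BigDecimal denominator) {
            try {
                return numerator.divide(denominator);
            } catch (ArithmeticException ae) {
                // 1 / 3 rounded at scale 0 collapses to 0.
                return numerator.divide(denominator, RoundingMode.HALF_UP);
            }
        }

        public static void main(String[] args) {
            BigDecimal minVal = new BigDecimal(100);
            BigDecimal maxVal = new BigDecimal(101);
            BigDecimal numSplits = new BigDecimal(3);

            BigDecimal splitSize = tryDivide(maxVal.subtract(minVal), numSplits);
            if (splitSize.compareTo(MIN_INCREMENT) < 0) {
                // A near-zero step replaces the intended ~0.33 step.
                splitSize = MIN_INCREMENT;
            }
            System.out.println("splitSize = " + splitSize);

            // Stepping from 100 to 101 by ~1e-320 would take on the order of
            // 1e320 iterations, so an uncapped loop exhausts memory; the cap
            // here only keeps the demo safe to run.
            List<BigDecimal> splits = new ArrayList<>();
            BigDecimal curVal = minVal;
            int cap = 100_000;
            while (curVal.compareTo(maxVal) <= 0 && splits.size() < cap) {
                splits.add(curVal);
                curVal = curVal.add(splitSize);
            }
            System.out.println("Split points before hitting the cap: " + splits.size());
        }
    }

A fix along the lines of the attached patch would presumably bound the number of split points, or clamp splitSize relative to maxVal - minVal, so the loop terminates after roughly numSplits iterations.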



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)