Posted to dev@sqoop.apache.org by "Jarek Jarcec Cecho (JIRA)" <ji...@apache.org> on 2013/07/08 21:33:50 UTC
[jira] [Commented] (SQOOP-1125) Out of memory errors when number of records to import < 0.5 * splitSize
[ https://issues.apache.org/jira/browse/SQOOP-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702312#comment-13702312 ]
Jarek Jarcec Cecho commented on SQOOP-1125:
-------------------------------------------
Hi [~dkincaid],
thank you very much for filing this JIRA! Would you mind also sharing the {{OutOfMemory}} exception stack trace that you are getting?
> Out of memory errors when number of records to import < 0.5 * splitSize
> -----------------------------------------------------------------------
>
> Key: SQOOP-1125
> URL: https://issues.apache.org/jira/browse/SQOOP-1125
> Project: Sqoop
> Issue Type: Bug
> Affects Versions: 1.4.3
> Reporter: Dave Kincaid
> Priority: Critical
>
> We are getting out-of-memory errors during import when the number of records to import is less than 0.5 * splitSize (and the resulting split size is a non-terminating decimal).
> For example, if numSplits = 3, minVal = 100, and maxVal = 101, then BigDecimalSplitter.split() adds an extraordinary number of tiny values to the splits List, eventually exhausting memory.
> I also noticed that there are no tests for BigDecimalSplitter.
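The failure mode described in the quoted issue can be sketched as follows. This is a hypothetical simplification of the loop in BigDecimalSplitter.split(), not the actual Sqoop source: if the computed split size becomes vanishingly small, the loop from minVal to maxVal barely advances and the splits list grows until memory runs out (an iteration cap stands in for the OOM here).

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // Simplified model of the split loop: walk from min to max in
    // increments of splitSize, collecting each boundary value.
    static List<BigDecimal> split(BigDecimal splitSize, BigDecimal min,
                                  BigDecimal max, int cap) {
        List<BigDecimal> splits = new ArrayList<>();
        BigDecimal cur = min;
        // The cap replaces the OutOfMemoryError for demonstration purposes;
        // the real loop has no such guard.
        while (cur.compareTo(max) <= 0 && splits.size() < cap) {
            splits.add(cur);
            cur = cur.add(splitSize);
        }
        return splits;
    }

    public static void main(String[] args) {
        // Healthy case: splitSize = 0.5 over [100, 101] yields a few boundaries.
        List<BigDecimal> ok = split(new BigDecimal("0.5"),
                new BigDecimal("100"), new BigDecimal("101"), 1_000_000);
        System.out.println(ok.size()); // 100, 100.5, 101 -> 3

        // Pathological case: a near-zero splitSize (as when the division
        // produces a tiny or over-scaled value) makes the boundary count
        // explode -- here it slams into the 1,000,000-entry cap.
        List<BigDecimal> bad = split(new BigDecimal("0.0000000001"),
                new BigDecimal("100"), new BigDecimal("101"), 1_000_000);
        System.out.println(bad.size());
    }
}
```

Without the cap used above, each loop iteration allocates another BigDecimal, which matches the reported symptom of the splits List growing until the JVM runs out of heap.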
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira