Posted to dev@sqoop.apache.org by "Kyrill Alyoshin (JIRA)" <ji...@apache.org> on 2013/07/08 21:39:49 UTC

[jira] [Comment Edited] (SQOOP-1125) Out of memory errors when number of records to import < 0.5 * splitSize

    [ https://issues.apache.org/jira/browse/SQOOP-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702318#comment-13702318 ] 

Kyrill Alyoshin edited comment on SQOOP-1125 at 7/8/13 7:38 PM:
----------------------------------------------------------------

Here it is, guys:


org.springframework.batch.core.step.AbstractStep - Encountered an error executing the step
java.lang.OutOfMemoryError: Java heap space
        at java.math.BigDecimal.bigTenToThe(BigDecimal.java:3376)
        at java.math.BigDecimal.bigMultiplyPowerTen(BigDecimal.java:3508)
        at java.math.BigDecimal.compareMagnitude(BigDecimal.java:2603)
        at java.math.BigDecimal.compareTo(BigDecimal.java:2566)
        at org.apache.sqoop.mapreduce.db.BigDecimalSplitter.split(BigDecimalSplitter.java:138)
        at org.apache.sqoop.mapreduce.db.BigDecimalSplitter.split(BigDecimalSplitter.java:69)
        at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:167)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1033)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1050)
        at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:173)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:934)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:885)

                
                  
> Out of memory errors when number of records to import < 0.5 * splitSize
> -----------------------------------------------------------------------
>
>                 Key: SQOOP-1125
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1125
>             Project: Sqoop
>          Issue Type: Bug
>    Affects Versions: 1.4.3
>            Reporter: Dave Kincaid
>            Priority: Critical
>
> We are getting out-of-memory errors during import when the number of records to import is less than 0.5 * splitSize (and the computed split size is a non-terminating decimal).
> For example, if numSplits = 3, minVal = 100, and maxVal = 101, then BigDecimalSplitter.split() adds an extraordinary number of tiny values to the splits List and eventually runs out of memory.
> I also noticed that there are no tests for BigDecimalSplitter.
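The scenario above can be sketched as follows. This is a hedged reconstruction of the failure mode, not the actual Sqoop source: the `tryDivide` fallback and the `MIN_INCREMENT` clamp are modeled on the 1.4.3 `BigDecimalSplitter`, but the class and constant here are illustrative. With minVal = 100, maxVal = 101, and numSplits = 3, the quotient 1/3 is non-terminating, so exact division throws and the fallback rounds at the numerator's scale (0), yielding a split size of zero. Clamping that to a subnormal-sized minimum step then makes the boundary loop `while (curVal <= maxVal) curVal = curVal.add(splitSize)` need on the order of 10^320 iterations to cross a range of width 1, growing the splits list until the heap is exhausted:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SplitStepDemo {

    // Fallback division modeled on BigDecimalSplitter.tryDivide: exact
    // division throws ArithmeticException for non-terminating quotients,
    // so round at the numerator's scale instead.
    static BigDecimal tryDivide(BigDecimal numerator, BigDecimal denominator) {
        try {
            return numerator.divide(denominator);
        } catch (ArithmeticException ae) {
            return numerator.divide(denominator, RoundingMode.HALF_UP);
        }
    }

    public static void main(String[] args) {
        BigDecimal minVal = new BigDecimal(100);
        BigDecimal maxVal = new BigDecimal(101);
        BigDecimal numSplits = new BigDecimal(3);

        // (101 - 100) / 3 is non-terminating; rounding at scale 0 gives 0.
        BigDecimal splitSize = tryDivide(maxVal.subtract(minVal), numSplits);
        System.out.println("splitSize = " + splitSize); // prints: splitSize = 0

        // Illustrative stand-in for the splitter's minimum-increment clamp;
        // a step this tiny cannot usefully cross a range of width 1.
        BigDecimal MIN_INCREMENT = new BigDecimal(10000 * Double.MIN_VALUE);
        if (splitSize.compareTo(MIN_INCREMENT) < 0) {
            splitSize = MIN_INCREMENT;
        }
        System.out.println("clamped step = " + splitSize);
    }
}
```

A guard that caps the number of loop iterations at numSplits (or enforces a minimum step proportional to the range width rather than an absolute constant) would avoid the blow-up.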

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira