Posted to common-dev@hadoop.apache.org by "Joydeep Sen Sarma (JIRA)" <ji...@apache.org> on 2008/01/20 20:22:35 UTC

[jira] Updated: (HADOOP-2670) doDF frequently brings task down due to lack of memory

     [ https://issues.apache.org/jira/browse/HADOOP-2670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joydeep Sen Sarma updated HADOOP-2670:
--------------------------------------

    Summary: doDF frequently brings task down due to lack of memory  (was: doDF frequently brings process down due to lack of memory)

> doDF frequently brings task down due to lack of memory
> ------------------------------------------------------
>
>                 Key: HADOOP-2670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2670
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.14.4
>            Reporter: Joydeep Sen Sarma
>
> we are running with -Xmx 1024M. Every once in a while, we see tasks failing because of:
> java.io.IOException: java.io.IOException: Cannot allocate memory
> 	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
> 	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:451)
> 	at java.lang.Runtime.exec(Runtime.java:591)
> 	at java.lang.Runtime.exec(Runtime.java:464)
> 	at org.apache.hadoop.fs.DF.doDF(DF.java:60)
> 	at org.apache.hadoop.fs.DF.getAvailable(DF.java:99)
> 	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:259)
> 	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:289)
> 	at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:155)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.newBackupFile(DFSClient.java:1475)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.openBackupStream(DFSClient.java:1442)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:1600)
> 	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:140)
> 	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:122)
> 	at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:112)
> 	at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:86)
> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:39)
> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
> 	at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:822)
> 	at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:69)
> 	at org.apache.hadoop.mapred.ReduceTask$2.collect(ReduceTask.java:304)
> 	at com.facebook.hive.streaming.HiveJoin$JoinReduce.reduce(HiveJoin.java:546)
> 	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:322)
> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1743)
> When the task re-runs, it succeeds. It seems like this is an edge case where the garbage collector needs to be run before trying to spawn the external process (going to try it out). Any other ideas?
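
For context, the exception comes out of Runtime.exec: DF.doDF shells out to an external "df" and parses its output, and it is the process creation itself (the UNIXProcess constructor, which forks the JVM before exec'ing df) that throws "Cannot allocate memory". A rough sketch of that code path follows, just to show where the exec sits; this is not the actual 0.14.4 DF source, and the class name, method name, and df arguments are only illustrative:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.StringTokenizer;

    // Illustrative stand-in for the org.apache.hadoop.fs.DF.doDF/getAvailable path.
    public class DfSketch {

      // Returns the "Available" column of `df -k <path>` in kilobytes.
      public static long getAvailableKb(String path) throws IOException {
        // This is the call that fails: the large-heap JVM has to fork before
        // it can exec df, and the fork itself can fail with
        // "Cannot allocate memory" even though df is tiny.
        Process process = Runtime.getRuntime().exec(new String[] {"df", "-k", path});
        BufferedReader in =
            new BufferedReader(new InputStreamReader(process.getInputStream()));
        try {
          in.readLine();                              // skip the header line
          String line = in.readLine();
          if (line == null) {
            throw new IOException("no output from df");
          }
          StringTokenizer tokens = new StringTokenizer(line);
          tokens.nextToken();                         // filesystem
          tokens.nextToken();                         // 1K-blocks
          tokens.nextToken();                         // used
          return Long.parseLong(tokens.nextToken());  // available (KB)
        } finally {
          in.close();
          process.destroy();
        }
      }
    }

The garbage-collector-before-spawn experiment mentioned in the description would look roughly like this (again only a sketch, wrapping the hypothetical helper above):

    long available;
    try {
      available = DfSketch.getAvailableKb(path);
    } catch (IOException e) {
      System.gc();   // give the collector a chance, then retry the exec once
      available = DfSketch.getAvailableKb(path);
    }
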

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.