Posted to yarn-dev@hadoop.apache.org by "yeshavora (JIRA)" <ji...@apache.org> on 2013/08/21 01:26:53 UTC

[jira] [Created] (YARN-1086) reducer of sort job restarts from scratch in between after RM restart

yeshavora created YARN-1086:
-------------------------------

             Summary: reducer of sort job restarts from scratch in between after RM restart
                 Key: YARN-1086
                 URL: https://issues.apache.org/jira/browse/YARN-1086
             Project: Hadoop YARN
          Issue Type: Bug
            Reporter: yeshavora
            Priority: Blocker


Steps Followed:
1) Run a sort job. As soon as it finishes all the map tasks [100% map], restart the ResourceManager (see the command sketch after these steps).

2) Analyse the progress of the sort job. It shows:
100% map 0% reduce
100% map 32% reduce
100% map 0% reduce
The reducer stays at around 30% reduce for 5-10 minutes, and then starts again from scratch.
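
For reference, a minimal sketch of the reproduction commands is below. The example jar name, input/output paths, and the use of yarn-daemon.sh are assumptions for illustration only; the exact jar and the way the ResourceManager is restarted vary by release and deployment.

    # generate input and run the example sort job (hypothetical paths/jar name)
    hadoop jar hadoop-mapreduce-examples-*.jar randomwriter /tmp/sort-in
    hadoop jar hadoop-mapreduce-examples-*.jar sort /tmp/sort-in /tmp/sort-out

    # from another terminal, once the job output shows "map 100%",
    # restart the ResourceManager on the node where it runs:
    yarn-daemon.sh stop resourcemanager
    yarn-daemon.sh start resourcemanager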

Log from failed reducer attempt:

Error: java.io.IOException: Error while reading compressed data
    at org.apache.hadoop.io.IOUtils.wrappedReadForCompressedData(IOUtils.java:174)
    at org.apache.hadoop.mapred.IFile$Reader.readData(IFile.java:383)
    at org.apache.hadoop.mapred.IFile$Reader.nextRawValue(IFile.java:444)
    at org.apache.hadoop.mapred.Merger$Segment.nextRawValue(Merger.java:327)
    at org.apache.hadoop.mapred.Merger$Segment.getValue(Merger.java:309)
    at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:533)
    at org.apache.hadoop.mapred.ReduceTask$4.next(ReduceTask.java:619)
    at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:154)
    at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
    at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:297)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:645)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:405)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.fs.FSError: java.io.IOException: Input/output error
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.read(RawLocalFileSystem.java:177)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.mapred.IFileInputStream.doRead(IFileInputStream.java:209)
    at org.apache.hadoop.mapred.IFileInputStream.read(IFileInputStream.java:152)
    at org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:127)
    at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:98)
    at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
    at org.apache.hadoop.io.IOUtils.wrappedReadForCompressedData(IOUtils.java:170)
    ... 17 more
Caused by: java.io.IOException: Input/output error
    at java.io.FileInputStream.readBytes(Native Method)
    at java.io.FileInputStream.read(FileInputStream.java:220)
    at org.apache.hadoop.fs.RawLocalFileSystem$TrackingFileInputStream.read(RawLocalFileSystem.java:110)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.read(RawLocalFileSystem.java:171)
    ... 26 more


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira