Posted to mapreduce-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2014/07/31 01:33:39 UTC

[jira] [Resolved] (MAPREDUCE-2018) TeraSort example fails in trunk

     [ https://issues.apache.org/jira/browse/MAPREDUCE-2018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved MAPREDUCE-2018.
-----------------------------------------

    Resolution: Fixed

Who uses terasort?

Oh. Right.

Closing as fixed.

> TeraSort example fails in trunk
> -------------------------------
>
>                 Key: MAPREDUCE-2018
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2018
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: examples
>    Affects Versions: 0.22.0
>         Environment: Compile, build, and run the TeraSort example from trunk using several random files as input; TeraSort will fail.
>            Reporter: Krishna Ramachandran
>         Attachments: mapred-2018.patch
>
>
> Exceptions are thrown while computing splits near the end of the file - typically when the number of bytes read is smaller than RECORD_LENGTH.
> 10/08/17 22:44:17 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
> 10/08/17 22:44:17 INFO input.FileInputFormat: Total input paths to process : 1
> Spent 19ms computing base-splits.
> Spent 2ms computing TeraScheduler splits.
> Computing input splits took 22ms
> Sampling 1 splits of 1
> Got an exception while reading splits java.io.EOFException: read past eof
>         at org.apache.hadoop.examples.terasort.TeraInputFormat$TeraRecordReader.nextKeyValue(TeraInputFormat.java:267)
>         at org.apache.hadoop.examples.terasort.TeraInputFormat$1.run(TeraInputFormat.java:181)
> TeraInputFormat, I believe, assumes that file sizes are exact multiples of RECORD_LENGTH.
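
For context on the failure mode described in the quoted report: the EOFException comes from treating a short read at the end of the input as an error. The following is a minimal, self-contained sketch - not the actual MAPREDUCE-2018 patch and not the real TeraRecordReader code - of how a fixed-length record reader can tolerate a truncated final record instead of throwing. RECORD_LENGTH here stands in for TeraSort's 100-byte record (10-byte key plus 90-byte value).

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class FixedLengthRecordSketch {
        static final int RECORD_LENGTH = 100; // 10-byte key + 90-byte value

        // Reads one record; returns the number of bytes actually read
        // (1..RECORD_LENGTH), or -1 at a clean end of stream.
        static int readRecord(InputStream in, byte[] record) throws IOException {
            int read = 0;
            while (read < RECORD_LENGTH) {
                int n = in.read(record, read, RECORD_LENGTH - read);
                if (n < 0) {
                    // End of stream: return the short final record as-is
                    // instead of treating it as an error.
                    return read == 0 ? -1 : read;
                }
                read += n;
            }
            return read;
        }

        public static void main(String[] args) throws IOException {
            // 2.5 records: the last one is deliberately shorter than RECORD_LENGTH.
            byte[] data = new byte[250];
            InputStream in = new ByteArrayInputStream(data);
            byte[] record = new byte[RECORD_LENGTH];
            int n;
            while ((n = readRecord(in, record)) >= 0) {
                System.out.println("read record of " + n + " bytes");
            }
        }
    }

The actual fix committed to trunk may well differ in detail; the point is only that the last record of a split need not be a full RECORD_LENGTH bytes, so a short read there should not surface as an EOFException.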



--
This message was sent by Atlassian JIRA
(v6.2#6252)