Posted to mapreduce-issues@hadoop.apache.org by "Ting Dai (JIRA)" <ji...@apache.org> on 2017/10/24 21:20:01 UTC

[jira] [Updated] (MAPREDUCE-6990) FileInputStream.skip function can return 0 when the file is corrupted, causing an infinite loop

     [ https://issues.apache.org/jira/browse/MAPREDUCE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ting Dai updated MAPREDUCE-6990:
--------------------------------
    Description: 
When a file is corrupted, for example by bad encoding (see [Yarn-2724](https://issues.apache.org/jira/browse/YARN-2724)), FileInputStream.skip can return 0, causing the while loop in TaskLog$Reader to become infinite.

{code:java}
    public Reader(TaskAttemptID taskid, LogName kind,  long start, long end, boolean isCleanup) throws IOException {
      // find the right log file
      LogFileDetail fileDetail = getLogFileDetail(taskid, kind, isCleanup);
      // calculate the start and stop
      long size = fileDetail.length;
      if (start < 0) {
        start += size + 1;
      }
      if (end < 0) {
        end += size + 1;
      }
      start = Math.max(0, Math.min(start, size));
      end = Math.max(0, Math.min(end, size));
      start += fileDetail.start;
      end += fileDetail.start;
      bytesRemaining = end - start;
      String owner = obtainLogDirOwner(taskid);
      file = SecureIOUtils.openForRead(new File(fileDetail.location, kind.toString()),  owner, null);
      // skip upto start
      long pos = 0;
      while (pos < start) {
        long result = file.skip(start - pos);
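        // NOTE: FileInputStream.skip() can legitimately return 0 (for
        // example on a corrupted or truncated file); the check below only
        // breaks on a negative result, so a repeated 0 return makes this
        // loop spin forever.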
        if (result < 0) {
          bytesRemaining = 0;
          break;
        }
        pos += result;
      }
    }
{code}
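One possible way to avoid the spin is to treat a non-positive return from skip() as "no progress" and fall back to a single-byte read(), which either advances the position or detects end-of-stream. A minimal, self-contained sketch (names and placement are mine, not a committed patch):

{code:java}
import java.io.IOException;
import java.io.InputStream;

public final class SafeSkip {
  /**
   * Skips up to {@code count} bytes and returns the number actually
   * skipped. Unlike a loop that only stops on a negative skip() result,
   * this cannot spin forever when skip() keeps returning 0.
   */
  public static long skipUpTo(InputStream in, long count) throws IOException {
    long pos = 0;
    while (pos < count) {
      long result = in.skip(count - pos);
      if (result <= 0) {
        // skip() made no progress; read one byte instead so we either
        // advance or detect end-of-stream and stop
        if (in.read() < 0) {
          break;
        }
        result = 1;
      }
      pos += result;
    }
    return pos;
  }
}
{code}

The Reader constructor above could then set bytesRemaining = 0 whenever the returned count is less than start. If it is available in the affected branch, org.apache.hadoop.io.IOUtils.skipFully is another option, since it throws an IOException instead of looping when the stream ends early.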

Similar bugs are [Hadoop-8614](https://issues.apache.org/jira/browse/HADOOP-8614) and [Yarn-2905](https://issues.apache.org/jira/browse/YARN-2905).


> FileInputStream.skip function can return 0 when the file is corrupted, causing an infinite loop
> -----------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6990
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6990
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 0.23.0
>            Reporter: Ting Dai
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-help@hadoop.apache.org