Posted to mapreduce-dev@hadoop.apache.org by "John Doe (JIRA)" <ji...@apache.org> on 2018/04/27 17:29:00 UTC

[jira] [Created] (MAPREDUCE-7088) DistributedFSCheckMapper.doIO hangs with user misconfigured inputs

John Doe created MAPREDUCE-7088:
-----------------------------------

             Summary: DistributedFSCheckMapper.doIO hangs with user misconfigured inputs
                 Key: MAPREDUCE-7088
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7088
             Project: Hadoop Map/Reduce
          Issue Type: Bug
          Components: test
    Affects Versions: 2.5.0
            Reporter: John Doe


When a user configures -bufferSize to be 0, the for loop in the DistributedFSCheck$DistributedFSCheckMapper.doIO method hangs endlessly: in.read(buffer, 0, 0) returns 0, so the loop condition curSize == bufferSize stays true while actualSize never advances. Here is the code snippet.
{code:java}
    int bufferSize = DEFAULT_BUFFER_SIZE;

    for(int i = 0; i < args.length; i++) { // parse command line
     ...
     else if (args[i].equals("-bufferSize")) {
        bufferSize = Integer.parseInt(args[++i]);
      }
     ...
    }
    public Object doIO(Reporter reporter, String name,  long offset) throws IOException {
      // open file
      FSDataInputStream in = null;
      Path p = new Path(name);
      try {
        in = fs.open(p);
      } catch(IOException e) {
        return name + "@(missing)";
      }
      in.seek(offset);
      long actualSize = 0;
      try {
        long blockSize = fs.getDefaultBlockSize(p);
        reporter.setStatus("reading " + name + "@" + offset + "/" + blockSize);
        for( int curSize = bufferSize; 
             curSize == bufferSize && actualSize < blockSize;
             actualSize += curSize) {
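          // BUG: when bufferSize == 0, read(buffer, 0, 0) returns 0, so
          // curSize == bufferSize stays true and actualSize never advances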
          curSize = in.read(buffer, 0, bufferSize);
        }
      } catch(IOException e) {
        ...
      } finally {
        in.close();
      }
      return new Long(actualSize);
    }
{code}
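One possible way to guard against this, sketched below (not a tested patch): reject non-positive -bufferSize values during argument parsing, and have the read loop in doIO stop when read() returns 0 or -1 instead of requiring every read to return exactly bufferSize bytes. Identifiers such as buffer, fs, blockSize, and actualSize refer to the surrounding code quoted above.
{code:java}
      else if (args[i].equals("-bufferSize")) {
        bufferSize = Integer.parseInt(args[++i]);
        if (bufferSize <= 0) {
          // fail fast on a misconfigured buffer size instead of hanging later in doIO
          throw new IllegalArgumentException(
              "-bufferSize must be a positive integer, got " + bufferSize);
        }
      }

      // inside doIO: terminate on EOF (-1) or a zero-length read rather than
      // relying on curSize == bufferSize to end the loop
      int curSize;
      while (actualSize < blockSize) {
        curSize = in.read(buffer, 0, bufferSize);
        if (curSize <= 0) {
          break;
        }
        actualSize += curSize;
      }
{code}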


