Posted to common-issues@hadoop.apache.org by "John Doe (JIRA)" <ji...@apache.org> on 2018/04/26 19:41:00 UTC

[jira] [Created] (HADOOP-15417) retrieveBlock hangs when the configuration file is corrupted

John Doe created HADOOP-15417:
---------------------------------

             Summary: retrieveBlock hangs when the configuration file is corrupted
                 Key: HADOOP-15417
                 URL: https://issues.apache.org/jira/browse/HADOOP-15417
             Project: Hadoop Common
          Issue Type: Bug
          Components: common
    Affects Versions: 0.23.0
            Reporter: John Doe



The bufferSize field is read from the configuration file.

When the configuration file is corrupted, e.g., bufferSize=0, in.read(buf) always returns 0, so numRead never becomes -1. The while loop's condition therefore stays true forever, and Jets3tFileSystemStore.retrieveBlock() hangs endlessly.
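For reference, java.io.InputStream.read(byte[]) is documented to return 0 rather than -1 when the destination buffer has length zero, so the loop never observes end-of-stream. A minimal standalone illustration of that JDK contract (not Hadoop code, just a demonstration):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class ZeroBufferReadDemo {
  public static void main(String[] args) throws Exception {
    InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
    byte[] buf = new byte[0];      // what "new byte[bufferSize]" produces when bufferSize == 0
    int numRead = in.read(buf);    // JDK contract: 0 is returned for a zero-length buffer, not -1
    System.out.println(numRead);   // prints 0, so "while (numRead >= 0)" would never exit
  }
}
{code}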

Here is the affected code snippet:


{code:java}
  private int bufferSize;

  // Assigned straight from the configuration, with no check that the value is positive.
  this.bufferSize = conf.getInt(
      S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY,
      S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);

  public File retrieveBlock(Block block, long byteRangeStart)
    throws IOException {
    File fileBlock = null;
    InputStream in = null;
    OutputStream out = null;
    try {
      fileBlock = newBackupFile();
      in = get(blockToKey(block), byteRangeStart);
      out = new BufferedOutputStream(new FileOutputStream(fileBlock));
      byte[] buf = new byte[bufferSize]; // zero-length array when bufferSize == 0
      int numRead;
      // With a zero-length buffer, in.read(buf) returns 0 (never -1),
      // so this loop never terminates.
      while ((numRead = in.read(buf)) >= 0) {
        out.write(buf, 0, numRead);
      }
      return fileBlock;
    } catch (IOException e) {
      ...
    } finally {
      ...
    }
  }
{code}
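
One possible defensive fix (just a sketch against the snippet above, not a tested patch) would be to reject a non-positive buffer size when it is read from the configuration and fall back to the default:

{code:java}
// Sketch only (untested): validate the configured value before it is used,
// falling back to the default when it is non-positive.
int configuredSize = conf.getInt(
    S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY,
    S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);
if (configuredSize <= 0) {
  configuredSize = S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT;
}
this.bufferSize = configuredSize;
{code}

An alternative would be to guard the read loop itself (e.g., fail fast when the buffer is empty), but validating the configuration once where it is read seems simpler.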

Similar case: [HADOOP-15415|https://issues.apache.org/jira/browse/HADOOP-15415].




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org