Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/04/27 10:08:00 UTC

[jira] [Updated] (HADOOP-15417) s3: retrieveBlock hangs when the configuration file is corrupted

     [ https://issues.apache.org/jira/browse/HADOOP-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-15417:
------------------------------------
        Summary: s3: retrieveBlock hangs when the configuration file is corrupted  (was: retrieveBlock hangs when the configuration file is corrupted)
    Description: 
The bufferSize field is read from the configuration file.

When the configuration file is corrupted, e.g. bufferSize = 0, the read buffer has zero length, so in.read(buf) always returns 0 instead of -1. The while loop's condition is therefore always true, and Jets3tFileSystemStore.retrieveBlock() hangs endlessly.

Here is the relevant snippet:


{code:java}
  private int bufferSize;

  this.bufferSize = conf.getInt(
      S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY,
      S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);

  public File retrieveBlock(Block block, long byteRangeStart)
    throws IOException {
    File fileBlock = null;
    InputStream in = null;
    OutputStream out = null;
    try {
      fileBlock = newBackupFile();
      in = get(blockToKey(block), byteRangeStart);
      out = new BufferedOutputStream(new FileOutputStream(fileBlock));
      byte[] buf = new byte[bufferSize];
      int numRead;
      while ((numRead = in.read(buf)) >= 0) {
        out.write(buf, 0, numRead);
      }
      return fileBlock;
    } catch (IOException e) {
      ...
    } finally {
      ...
    }
  }
{code}

Similar case: [HADOOP-15415|https://issues.apache.org/jira/browse/HADOOP-15415].
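A defensive fix (a sketch only, not a committed patch; the helper name {{copy}} and the default constant are illustrative, the real default is S3_STREAM_BUFFER_SIZE_DEFAULT) would validate the configured buffer size before allocating the buffer, falling back to the default when the value is non-positive:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SafeCopy {
  // Illustrative stand-in for S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT.
  static final int DEFAULT_BUFFER_SIZE = 4096;

  // Copies in to out, guarding against a corrupted (non-positive) bufferSize.
  // Without the guard, new byte[0] makes in.read(buf) return 0 forever,
  // so the loop never sees the -1 end-of-stream marker.
  static long copy(InputStream in, OutputStream out, int bufferSize)
      throws IOException {
    if (bufferSize <= 0) {
      bufferSize = DEFAULT_BUFFER_SIZE;  // fall back instead of hanging
    }
    byte[] buf = new byte[bufferSize];
    long total = 0;
    int numRead;
    while ((numRead = in.read(buf)) >= 0) {
      out.write(buf, 0, numRead);
      total += numRead;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // bufferSize = 0 would hang the original loop; here it falls back and completes.
    long n = copy(new ByteArrayInputStream("hello".getBytes()), out, 0);
    System.out.println(n);  // prints 5
  }
}
```

An alternative would be to fail fast with an IOException on an invalid configured value, which surfaces the corrupted configuration instead of silently papering over it.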


  was:

The bufferSize field is read from the configuration file.

When the configuration file is corrupted, e.g. bufferSize = 0, the read buffer has zero length, so in.read(buf) always returns 0 instead of -1. The while loop's condition is therefore always true, and Jets3tFileSystemStore.retrieveBlock() hangs endlessly.

Here is the relevant snippet:


{code:java}
  private int bufferSize;

  this.bufferSize = conf.getInt(
      S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY,
      S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);

  public File retrieveBlock(Block block, long byteRangeStart)
    throws IOException {
    File fileBlock = null;
    InputStream in = null;
    OutputStream out = null;
    try {
      fileBlock = newBackupFile();
      in = get(blockToKey(block), byteRangeStart);
      out = new BufferedOutputStream(new FileOutputStream(fileBlock));
      byte[] buf = new byte[bufferSize];
      int numRead;
      while ((numRead = in.read(buf)) >= 0) {
        out.write(buf, 0, numRead);
      }
      return fileBlock;
    } catch (IOException e) {
      ...
    } finally {
      ...
    }
  }
{code}

Similar case: [HADOOP-15415|https://issues.apache.org/jira/browse/HADOOP-15415].


    Component/s:     (was: common)
                 fs/s3

> s3: retrieveBlock hangs when the configuration file is corrupted
> ----------------------------------------------------------------
>
>                 Key: HADOOP-15417
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15417
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 0.23.0
>            Reporter: John Doe
>            Priority: Major
>
> The bufferSize field is read from the configuration file.
> When the configuration file is corrupted, e.g. bufferSize = 0, the read buffer has zero length, so in.read(buf) always returns 0 instead of -1. The while loop's condition is therefore always true, and Jets3tFileSystemStore.retrieveBlock() hangs endlessly.
> Here is the relevant snippet:
> {code:java}
>   private int bufferSize;
>   this.bufferSize = conf.getInt(
>       S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY,
>       S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);
>   public File retrieveBlock(Block block, long byteRangeStart)
>     throws IOException {
>     File fileBlock = null;
>     InputStream in = null;
>     OutputStream out = null;
>     try {
>       fileBlock = newBackupFile();
>       in = get(blockToKey(block), byteRangeStart);
>       out = new BufferedOutputStream(new FileOutputStream(fileBlock));
>       byte[] buf = new byte[bufferSize];
>       int numRead;
>       while ((numRead = in.read(buf)) >= 0) {
>         out.write(buf, 0, numRead);
>       }
>       return fileBlock;
>     } catch (IOException e) {
>       ...
>     } finally {
>       ...
>     }
>   }
> {code}
> Similar case: [HADOOP-15415|https://issues.apache.org/jira/browse/HADOOP-15415].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org