Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/04/27 10:06:00 UTC
[jira] [Commented] (HADOOP-15417) retrieveBlock hangs when the configuration file is corrupted
[ https://issues.apache.org/jira/browse/HADOOP-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456189#comment-16456189 ]
Steve Loughran commented on HADOOP-15417:
-----------------------------------------
John. I was wondering if you really meant 0.23, but then I saw that this really is a piece of S3FileSystem, which we have now deleted entirely as of HADOOP-12609 in 2016.
This is going to be wontfix.
I don't think anyone is going to look at issues filed against 0.23; branch-2 and hadoop-3.1+ are where problems get replicated and fixed, then backported, at best, to hadoop-2.8 or, depending on severity, 2.7.
Can you check out the latest hadoop code and see if you can write tests to replicate there?
thanks
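
Since the underlying cause in the report below is an unvalidated buffer size, one possible defensive guard, sketched here with hypothetical names (this is not the actual Hadoop patch), is to clamp the configured value to a positive default before allocating the buffer:

```java
// Sketch of a defensive guard (hypothetical class and default value,
// for illustration only): fall back to a sane default when the
// configured buffer size is non-positive.
public class BufferSizeGuard {
    static final int DEFAULT_BUFFER_SIZE = 4096; // assumed default for illustration

    static int sanitize(int configured) {
        return configured > 0 ? configured : DEFAULT_BUFFER_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(sanitize(0));    // corrupted config -> prints 4096
        System.out.println(sanitize(8192)); // valid config -> prints 8192
    }
}
```

With such a guard in place, a corrupted `bufferSize=0` setting could no longer produce a zero-length buffer, and the copy loop would make progress as usual.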
> retrieveBlock hangs when the configuration file is corrupted
> ------------------------------------------------------------
>
> Key: HADOOP-15417
> URL: https://issues.apache.org/jira/browse/HADOOP-15417
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 0.23.0
> Reporter: John Doe
> Priority: Major
>
> The bufferSize is read from the configuration files.
> When the configuration file is corrupted, i.e., bufferSize=0, numRead will always be 0, making the while loop's condition always true and hanging Jets3tFileSystemStore.retrieveBlock() endlessly.
> Here is the snippet of the code.
> {code:java}
> private int bufferSize;
>
> this.bufferSize = conf.getInt(
>     S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_KEY,
>     S3FileSystemConfigKeys.S3_STREAM_BUFFER_SIZE_DEFAULT);
>
> public File retrieveBlock(Block block, long byteRangeStart)
>     throws IOException {
>   File fileBlock = null;
>   InputStream in = null;
>   OutputStream out = null;
>   try {
>     fileBlock = newBackupFile();
>     in = get(blockToKey(block), byteRangeStart);
>     out = new BufferedOutputStream(new FileOutputStream(fileBlock));
>     byte[] buf = new byte[bufferSize];
>     int numRead;
>     // When bufferSize == 0, in.read(buf) returns 0 (never -1), so this loop never exits
>     while ((numRead = in.read(buf)) >= 0) {
>       out.write(buf, 0, numRead);
>     }
>     return fileBlock;
>   } catch (IOException e) {
>     ...
>   } finally {
>     ...
>   }
> }
> {code}
> Similar case: [Hadoop-15415|https://issues.apache.org/jira/browse/HADOOP-15415].
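
The hang described above follows directly from the InputStream contract: a read into a zero-length buffer returns 0 rather than -1, so a loop guarded by `numRead >= 0` never terminates. A minimal, self-contained demonstration (this is plain JDK code, not Hadoop code):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class ZeroLengthReadDemo {
    public static void main(String[] args) throws Exception {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
        byte[] buf = new byte[0]; // simulates bufferSize == 0 from a corrupted config

        // Per the InputStream contract, a read into a zero-length array
        // returns 0 while bytes remain (never -1), so a loop guarded by
        // (numRead >= 0) spins forever without making progress.
        int numRead = in.read(buf);
        System.out.println(numRead); // prints 0
    }
}
```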
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org