Posted to common-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2008/07/07 15:31:32 UTC
[jira] Commented: (HADOOP-3592) org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened
[ https://issues.apache.org/jira/browse/HADOOP-3592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12611159#action_12611159 ]
Steve Loughran commented on HADOOP-3592:
----------------------------------------
Bill, I see you are cleaning up when something fails:
+    InputStream in = null;
+    try {
+      in = srcFS.open(src);
+      IOUtils.copyBytes(in, new FileOutputStream(dst), conf);
+    } catch (IOException e) {
+      IOUtils.closeStream(in);
+    }
but shouldn't the exception be rethrown?
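A minimal sketch of the close-and-rethrow pattern being suggested, using plain java.io rather than the Hadoop FileSystem/IOUtils APIs; `SafeCopy` and `closeQuietly` are hypothetical stand-ins (`closeQuietly` plays the role of IOUtils.closeStream):

```java
import java.io.*;

public class SafeCopy {
    // Hypothetical stand-in for IOUtils.closeStream: close the stream,
    // swallowing any secondary IOException so the original failure wins.
    static void closeQuietly(Closeable c) {
        if (c == null) return;
        try {
            c.close();
        } catch (IOException ignored) {
            // only used on the failure path; the first exception is rethrown
        }
    }

    // Copy src to dst. If anything fails, close both streams and rethrow,
    // so the streams are not leaked AND the caller still sees the error.
    public static void copy(File src, File dst) throws IOException {
        InputStream in = null;
        OutputStream out = null;
        try {
            in = new FileInputStream(src);
            out = new FileOutputStream(dst);
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            // success path: close normally, propagating any close() failure
            in.close();
            out.close();
        } catch (IOException e) {
            closeQuietly(in);
            closeQuietly(out);
            throw e;   // rethrow instead of silently swallowing the failure
        }
    }
}
```

Catching, cleaning up, and rethrowing keeps the resource-release fix without changing the method's error contract, which is what the quoted patch loses by swallowing the IOException.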
> org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened
> -----------------------------------------------------------------------------------------------
>
> Key: HADOOP-3592
> URL: https://issues.apache.org/jira/browse/HADOOP-3592
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.18.0
> Reporter: Steve Loughran
> Priority: Minor
> Attachments: HADOOP-3592.patch
>
>
> FileUtil.copy() relies on IOUtils.copyBytes() to close the incoming streams, which it does. Normally.
> But if dstFS.create() raises any kind of IOException, then the input stream "in", which was created on the line above, will never be closed, and is therefore leaked.
>     InputStream in = srcFS.open(src);
>     OutputStream out = dstFS.create(dst, overwrite);
>     IOUtils.copyBytes(in, out, conf, true);
> Some try/catch wrapper around the open operations could close the streams if any exception gets thrown at that point in the copy process.