Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2008/09/18 23:14:44 UTC
[jira] Updated: (HADOOP-3592) org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened
[ https://issues.apache.org/jira/browse/HADOOP-3592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Raghu Angadi updated HADOOP-3592:
---------------------------------
Status: Patch Available (was: Reopened)
> org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened
> -----------------------------------------------------------------------------------------------
>
> Key: HADOOP-3592
> URL: https://issues.apache.org/jira/browse/HADOOP-3592
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.19.0
> Reporter: Steve Loughran
> Assignee: Bill de hOra
> Priority: Minor
> Fix For: 0.19.0
>
> Attachments: HADOOP-3592-200807022209.patch, HADOOP-3592.patch, HADOOP-3592.patch, HADOOP-3592.patch
>
>
> FileUtil.copy() relies on IOUtils.copyBytes() to close the incoming streams, which it normally does.
> But if dstFS.create() raises any kind of IOException, then the input stream "in", which was created on the line above, will never get closed, and hence is leaked.
> InputStream in = srcFS.open(src);
> OutputStream out = dstFS.create(dst, overwrite);
> IOUtils.copyBytes(in, out, conf, true);
> A try/catch wrapper around the open operations could close the streams if an exception is thrown at that point in the copy process.
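The pattern suggested above can be sketched in isolation. This is a hedged illustration, not the attached patch: TrackingInputStream and StreamFactory are stand-ins invented here so the example is self-contained, with the factory playing the role of dstFS.create(dst, overwrite).

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyFix {
    // Stand-in for srcFS.open(src): records whether close() was called.
    static class TrackingInputStream extends InputStream {
        boolean closed = false;
        @Override public int read() { return -1; }
        @Override public void close() { closed = true; }
    }

    // Stand-in for dstFS.create(dst, overwrite).
    interface StreamFactory { OutputStream create() throws IOException; }

    // If creating the destination stream fails, close the already-opened
    // input stream before propagating the exception, so nothing leaks.
    static void copy(InputStream in, StreamFactory dst) throws IOException {
        OutputStream out = null;
        try {
            out = dst.create();
            // IOUtils.copyBytes(in, out, conf, true) would run here and
            // close both streams on the normal path.
        } catch (IOException e) {
            if (out != null) { try { out.close(); } catch (IOException ignored) {} }
            try { in.close(); } catch (IOException ignored) {}
            throw e;
        }
    }

    public static void main(String[] args) {
        TrackingInputStream in = new TrackingInputStream();
        try {
            copy(in, () -> { throw new IOException("destination can't be opened"); });
        } catch (IOException expected) {
            // The exception still propagates to the caller.
        }
        System.out.println("input closed: " + in.closed);
    }
}
```

The key point is that the catch block closes both streams quietly and then rethrows, so the caller still sees the original IOException but the input stream opened on the previous line is no longer abandoned.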
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.