Posted to common-dev@hadoop.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2009/04/03 17:24:13 UTC
[jira] Commented: (HADOOP-5551) Namenode permits directory destruction on overwrite
[ https://issues.apache.org/jira/browse/HADOOP-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12695428#action_12695428 ]
Hudson commented on HADOOP-5551:
--------------------------------
Integrated in Hadoop-trunk #796 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/796/])
> Namenode permits directory destruction on overwrite
> ---------------------------------------------------
>
> Key: HADOOP-5551
> URL: https://issues.apache.org/jira/browse/HADOOP-5551
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.19.1
> Reporter: Brian Bockelman
> Assignee: Brian Bockelman
> Priority: Critical
> Fix For: 0.19.2, 0.20.0
>
> Attachments: HADOOP-5551-v2.patch, HADOOP-5551-v3.patch, HADOOP-5551-v4.patch
>
>
> The FSNamesystem's startFileInternal allows a file to overwrite a directory. That is, if a directory named /foo/bar exists and you write a file named /foo/bar with overwrite enabled, the file is created and the directory (and its contents) disappears.
> This is most apparent to users of libhdfs, where overwrite is always enabled. Any libhdfs application that does not first check whether the target path is a directory can therefore destroy a directory simply by creating a file of the same name.
--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.