Posted to common-issues@hadoop.apache.org by "Akira AJISAKA (JIRA)" <ji...@apache.org> on 2015/11/14 05:10:11 UTC
[jira] [Updated] (HADOOP-12374) Description of hdfs expunge command is confusing
[ https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Akira AJISAKA updated HADOOP-12374:
-----------------------------------
Fix Version/s: 2.8.0
> Description of hdfs expunge command is confusing
> ------------------------------------------------
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
> Issue Type: Bug
> Components: documentation, trash
> Affects Versions: 2.7.0, 2.7.1
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Labels: documentation, newbie, suggestions, trash
> Fix For: 2.8.0
>
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on the Trash feature.
> This description is confusing. It gives users the impression that the command will empty the trash immediately, but it actually only removes checkpoints older than the retention threshold. If a user sets a long value for fs.trash.interval, this command will not remove anything until checkpoints have existed longer than that interval.
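> To illustrate the behavior described above, here is a sketch of the relevant configuration and command. The interval value below is an assumption chosen for illustration; fs.trash.interval is a real Hadoop property (set in core-site.xml, in minutes), and trash checkpoints live under each user's .Trash directory.
>
> <property>
> <name>fs.trash.interval</name>
> <!-- minutes a checkpoint is retained; 1440 = 1 day (example value) -->
> <value>1440</value>
> </property>
>
> $ hadoop fs -rm /user/alice/data.txt # file moves to .Trash/Current, not deleted
> $ hadoop fs -expunge # only deletes checkpoints older than fs.trash.interval;
> # a checkpoint created minutes ago survives until it ages past 1440 minutes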
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)