Posted to common-user@hadoop.apache.org by bzheng <bi...@gmail.com> on 2009/03/12 03:48:41 UTC

What happens when you do a ctrl-c on a big dfs -rmr

I pressed ctrl-c immediately after issuing a hadoop dfs -rmr command.  The rmr
target is no longer visible from the dfs -ls command.  The number of files to be
deleted is huge, and I don't think they could all have been deleted between the
time the command was issued and the ctrl-c.  Does this mean it leaves behind
unreachable files on the slave nodes, making them dead weight?  We can always
reformat HDFS to be sure, but is there a way to check?  Thanks.


Re: What happens when you do a ctrl-c on a big dfs -rmr

Posted by lohit <lo...@yahoo.com>.
When you issue -rmr on a directory, the NameNode gets the directory name and starts deleting the files recursively. It adds the blocks belonging to those files to its invalidate list and then deletes the blocks lazily, issuing commands to the datanodes to remove them over time. So yes, the blocks on the datanodes will be cleaned up, just give it some time. You do not need to reformat HDFS.
Lohit
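
If you want to confirm that the space really is being reclaimed, a rough
command-line check could look like this (just a sketch, assuming a stock
Hadoop CLI; /path/that/was/removed is a placeholder for whatever directory
you deleted):

    # The namespace entry should already be gone:
    hadoop dfs -ls /path/that/was/removed

    # "DFS Used" should shrink over the next several minutes as the
    # datanodes process the block invalidation requests from the NameNode:
    hadoop dfsadmin -report | grep "DFS Used"

    # fsck should still report a healthy filesystem once the deletes finish:
    hadoop fsck /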




Re: What happens when you do a ctrl-c on a big dfs -rmr

Posted by 何 永强 <he...@software.ict.ac.cn>.
Whether everything is deleted or nothing at all depends on how quickly you
press ctrl-c. The delete is not executed in your terminal; the rmr request is
sent to the Hadoop NameNode and executed there.
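
In other words, ctrl-c only kills the local client; once the request has
reached the NameNode, interrupting the client changes nothing. A quick way to
see which case you hit (sketch only; /user/foo/bigdir stands in for your
directory):

    # Either the whole directory is gone, or it is still fully intact:
    hadoop dfs -ls /user/foo/bigdir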

