Posted to user@hadoop.apache.org by Krish Donald <go...@gmail.com> on 2015/02/26 19:31:16 UTC

Backup of individual component of Hadoop ecosystem

Hi,

As per my understanding, we don't take backups of a Hadoop cluster
because the size is generally very large.

However, if somebody has dropped a table by mistake, how should we
recover the data?

How do we take backups of the individual components of the Hadoop
ecosystem?

Thanks
Krish

Re: Backup of individual component of Hadoop ecosystem

Posted by Artem Ervits <ar...@gmail.com>.
There are several approaches. I would check the HDFS trash folder of the
user who deleted the file. Expiration of items in the trash is controlled
by the fs.trash.interval property in core-site.xml.
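
As a minimal sketch (the 1440-minute retention, the user name krish, and
the Hive warehouse path are assumptions for illustration): enable trash in
core-site.xml, then move the deleted files back out of the trash.

  <!-- core-site.xml: keep deleted files in trash for 1440 minutes (1 day) -->
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>

  # list the deleting user's trash, then restore the table directory
  hdfs dfs -ls /user/krish/.Trash/Current
  hdfs dfs -mv /user/krish/.Trash/Current/user/hive/warehouse/mytable \
      /user/hive/warehouse/mytable

Note that trash only catches deletes issued through the fs shell and
clients that honor it, and setting fs.trash.interval to 0 disables it
entirely.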
Artem Ervits
On Feb 26, 2015 1:31 PM, "Krish Donald" <go...@gmail.com> wrote:

> Hi,
>
> As per my understanding, we don't take backups of a Hadoop cluster
> because the size is generally very large.
>
> However, if somebody has dropped a table by mistake, how should we
> recover the data?
>
> How do we take backups of the individual components of the Hadoop
> ecosystem?
>
> Thanks
> Krish
>
