Posted to common-user@hadoop.apache.org by Stas Oskin <st...@gmail.com> on 2009/10/13 12:40:18 UTC

Re: Delete replicated blocks?

Hi.

Any idea about setting the replication value to 2?

Was this fixed in the patches for 0.18.3, and if yes, which patch is this?

Thanks.

On Thu, Aug 27, 2009 at 8:18 PM, Stas Oskin <st...@gmail.com> wrote:

> Hi.
>
> Following on this issue, any idea whether all the bugs were worked out in
> 0.20, with a replication value of 2?
>
> I remember 0.18.3 had some issues with this, which actually caused a loss
> of data at one university.
>
> Regards.
>
> 2009/8/27 Alex Loddengaard <al...@cloudera.com>
>
> I don't know for sure, but running the rebalancer might do this for you.
>>
>> <
>>
>> http://hadoop.apache.org/common/docs/r0.20.0/hdfs_user_guide.html#Rebalancer
>> >
>>
>> Alex
>>
>> On Thu, Aug 27, 2009 at 9:18 AM, Michael Thomas <thomas@hep.caltech.edu
>> >wrote:
>>
>> > dfs.replication is only used by the client at the time the files are
>> > written.  Changing this setting will not automatically change the
>> > replication level on existing files.  To do that, you need to use the
>> > hadoop cli:
>> >
>> > hadoop fs -setrep -R 1 /
>> >
>> > --Mike
>> >
>> >
>> > Vladimir Klimontovich wrote:
>> > > This will happen automatically.
>> > > On Aug 27, 2009, at 6:04 PM, Andy Liu wrote:
>> > >
>> > >> I'm running a test Hadoop cluster, which had a dfs.replication value
>> > >> of 3.
>> > >> I'm now running out of disk space, so I've reduced dfs.replication to
>> > >> 1 and
>> > >> restarted my datanodes.  Is there a way to free up the
>> over-replicated
>> > >> blocks, or does this happen automatically at some point?
>> > >>
>> > >> Thanks,
>> > >> Andy
>> > >
>> > > ---
>> > > Vladimir Klimontovich,
>> > > skype: klimontovich
>> > > GoogleTalk/Jabber: klimontovich@gmail.com
>> > > Cell phone: +7926 890 2349
>> > >
>> >
>> >
>>
>
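[Editor's note: the procedure discussed in the thread above can be sketched as
the following commands. This is illustrative only: the target replication
factor of 2 and the path "/" are examples, and the commands assume the
0.18/0.20-era `hadoop` CLI is on the PATH. Do not run against a production
cluster without checking the docs for your version.]

```shell
# dfs.replication is only consulted by clients when files are written,
# so changing it in hdfs-site.xml does not touch existing files.
# Change the replication factor of existing files recursively:
hadoop fs -setrep -R 2 /

# The NameNode then schedules removal of over-replicated block copies
# on its own; the rebalancer can additionally even out disk usage
# across datanodes:
hadoop balancer

# Inspect the filesystem for over- or under-replicated blocks:
hadoop fsck / -blocks
```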