Posted to hdfs-user@hadoop.apache.org by Kyungyong Lee <ia...@gmail.com> on 2012/05/20 21:07:38 UTC

a question about controlling roles (read, write) of data node

Hello,

I would like to ask whether I can do the following. Suppose I have a
datanode, say dn1, that already contains some useful blocks. I do not
want new data blocks to be written to this datanode, but I still want
to read the blocks that already exist on it (dn1).
I considered using the exclude file (dfs.hosts.exclude). However, if I
add dn1 to the exclude list, I can no longer read the blocks already
stored on dn1. If that is correct, could you give me some guidance on
how to achieve what I have in mind with HDFS?

Thanks,

Re: a question about controlling roles (read, write) of data node

Posted by Kyungyong Lee <ia...@gmail.com>.
Thank you very much for the answer, Harsh J. Your suggestion totally
makes sense, and I can do what I wanted :)

Best,

On Sun, May 20, 2012 at 10:53 PM, Harsh J <ha...@cloudera.com> wrote:
> [quoted reply snipped; see Harsh J's message below]

Re: a question about controlling roles (read, write) of data node

Posted by Harsh J <ha...@cloudera.com>.
Kyungyong Lee,

One way: this may be possible if you inflate the
"dfs.datanode.du.reserved" property on that specific DataNode to a very
large byte value (greater than the node's total volume size). Your NN
will still consider the DN a valid node carrying readable blocks, but
the DN will never be selected for new writes because of its apparent
lack of free space.
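To illustrate, a minimal sketch of such an override in hdfs-site.xml on
dn1 only (the 100 TB value is an arbitrary assumption; any figure larger
than the node's total disk capacity works, since the reservation is
expressed in bytes):

```xml
<!-- hdfs-site.xml on dn1 only: reserve more bytes than the node has,
     so the NameNode never picks this DataNode for new block writes
     while existing blocks remain readable. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- 100 TB in bytes; assumed to exceed dn1's total volume size -->
  <value>109951162777600</value>
</property>
```

The DataNode would need a restart for the new reservation to take
effect; the other nodes keep their normal (default or smaller) setting.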

On Mon, May 21, 2012 at 12:37 AM, Kyungyong Lee <ia...@gmail.com> wrote:
> [original question snipped; see the first message in this thread]



-- 
Harsh J