Posted to user@hadoop.apache.org by Caesar Samsi <ca...@mac.com> on 2015/05/26 22:59:51 UTC

How to test DFS?

Hello,

How would I go about confirming that a file has been distributed
successfully to all datanodes?

I would like to demonstrate this capability in a short briefing for my
colleagues.

Can I access the file from the datanode itself? To date I can only access
the files from the master node, not the slaves.

Thank you, Caesar.


Re: How to test DFS?

Posted by Drake민영근 <dr...@nexr.com>.
Hi,

You can use the 'hdfs fsck' command to determine block locations. A sample
run is shown below:

[root@qa-b1 ~]# hdfs fsck /tmp/jack -files -blocks -locations
Connecting to namenode via http://192.168.50.171:50070
FSCK started by root (auth:SIMPLE) from /192.168.50.170 for path /tmp/jack
at Wed May 27 14:51:56 KST 2015
/tmp/jack 517472256 bytes, 4 block(s):  OK
0. BP-1171919055-192.168.50.171-1431320286009:blk_1073742878_2054
len=134217728 repl=3 [192.168.50.174:50010, 192.168.50.172:50010,
192.168.50.173:50010]
1. BP-1171919055-192.168.50.171-1431320286009:blk_1073742879_2055
len=134217728 repl=3 [192.168.50.174:50010, 192.168.50.172:50010,
192.168.50.173:50010]
2. BP-1171919055-192.168.50.171-1431320286009:blk_1073742880_2056
len=134217728 repl=3 [192.168.50.174:50010, 192.168.50.172:50010,
192.168.50.173:50010]
3. BP-1171919055-192.168.50.171-1431320286009:blk_1073742881_2057
len=114819072 repl=3 [192.168.50.174:50010, 192.168.50.172:50010,
192.168.50.173:50010]

The file "/tmp/jack" is split into four blocks. Block 0 is replicated on
three nodes: 192.168.50.174, 192.168.50.172, and 192.168.50.173.
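For the briefing, that report can be turned into a quick pass/fail check by
counting the repl= entries. A minimal POSIX shell sketch; the
check_replication helper is hypothetical, not part of the HDFS tooling:

```shell
# check_replication: read 'hdfs fsck <path> -files -blocks -locations'
# output on stdin and report how many block lines show the expected
# replication factor.
check_replication() {
  expected=${1:-3}        # desired replication factor (HDFS default is 3)
  total=0; ok=0
  while read -r line; do
    case "$line" in
      *"repl="*)
        total=$((total + 1))
        # count blocks whose line reports repl=<expected>
        case "$line" in *"repl=$expected"*) ok=$((ok + 1)) ;; esac ;;
    esac
  done
  echo "$ok/$total blocks at replication $expected"
}

# On a live cluster you would pipe the real report in, e.g.:
#   hdfs fsck /tmp/jack -files -blocks -locations | check_replication 3
# Here, one sample line from the report above:
printf '%s\n' '0. blk_1073742878_2054 len=134217728 repl=3 [nodes]' \
  | check_replication 3
# prints "1/1 blocks at replication 3"
```

A fully distributed file will report the same count on both sides of the
slash; an under-replicated block lowers the first number.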

Thanks.

Drake 민영근 Ph.D
kt NexR

On Wed, May 27, 2015 at 8:58 AM, jay vyas <ja...@gmail.com>
wrote:

>  You could just list the contents of the hadoop data/ directories on the
> individual nodes; somewhere in there the file blocks will be floating
> around.
>
> On Tue, May 26, 2015 at 4:59 PM, Caesar Samsi <ca...@mac.com> wrote:
>
>> Hello,
>>
>>
>>
>> How would I go about confirming that a file has been distributed
>> successfully to all datanodes?
>>
>>
>>
>> I would like to demonstrate this capability in a short briefing for my
>> colleagues.
>>
>>
>>
>> Can I access the file from the datanode itself (to date I can only access
>> the files from the master node, not the slaves)?
>>
>>
>>
>> Thank you, Caesar.
>>
>
>
>
> --
> jay vyas
>

Re: How to test DFS?

Posted by jay vyas <ja...@gmail.com>.
You could just list the contents of the hadoop data/ directories on the
individual nodes; somewhere in there the file blocks will be floating around.
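As a sketch of that, the raw replica files can be listed with find on each
datanode. The data directory comes from dfs.datanode.data.dir in
hdfs-site.xml; /hadoop/dfs/data below is only an assumed example path, and
the list_blocks name is hypothetical:

```shell
# list_blocks: print the raw block replica files stored under a datanode's
# local data directory. Replicas are stored as blk_<id> files, each paired
# with a blk_<id>_<genstamp>.meta checksum file, under the block-pool
# subdirectories.
list_blocks() {
  # exclude the .meta checksum files; ignore permission errors
  find "$1" -type f -name 'blk_*' ! -name '*.meta' 2>/dev/null
}

# Example invocation; replace the path with your dfs.datanode.data.dir value:
list_blocks /hadoop/dfs/data
```

Note these are raw block fragments, not whole files: to read the file itself
from any node, go through the HDFS client (e.g. 'hdfs dfs -cat') rather than
the local disk.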

On Tue, May 26, 2015 at 4:59 PM, Caesar Samsi <ca...@mac.com> wrote:

> Hello,
>
>
>
> How would I go about confirming that a file has been distributed
> successfully to all datanodes?
>
>
>
> I would like to demonstrate this capability in a short briefing for my
> colleagues.
>
>
>
> Can I access the file from the datanode itself (to date I can only access
> the files from the master node, not the slaves)?
>
>
>
> Thank you, Caesar.
>



-- 
jay vyas
