Posted to mapreduce-user@hadoop.apache.org by EdwardKing <zh...@neusoft.com> on 2014/01/26 07:22:21 UTC

question about hadoop dfs

I use Hadoop 2.2.0 to set up a master node and a worker node, as follows:

Live Datanodes : 2
Node    Transferring Address   Last Contact  Admin State  Configured Capacity (GB)  Used (GB)  Non DFS Used (GB)  Remaining (GB)  Used (%)
master  172.11.12.6:50010      1             In Service   16.15                     0.00       2.76               13.39           0.00
node1   172.11.12.7:50010      0             In Service   16.15                     0.00       2.75               13.40           0.00

Then I create an abc.txt file on master 172.11.12.6:
[hadoop@master ~]$ pwd
/home/hadoop
[hadoop@master ~]$ echo "This is a test." >> abc.txt
[hadoop@master ~]$ hadoop dfs -copyFromLocal test.txt
[hadoop@master ~]$ hadoop dfs -ls
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/01/25 22:07:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   2 hadoop supergroup         16 2014-01-25 21:36 abc.txt

[hadoop@master ~]$ rm abc.txt
[hadoop@master ~]$ hadoop dfs -cat abc.txt
This is a test.

My question is:
1. Is supergroup a directory? Where is it located?
2. I search for abc.txt on master 172.11.12.6 and node1 172.11.12.7 with the following command:
[hadoop@master ~]$ find / -name abc.txt 
But I cannot find any abc.txt file. Where is abc.txt stored? And after I erase it with the rm command, why can I still cat it? My OS is CentOS 5.8.

Thanks.
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) 
is intended only for the use of the intended recipient and may be confidential and/or privileged of 
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is 
not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying 
is strictly prohibited, and may be unlawful.If you have received this communication in error,please 
immediately notify the sender by return e-mail, and delete the original message and all copies from 
your system. Thank you. 
---------------------------------------------------------------------------------------------------

Re: question about hadoop dfs

Posted by Jeff Zhang <je...@gopivotal.com>.
You can use the fsck command to find the block locations. Here's one example:

hadoop fsck /user/hadoop/graph_data.txt -blocks -locations -files
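For each file, the -blocks -locations output lists its blocks and, in square brackets, the datanodes holding each replica. A sketch of pulling the locations out of one such line; the sample line below is illustrative only (the block pool ID and block ID are invented for this sketch, and the exact format varies across Hadoop versions):

```shell
# An illustrative fsck "-blocks -locations" output line (IDs invented).
SAMPLE='0. BP-1073741825-172.11.12.6-1390000000000:blk_1073741825_1001 len=16 repl=2 [172.11.12.6:50010, 172.11.12.7:50010]'
LOCATIONS=${SAMPLE#*\[}      # strip everything up to the opening bracket
LOCATIONS=${LOCATIONS%\]*}   # strip the closing bracket
echo "replica locations: $LOCATIONS"
```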



On Sun, Jan 26, 2014 at 2:48 PM, EdwardKing <zh...@neusoft.com> wrote:

> hdfs-site.xml is as follows:
> <configuration>
> <property>
> <name>dfs.name.dir</name>
> <value>file:/home/software/name</value>
> <description> </description>
> </property>
> <property>
> <name>dfs.namenode.secondary.http-address</name>
> <value>master:9001</value>
> </property>
> <property>
> <name>dfs.data.dir</name>
> <value>file:/home/software/data</value>
> </property>
> <property>
> <name>dfs.http.address</name>
> <value>master:9002</value>
> </property>
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
> <property>
> <name>dfs.datanode.du.reserved</name>
> <value>1073741824</value>
> </property>
> <property>
> <name>dfs.block.size</name>
> <value>134217728</value>
> </property>
> <property>
> <name>dfs.permissions</name>
> <value>false</value>
> </property>
> </configuration>
>
> [root@master ~]# cd /home
> [root@master home]# cd software/
> [root@master software]# ls
> data   hadoop-2.2.0         jdk1.7.0_02                name  test.txt
> file:  hadoop-2.2.0.tar.gz  jdk-7u2-linux-i586.tar.gz  temp  tmp
>
> [root@master name]# pwd
> /home/software/name
> [root@master name]# ls
> current  in_use.lock
> [root@master name]#
>
> [root@master software]# pwd
> /home/software
> [root@master software]# cd data
> [root@master data]# ls
> current  in_use.lock
>
> >> the metadata (file name, file path and block locations) is on the
> master; the file data itself is on the datanodes.
>
> Where can I find the abc.txt metadata, such as file name, file path and block
> locations? And the abc.txt file data itself: is it on master 172.11.12.6 or
> node1 172.11.12.7, and in which directory?
>
> Thanks.
>
>
>
> ----- Original Message -----
> From: Jeff Zhang
> To: user@hadoop.apache.org
> Sent: Sunday, January 26, 2014 2:30 PM
> Subject: Re: question about hadoop dfs
>
>
> 1. Is supergroup a directory? Where is it located?
>     supergroup is a user group, not a directory, just like a user group on
> Linux.
>
>
> 2. I search abc.txt on master 172.11.12.6 and node1 172.11.12.7 with the
> following command:
>     the metadata (file name, file path and block locations) is on the master;
> the file data itself is on the datanodes.

Re: question about hadoop dfs

Posted by EdwardKing <zh...@neusoft.com>.
hdfs-site.xml is as follows:
<configuration>
<property>
<name>dfs.name.dir</name>
<value>file:/home/software/name</value>
<description> </description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:/home/software/data</value>
</property>
<property>
<name>dfs.http.address</name>
<value>master:9002</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.datanode.du.reserved</name>
<value>1073741824</value>
</property>
<property>
<name>dfs.block.size</name>
<value>134217728</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
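A small sketch of what two of these settings imply for a file like abc.txt; the values are hardcoded from the hdfs-site.xml above, not read from a live cluster:

```shell
# Values taken from the hdfs-site.xml above.
BLOCK_SIZE=134217728   # dfs.block.size: 128 MB
REPLICATION=2          # dfs.replication

FILE_SIZE=16           # abc.txt is 16 bytes
BLOCKS=$(( (FILE_SIZE + BLOCK_SIZE - 1) / BLOCK_SIZE ))   # ceiling division
REPLICAS=$(( BLOCKS * REPLICATION ))
echo "abc.txt occupies $BLOCKS block(s), stored as $REPLICAS replica(s) in total"
```

So the 16-byte file occupies a single block, stored once on each of the two datanodes.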

[root@master ~]# cd /home
[root@master home]# cd software/
[root@master software]# ls
data   hadoop-2.2.0         jdk1.7.0_02                name  test.txt
file:  hadoop-2.2.0.tar.gz  jdk-7u2-linux-i586.tar.gz  temp  tmp

[root@master name]# pwd
/home/software/name
[root@master name]# ls
current  in_use.lock
[root@master name]# 

[root@master software]# pwd
/home/software
[root@master software]# cd data
[root@master data]# ls
current  in_use.lock
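The block replicas themselves sit below this directory as blk_* files. A sketch, assuming dfs.data.dir=/home/software/data as configured above (the exact subdirectory layout under current/ varies by Hadoop version):

```shell
# List block replica files under the configured datanode directory.
# Each block is a blk_<id> file with a blk_<id>_<genstamp>.meta checksum
# file beside it.
DATA_DIR=/home/software/data
FOUND=$(( $(find "$DATA_DIR" -name 'blk_*' 2>/dev/null | wc -l) ))
echo "found $FOUND blk_* file(s) under $DATA_DIR"
```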

>> the metadata (file name, file path and block locations) is on the master; the file data itself is on the datanodes.

Where can I find the abc.txt metadata, such as file name, file path and block locations? And the abc.txt file data itself: is it on master 172.11.12.6 or node1 172.11.12.7, and in which directory?

Thanks.



----- Original Message ----- 
From: Jeff Zhang 
To: user@hadoop.apache.org 
Sent: Sunday, January 26, 2014 2:30 PM
Subject: Re: question about hadoop dfs


1. Is supergroup a directory? Where is it located?
    supergroup is a user group, not a directory, just like a user group on Linux.
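For reference, the fields of the dfs -ls line quoted in the original mail read just like ls -l output; supergroup sits in the group column:

```shell
# The -ls line from the thread; split it into whitespace-separated fields.
LINE='-rw-r--r--   2 hadoop supergroup         16 2014-01-25 21:36 abc.txt'
set -- $LINE
PERM=$1; REPL=$2; OWNER=$3; GROUP=$4; SIZE=$5
echo "owner=$OWNER group=$GROUP replication=$REPL size=$SIZE bytes"
```

Note that the second field in HDFS -ls output is the replication factor (2 here), not a hard-link count as on a local filesystem.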


2. I search abc.txt on master 172.11.12.6 and node1 172.11.12.7 with the following command:
    the metadata (file name, file path and block locations) is on the master; the file data itself is on the datanodes.
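This split is also why the rm/cat sequence in the original mail behaves as it does: -copyFromLocal copies the bytes into HDFS, so deleting the local file leaves the HDFS copy alone. A no-cluster sketch, using a plain directory as a stand-in for HDFS:

```shell
# Plain cp stands in for "hdfs dfs -copyFromLocal"; no cluster needed.
WORK=$(mktemp -d) && cd "$WORK"
mkdir fake-hdfs
echo "This is a test." > abc.txt
cp abc.txt fake-hdfs/abc.txt      # stand-in for the HDFS upload
rm abc.txt                        # removes only the local copy
cat fake-hdfs/abc.txt             # the "HDFS" copy is still readable
# Removing the real HDFS copy would take: hdfs dfs -rm abc.txt
```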






Re: question about hadoop dfs

Posted by EdwardKing <zh...@neusoft.com>.
hdfs-site.xm is follows:
<configuration>
<property>
<name>dfs.name.dir</name>
<value>file:/home/software/name</value>
<description> </description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:/home/software/data</value>
</property>
<property>
<name>dfs.http.address</name>
<value>master:9002</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.datanode.du.reserved</name>
<value>1073741824</value>
</property>
<property>
<name>dfs.block.size</name>
<value>134217728</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
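Two of these settings decide how a file like abc.txt is stored: dfs.block.size splits the file into blocks, and dfs.replication places a copy of each block on that many datanodes. A quick sketch of the arithmetic (the 300 MB file size is hypothetical, only to make the division visible):

```shell
# Block math sketch -- values taken from the hdfs-site.xml above
size=300000000        # bytes in a hypothetical file
blocksize=134217728   # dfs.block.size (128 MB)
replication=2         # dfs.replication
blocks=$(( (size + blocksize - 1) / blocksize ))   # ceil(size / blocksize)
echo "$blocks blocks, $(( blocks * replication )) block replicas on the cluster"
```

A 16-byte file such as abc.txt is a single block, stored once on each of the two datanodes.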

[root@master ~]# cd /home
[root@master home]# cd software/
[root@master software]# ls
data   hadoop-2.2.0         jdk1.7.0_02                name  test.txt
file:  hadoop-2.2.0.tar.gz  jdk-7u2-linux-i586.tar.gz  temp  tmp

[root@master name]# pwd
/home/software/name
[root@master name]# ls
current  in_use.lock
[root@master name]# 

[root@master software]# pwd
/home/software
[root@master software]# cd data
[root@master data]# ls
current  in_use.lock

>> the metadata (file name, file path and block location) is on the master, the file data itself is on the datanodes.

Where can I find the abc.txt metadata, such as the file name, file path, and block locations? And is the abc.txt file data itself on master 172.11.12.6 or node1 172.11.12.7, and in which directory is it located?
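For reference, on a live cluster the block locations can be queried from the namenode with `hdfs fsck /user/hadoop/abc.txt -files -blocks -locations`, and the block data itself sits under dfs.data.dir (here /home/software/data) on the datanodes as blk_* files, each with a .meta checksum companion. A minimal local mock of that datanode layout (the directory and block ID below are invented purely to show the naming scheme, this is not a real cluster):

```shell
# Mock of a datanode's on-disk layout -- NOT a real cluster; the path and
# block ID are made up to illustrate the blk_* / blk_*.meta naming.
mock=/tmp/mock_datanode/current
mkdir -p "$mock"
echo "This is a test." > "$mock/blk_1073741825"   # block data (raw file bytes)
touch "$mock/blk_1073741825_1001.meta"            # checksum metadata
# This is the kind of search that would turn up abc.txt's bytes on a datanode:
find "$mock" -name 'blk_*' ! -name '*.meta' -exec cat {} \;
```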

Thanks.



----- Original Message ----- 
From: Jeff Zhang 
To: user@hadoop.apache.org 
Sent: Sunday, January 26, 2014 2:30 PM
Subject: Re: question about hadoop dfs


1. Is supergroup a directory?  Where does it locate?
    supergroup is a user group rather than a directory, just like a user group in Linux


2. I search abc.txt on master 172.11.12.6 and node1 172.11.12.7 by following command:
    the metadata (file name, file path and block location) is on the master, the file data itself is on the datanodes.





On Sun, Jan 26, 2014 at 2:22 PM, EdwardKing <zh...@neusoft.com> wrote:

I use Hadoop2.2.0 to create a master node and a sub node,like follows:

Live Datanodes : 2
Node  Transferring Address  Last Contact  Admin State  Configured Capacity (GB)  Used(GB)  Non DFS Used (GB)  Remaining(GB)  Used(%)
master 172.11.12.6:50010         1                In Service           16.15                              0.00              2.76                        13.39                 0.00
node1 172.11.12.7:50010         0                 In Service           16.15                             0.00               2.75                        13.40                 0.00

Then I create a abc.txt file on master 172.11.12.6
[hadoop@master ~]$ pwd
/home/hadoop
[hadoop@master ~]$ echo "This is a test." >> abc.txt
[hadoop@master ~]$ hadoop dfs -copyFromLocal test.txt
[hadoop@master ~]$ hadoop dfs -ls
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/01/25 22:07:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   2 hadoop supergroup         16 2014-01-25 21:36 abc.txt

[hadoop@master ~]$ rm abc.txt
[hadoop@master ~]$ hadoop dfs -cat abc.txt
This is a test.

My question is:
1. Is supergroup a directory?  Where does it locate?
2. I search abc.txt on master 172.11.12.6 and node1 172.11.12.7 by following command:
[hadoop@master ~]$ find / -name abc.txt
But I don't find abc.txt file. Where is the file abc.txt? After I erase it by rm command, I still cat this file? Where is it? My OS is CentOS-5.8.

Thanks.

Re: question about hadoop dfs

Posted by Jeff Zhang <je...@gopivotal.com>.
1. Is supergroup a directory?  Where does it locate?
    supergroup is a user group rather than a directory, just like a user group in Linux

2. I search abc.txt on master 172.11.12.6 and node1 172.11.12.7 by
following command:
    the metadata (file name, file path and block location) is on the master, the file data itself is on the datanodes.
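That split also explains why `rm abc.txt` did not affect `hadoop dfs -cat abc.txt`: -copyFromLocal copies the file into HDFS, so the local file and the HDFS file are independent copies. A plain-filesystem analogy (the /tmp/mock_hdfs directory is illustrative only, standing in for the cluster):

```shell
# Analogy only: /tmp/mock_hdfs stands in for the cluster, cp for -copyFromLocal.
mkdir -p /tmp/mock_hdfs/user/hadoop
echo "This is a test." > /tmp/abc.txt
cp /tmp/abc.txt /tmp/mock_hdfs/user/hadoop/abc.txt   # like: hadoop dfs -copyFromLocal abc.txt
rm /tmp/abc.txt                                      # deletes only the local copy
cat /tmp/mock_hdfs/user/hadoop/abc.txt               # the "HDFS" copy still reads fine
```

To actually remove the file from HDFS you would use `hadoop dfs -rm abc.txt` (or the newer `hdfs dfs -rm`), not the shell's rm.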



On Sun, Jan 26, 2014 at 2:22 PM, EdwardKing <zh...@neusoft.com> wrote:

> I use Hadoop2.2.0 to create a master node and a sub node,like follows:
>
> Live Datanodes : 2
> Node  Transferring Address  Last Contact  Admin State  Configured Capacity
> (GB)  Used(GB)  Non DFS Used (GB)  Remaining(GB)  Used(%)
> master 172.11.12.6:50010         1                In Service
> 16.15                              0.00              2.76
>      13.39                 0.00
> node1 172.11.12.7:50010         0                 In Service
> 16.15                             0.00               2.75
>      13.40                 0.00
>
> Then I create a abc.txt file on master 172.11.12.6
> [hadoop@master ~]$ pwd
> /home/hadoop
> [hadoop@master ~]$ echo "This is a test." >> abc.txt
> [hadoop@master ~]$ hadoop dfs -copyFromLocal test.txt
> [hadoop@master ~]$ hadoop dfs -ls
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> 14/01/25 22:07:00 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   2 hadoop supergroup         16 2014-01-25 21:36 abc.txt
>
> [hadoop@master ~]$ rm abc.txt
> [hadoop@master ~]$ hadoop dfs -cat abc.txt
> This is a test.
>
> My question is:
> 1. Is supergroup a directory?  Where does it locate?
> 2. I search abc.txt on master 172.11.12.6 and node1 172.11.12.7 by
> following command:
> [hadoop@master ~]$ find / -name abc.txt
> But I don't find abc.txt file. Where is the file abc.txt? After I erase it
> by rm command, I still cat this file? Where is it? My OS is CentOS-5.8.
>
> Thanks.
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
>  storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>
