Posted to user@hadoop.apache.org by Oh Seok Keun <oh...@gmail.com> on 2013/09/11 11:37:58 UTC

Can you help me to install HDFS Federation and test?

Hello~ I am Rho, working in Korea.

I am trying to install HDFS Federation (with the 2.1.0-beta version) and test
it. After installing 2.1.0 (for the federation test), I ran into a big problem
when testing a file put.

I run this hadoop command:
./bin/hadoop fs -put test.txt /NN1/

and there is an error message:
"put: Renames across FileSystems not supported"

But ./bin/hadoop fs -put test.txt hdfs://namnode:8020/NN1/  is ok

Why does this happen? This is very sad to me ^^
Can you explain why this happens and give me a solution?


Additionally:

Namenode1 serves its own namespace (named NN1) and Namenode2 serves its own
namespace (named NN2).

On the namenode1 server:
./bin/hadoop fs -mkdir /NN1/nn1_org  is ok, but ./bin/hadoop fs -mkdir
/NN2/nn1_org  is an error.

The error message is "mkdir: `/NN2/nn1_org': No such file or directory"

I think this is expected.

But on the namenode2 server:
./bin/hadoop fs -mkdir /NN1/nn2_org  is ok, but ./bin/hadoop fs -mkdir
/NN2/nn2_org  is an error.
The error message is "mkdir: `/NN2/nn2_org': No such file or directory"

So making a directory under /NN1 is ok but making one under /NN2 is an error,
no matter which namenode server I run the command from.

Why does this happen, and can you give a solution?
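
[Editor's note: for context, a federated setup like the one described above declares both namespaces in hdfs-site.xml, roughly as sketched below. The nameservice IDs and hostnames are illustrative assumptions, not values taken from this thread.]

```xml
<configuration>
  <!-- Two independent namespaces, each served by its own namenode -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>namenode1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>namenode2:8020</value>
  </property>
</configuration>
```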

RE: Can you help me to install HDFS Federation and test?

Posted by Sandeep L <sa...@outlook.com>.
Hi,
Except for "hadoop.tmp.dir" I have not defined anything in core-site.xml. Can you please tell me exactly what I should include in core-site.xml?
Thanks,
Sandeep.

Date: Sat, 21 Sep 2013 23:46:47 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com
To: user@hadoop.apache.org

Sorry for the late reply, I just checked my mail today. Are you using a client-side mount table, as mentioned in the doc you referred to? If you are using client-side mount table configurations in your core-site.xml, you won't be able to create the directories. In that case, first create the folders without the client-side mount table configuration; once the folders are created, you can include the client-side mount table configuration again and restart the namenode, datanode, and all daemons. By the way, which version are you trying to install?
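
[Editor's note: the client-side mount table being discussed lives in core-site.xml and maps path prefixes onto the federated namespaces. A minimal sketch follows; the cluster name, mount points, and hostnames are illustrative assumptions, not values from this thread.]

```xml
<configuration>
  <!-- Make clients resolve plain paths through a viewfs mount table -->
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://ClusterX</value>
  </property>
  <!-- /NN1 and /NN2 become mount points over the two namespaces -->
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./NN1</name>
    <value>hdfs://namenode1:8020/NN1</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.ClusterX.link./NN2</name>
    <value>hdfs://namenode2:8020/NN2</value>
  </property>
</configuration>
```

With fs.defaultFS pointing at a viewfs mount table, a bare path such as /NN1/ is resolved on the client side across mounted filesystems, which may be what produces the "put: Renames across FileSystems not supported" error when a plain path is used while the full hdfs:// URI works.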


On Thu, Sep 19, 2013 at 12:00 PM, Sandeep L <sa...@outlook.com> wrote:




No, it is not appearing from the other name node.
Here is the procedure I followed. On NameNode1 I ran the following commands:
bin/hdfs dfs -mkdir test
bin/hdfs dfs -put dummy.txt test

When I ran the bin/hdfs dfs -ls test command from NameNode1 it listed the file in hdfs, but when I ran the same command from NameNode2 the output was "ls: test : No such file or directory"

Thanks,
Sandeep.

Date: Wed, 18 Sep 2013 16:58:50 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com

To: user@hadoop.apache.org

It should be visible from every namenode machine. Have you tried this command:
 bin/hdfs dfs -ls /yourdirectoryname/


On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sa...@outlook.com> wrote:




Hi,
I resolved the issue. There was a problem with the /etc/hosts file.
One more question I would like to ask is:
I created a directory in HDFS through NameNode1 and copied a file into it. My question is: should it be visible when I run hadoop fs -ls <PathToDirectory> from the NameNode2 machine?

For me it is not visible; can you explain in a bit more detail?

Thanks,
Sandeep.

Date: Tue, 17 Sep 2013 17:56:00 +0530
Subject: Re: Can you help me to install HDFS Federation and test?


From: visioner.sadak@gmail.com
To: user@hadoop.apache.org

1.> Make sure to check the hadoop logs once you start your datanode, at /home/hadoop/hadoop-version(your)/logs
2.> Make sure all the datanodes are mentioned in the slaves file, and that the slaves file is placed on all machines
3.> Check which datanode is not available and check the log file on that machine; are both machines able to do passwordless ssh with each other?
4.> Check your /etc/hosts file; make sure all your node machines' IPs are mentioned there
5.> Make sure you have the datanode folder created as mentioned in the config file

Let me know if you have any problem.
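
[Editor's note: point 5.> refers to the datanode storage directory. In a typical 2.x setup it is declared in hdfs-site.xml and must exist, and be writable by the hadoop user, on every datanode; the path below is illustrative, not taken from this thread.]

```xml
<configuration>
  <!-- Local directories where this datanode stores block data -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hdfs/datanode</value>
  </property>
</configuration>
```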


On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sa...@outlook.com> wrote:






Hi,
I tried to install HDFS federation with the help of the document given by you.
I have a small issue. I used 2 slaves in the setup; both act as a namenode and a datanode.

Now the issue is that when I look at the home pages of both namenodes, only one datanode is appearing. As per my understanding, 2 datanodes should appear on both namenodes' home pages.
Can you please let me know if I am missing anything?

Thanks,
Sandeep.

Date: Wed, 11 Sep 2013 15:34:38 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com



To: user@hadoop.apache.org

Maybe this can help you ....





Re: Can you help me to install HDFS Federation and test?

Posted by Visioner Sadak <vi...@gmail.com>.
Sorry for the late reply, I just checked my mail today. Are you using a
client-side mount table, as mentioned in the doc you referred to? If you are
using client-side mount table configurations in your core-site.xml, you won't
be able to create the directories. In that case, first create the folders
without the client-side mount table configuration; once the folders are
created, you can include the client-side mount table configuration again and
restart the namenode, datanode, and all daemons. By the way, which version are
you trying to install?


On Thu, Sep 19, 2013 at 12:00 PM, Sandeep L <sa...@outlook.com>wrote:

> No, it is not appearing from the other name node.
>
> Here is the procedure I followed:
> In NameNode1 I ran the following commands:
> bin/hdfs dfs -mkdir test
> bin/hdfs dfs -put dummy.txt test
>
> When I ran the bin/hdfs dfs -ls test command from NameNode1 it listed the file
> in hdfs, but when I ran the same command from NameNode2 the output was "ls:
> test : No such file or directory"
>
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Wed, 18 Sep 2013 16:58:50 +0530
>
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> It should be visible from every namenode machine. Have you tried this command:
>
>  bin/hdfs dfs -ls /yourdirectoryname/
>
>
> On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sa...@outlook.com>wrote:
>
> Hi,
>
> I resolved the issue.
> There was a problem with the /etc/hosts file.
>
> One more question I would like to ask is:
>
> I created a directory in HDFS through NameNode1 and copied a file into it. My
> question is: should it be visible when I run *hadoop fs -ls <PathToDirectory> *from
> the NameNode2 machine?
> For me it is not visible; can you explain in a bit more detail?
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Tue, 17 Sep 2013 17:56:00 +0530
>
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> 1.> Make sure to check the hadoop logs once you start your datanode, at
> /home/hadoop/hadoop-version(your)/logs
> 2.> Make sure all the datanodes are mentioned in the slaves file, and that
> the slaves file is placed on all machines
> 3.> Check which datanode is not available and check the log file on that
> machine; are both machines able to do passwordless
> ssh with each other?
> 4.> Check your /etc/hosts file; make sure all your node machines' IPs are
> mentioned there
> 5.> Make sure you have the datanode folder created as mentioned in the config
> file
>
> Let me know if you have any problem.
>
>
> On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sa...@outlook.com>wrote:
>
> Hi,
>
> I tried to install HDFS federation with the help of document given by you.
>
> I have small issue.
> I used 2 slaves in the setup; both will act as namenode and datanode.
> Now the issue is when I am looking at home pages of both namenodes only
> one datanode is appearing.
> As per my understanding 2 datanodes should appear in both namenodes home
> pages.
>
> Can you please let me if am missing any thing?
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Wed, 11 Sep 2013 15:34:38 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
>
> Maybe this can help you ....


Re: Can you help me to install HDFS Federation and test?

Posted by Visioner Sadak <vi...@gmail.com>.
sryy for late reply just checked my mail today are you using client side
mount table just as mentioned in the doc which u reffered  if u r using
client side mount table configurations in u r core-site.xml u wont be able
to create directory in that case first create folder without client
side-mountable configurations then once folders are created u can again
include client side-mountable configurations  and restart namenode,datanode
and all daemons..by the way which version u r trying to install


On Thu, Sep 19, 2013 at 12:00 PM, Sandeep L <sa...@outlook.com>wrote:

> No its not appearing from other name node.
>
> Here is the procedure I followed:
> In NameNode1 I ran following commands
> bin/hdfs dfs -mkdir test
> bin/hdfs dfs -put dummy.txt test
>
> When ran bin/hdfs -ls test command from NameNode1 its listing file fin
> hdfs but if I ran same command from NameNode2 out put is "ls: test : No
> such file or directory"
>
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Wed, 18 Sep 2013 16:58:50 +0530
>
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> It shud be visible from every namenode machine have you tried this commmand
>
>  bin/hdfs dfs -ls /yourdirectoryname/
>
>
> On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sa...@outlook.com>wrote:
>
> Hi,
>
> I resolved the issue.
> There was a problem with the /etc/hosts file.
>
> One more question I would like to ask:
>
> I created a directory in HDFS on NameNode1 and copied a file into it. My
> question is: should it be visible when I run *hadoop fs -ls <PathToDirectory> *from
> the NameNode2 machine?
> For me it is not visible; can you explain in a bit more detail?
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Tue, 17 Sep 2013 17:56:00 +0530
>
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> 1.> make sure to check the hadoop logs once you start your datanode, at
> /home/hadoop/hadoop-<version>/logs
> 2.> make sure all the datanodes are listed in the slaves file and the
> slaves file is placed on all machines
> 3.> check which datanode is not available and check the log file on that
> machine; make sure both machines can do passwordless
> ssh with each other
> 4.> check your /etc/hosts file and make sure every node machine's IP is
> listed there
> 5.> make sure you have the datanode folder created as specified in the
> config file
>
> let me know if you have any problems
>
>
> On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sa...@outlook.com>wrote:
>
> Hi,
>
> I tried to install HDFS federation with the help of the document you gave me.
>
> I have a small issue.
> I used 2 slaves in the setup; both act as namenode and datanode.
> Now the issue is that when I look at the home pages of both namenodes,
> only one datanode appears.
> As per my understanding, both datanodes should appear on both namenodes'
> home pages.
>
> Can you please let me know if I am missing anything?
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Wed, 11 Sep 2013 15:34:38 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
>
> may be this can help you ....
>
>
> On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <oh...@gmail.com> wrote:
>
> Hello~ I am Rho, working in Korea.
>
> I am trying to install HDFS Federation (with the 2.1.0-beta version) and
> test it.
> After installing 2.1.0 (for the federation test), I ran into trouble when
> testing a file put.
>
>
> I run this hadoop command:
> ./bin/hadoop fs -put test.txt /NN1/
>
> and get the error message
> "put: Renames across FileSystems not supported"
>
> But ./bin/hadoop fs -put test.txt hdfs://namenode:8020/NN1/ works.
>
> Why does this happen? This is very sad to me ^^
> Can you explain why it happens and give me a solution?
>
>
> Additionally:
>
> Namenode1 serves its own namespace (named NN1) and Namenode2 serves its
> own namespace (named NN2).
> When making a directory on the namenode1 server,
> ./bin/hadoop fs -mkdir /NN1/nn1_org succeeds, but ./bin/hadoop fs -mkdir
> /NN2/nn1_org fails.
>
> The error message is "/NN2/nn1_org': No such file or directory".
>
> I think this is correct behavior.
>
> But on the namenode2 server,
> ./bin/hadoop fs -mkdir /NN1/nn2_org succeeds, but ./bin/hadoop fs -mkdir
> /NN2/nn2_org fails.
> The error message is "mkdir: `/NN2/nn2_org': No such file or directory".
>
> I expected making a directory under /NN1 to fail and making one under
> /NN2 to succeed.
>
> Why does this happen, and can you give a solution?
>
>
>
>
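
A likely explanation for the "put: Renames across FileSystems not supported" error in the question above (a sketch, not verified against 2.1.0-beta): hadoop fs -put writes the data to a temporary file and then renames it into place, and a bare destination path such as /NN1/ is resolved against the client's default filesystem (fs.defaultFS). If that default is a different filesystem than the target namespace, for example a viewfs mount table or the local filesystem, the final rename would cross filesystems and is rejected. Writing out the full URI, as in hdfs://namenode:8020/NN1/, keeps both ends of the rename on one HDFS namespace, which is why that form works. One workaround is to point the default filesystem directly at the namenode that owns the path (hostname and port illustrative):

```xml
<!-- core-site.xml sketch: resolve bare paths like /NN1/ against a
     single HDFS namespace. Hostname/port are illustrative. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode1:8020</value>
</property>
```

With this setting, ./bin/hadoop fs -put test.txt /NN1/ and the explicit-URI form address the same filesystem, at the cost of losing a unified view over both namespaces.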
>

RE: Can you help me to install HDFS Federation and test?

Posted by Sandeep L <sa...@outlook.com>.
No, it is not appearing from the other namenode.
Here is the procedure I followed. On NameNode1 I ran the following commands:
bin/hdfs dfs -mkdir test
bin/hdfs dfs -put dummy.txt test
When I run bin/hdfs dfs -ls test from NameNode1 it lists the file in HDFS, but if I run the same command from NameNode2 the output is "ls: test : No such file or directory"

Thanks,
Sandeep.

Date: Wed, 18 Sep 2013 16:58:50 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com
To: user@hadoop.apache.org

It should be visible from every namenode machine. Have you tried this command?
 bin/hdfs dfs -ls /yourdirectoryname/

On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sa...@outlook.com> wrote:




Hi,
I resolved the issue. There was a problem with the /etc/hosts file.
One more question I would like to ask:
I created a directory in HDFS on NameNode1 and copied a file into it. My question is: should it be visible when I run hadoop fs -ls <PathToDirectory> from the NameNode2 machine?
For me it is not visible; can you explain in a bit more detail?

Thanks,
Sandeep.

Date: Tue, 17 Sep 2013 17:56:00 +0530
Subject: Re: Can you help me to install HDFS Federation and test?

From: visioner.sadak@gmail.com
To: user@hadoop.apache.org

1.> make sure to check the hadoop logs once you start your datanode, at /home/hadoop/hadoop-<version>/logs
2.> make sure all the datanodes are listed in the slaves file and the slaves file is placed on all machines
3.> check which datanode is not available and check the log file on that machine; make sure both machines can do passwordless ssh with each other
4.> check your /etc/hosts file and make sure every node machine's IP is listed there
5.> make sure you have the datanode folder created as specified in the config file
let me know if you have any problems


On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sa...@outlook.com> wrote:





Hi,
I tried to install HDFS federation with the help of the document you gave me.
I have a small issue. I used 2 slaves in the setup; both act as namenode and datanode.

Now the issue is that when I look at the home pages of both namenodes, only one datanode appears. As per my understanding, both datanodes should appear on both namenodes' home pages.
Can you please let me know if I am missing anything?



Thanks,
Sandeep.

Date: Wed, 11 Sep 2013 15:34:38 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com


To: user@hadoop.apache.org

may be this can help you ....

On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <oh...@gmail.com> wrote:



Hello~ I am Rho, working in Korea.
I am trying to install HDFS Federation (with the 2.1.0-beta version) and test it.

After installing 2.1.0 (for the federation test), I ran into trouble when testing a file put.

I run this hadoop command:
./bin/hadoop fs -put test.txt /NN1/

and get the error message
"put: Renames across FileSystems not supported"

But ./bin/hadoop fs -put test.txt hdfs://namenode:8020/NN1/ works.

Why does this happen? This is very sad to me ^^
Can you explain why it happens and give me a solution?

Additionally:
Namenode1 serves its own namespace (named NN1) and Namenode2 serves its own namespace (named NN2).

When making a directory on the namenode1 server, ./bin/hadoop fs -mkdir /NN1/nn1_org succeeds, but ./bin/hadoop fs -mkdir /NN2/nn1_org fails.
The error message is "/NN2/nn1_org': No such file or directory".

I think this is correct behavior.

But on the namenode2 server, ./bin/hadoop fs -mkdir /NN1/nn2_org succeeds, but ./bin/hadoop fs -mkdir /NN2/nn2_org fails.
The error message is "mkdir: `/NN2/nn2_org': No such file or directory".

I expected making a directory under /NN1 to fail and making one under /NN2 to succeed. Why does this happen, and can you give a solution?


Re: Can you help me to install HDFS Federation and test?

Posted by Visioner Sadak <vi...@gmail.com>.
It should be visible from every namenode machine. Have you tried this command?

 bin/hdfs dfs -ls /yourdirectoryname/


On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sa...@outlook.com>wrote:

> Hi,
>
> I resolved the issue.
> There was a problem with the /etc/hosts file.
>
> One more question I would like to ask:
>
> I created a directory in HDFS on NameNode1 and copied a file into it. My
> question is: should it be visible when I run *hadoop fs -ls <PathToDirectory> *from
> the NameNode2 machine?
> For me it is not visible; can you explain in a bit more detail?
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Tue, 17 Sep 2013 17:56:00 +0530
>
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> 1.> make sure to check the hadoop logs once you start your datanode, at
> /home/hadoop/hadoop-<version>/logs
> 2.> make sure all the datanodes are listed in the slaves file and the
> slaves file is placed on all machines
> 3.> check which datanode is not available and check the log file on that
> machine; make sure both machines can do passwordless
> ssh with each other
> 4.> check your /etc/hosts file and make sure every node machine's IP is
> listed there
> 5.> make sure you have the datanode folder created as specified in the
> config file
>
> let me know if you have any problems
>
>
> On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sa...@outlook.com>wrote:
>
> Hi,
>
> I tried to install HDFS federation with the help of the document you gave me.
>
> I have a small issue.
> I used 2 slaves in the setup; both act as namenode and datanode.
> Now the issue is that when I look at the home pages of both namenodes,
> only one datanode appears.
> As per my understanding, both datanodes should appear on both namenodes'
> home pages.
>
> Can you please let me know if I am missing anything?
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Wed, 11 Sep 2013 15:34:38 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
>
> may be this can help you ....
>
>
> On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <oh...@gmail.com> wrote:
>
> Hello~ I am Rho, working in Korea.
>
> I am trying to install HDFS Federation (with the 2.1.0-beta version) and
> test it.
> After installing 2.1.0 (for the federation test), I ran into trouble when
> testing a file put.
>
>
> I run this hadoop command:
> ./bin/hadoop fs -put test.txt /NN1/
>
> and get the error message
> "put: Renames across FileSystems not supported"
>
> But ./bin/hadoop fs -put test.txt hdfs://namenode:8020/NN1/ works.
>
> Why does this happen? This is very sad to me ^^
> Can you explain why it happens and give me a solution?
>
>
> Additionally:
>
> Namenode1 serves its own namespace (named NN1) and Namenode2 serves its
> own namespace (named NN2).
> When making a directory on the namenode1 server,
> ./bin/hadoop fs -mkdir /NN1/nn1_org succeeds, but ./bin/hadoop fs -mkdir
> /NN2/nn1_org fails.
>
> The error message is "/NN2/nn1_org': No such file or directory".
>
> I think this is correct behavior.
>
> But on the namenode2 server,
> ./bin/hadoop fs -mkdir /NN1/nn2_org succeeds, but ./bin/hadoop fs -mkdir
> /NN2/nn2_org fails.
> The error message is "mkdir: `/NN2/nn2_org': No such file or directory".
>
> I expected making a directory under /NN1 to fail and making one under
> /NN2 to succeed.
>
> Why does this happen, and can you give a solution?
>
>
>
>
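
On the sub-question of only one datanode appearing on each namenode's web page: in a federated cluster every datanode must be told about all nameservices so that it registers with both namenodes. A sketch of the relevant hdfs-site.xml entries, shared by all nodes (the nameservice IDs, hostnames, and ports here are illustrative, not taken from this thread):

```xml
<!-- hdfs-site.xml sketch: both nameservices listed on every node so
     each datanode registers with both namenodes.
     Nameservice IDs, hostnames, and ports are illustrative. -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>namenode1:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1</name>
  <value>namenode1:50070</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>namenode2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns2</name>
  <value>namenode2:50070</value>
</property>
```

If a datanode only knows about one nameservice, it reports to only one namenode, which matches the symptom described earlier in the thread.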

Re: Can you help me to install HDFS Federation and test?

Posted by Visioner Sadak <vi...@gmail.com>.
It shud be visible from every namenode machine have you tried this commmand

 bin/hdfs dfs -ls /yourdirectoryname/


On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sa...@outlook.com>wrote:

> Hi,
>
> I resolved the issue.
> There is some problem with /etc/hosts file.
>
> One more question I would like to ask is:
>
> I created a directory in HDFS of NameNode1 and copied a file into it. My
> question is did it visible when I ran *hadoop fs -ls <PathToDirectory> *from
> NameNode2 machine?
> For me its not visible, can you explain with bit detailed.
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Tue, 17 Sep 2013 17:56:00 +0530
>
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
> 1.> make sure to check hadoop logs once u start u r datanode at
> /home/hadoop/hadoop-version(your)/logs
> 2.> make sure all the datanodes are mentioned in slaves file and slaves
> file is placed on all machines
> 3.> check which datanode is not available check log file of that machine
> are both the  machines able to do a passwordless
> ssh with each other
> 4.> check your etc/hosts file make sure all your node machines ip is
> mentioned there
> 5.> make sure you have datanode folder created as mentioned in config
> file......
>
> let me know if u have any problem......
>
>
> On Tue, Sep 17, 2013 at 2:44 PM, Sandeep L <sa...@outlook.com>wrote:
>
> Hi,
>
> I tried to install HDFS federation with the help of document given by you.
>
> I have small issue.
> I used 2 slave in setup, both will act as namenode and datanode.
> Now the issue is when I am looking at home pages of both namenodes only
> one datanode is appearing.
> As per my understanding 2 datanodes should appear in both namenodes home
> pages.
>
> Can you please let me if am missing any thing?
>
> Thanks,
> Sandeep.
>
>
> ------------------------------
> Date: Wed, 11 Sep 2013 15:34:38 +0530
> Subject: Re: Can you help me to install HDFS Federation and test?
> From: visioner.sadak@gmail.com
> To: user@hadoop.apache.org
>
>
> may be this can help you ....
>
>
> On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <oh...@gmail.com> wrote:
>
> Hello~ I am Rho working in korea
>
> I am trying to install HDFS Federation( with 2.1.0 beta version ) and to
> test
> After 2.1.0 ( for federation  test ) install I have a big trouble when
> file putting test
>
>
> I command to hadoop
> Can you help me to install HDFS Federation and test?
> ./bin/hadoop fs -put test.txt /NN1/
>
> there is error message
> "put: Renames across FileSystems not supported"
>
> But ./bin/hadoop fs -put test.txt hdfs://namnode:8020/NN1/  is ok
>
> Why this is happen? This is very sad to me ^^
> Can you explain why this is happend and give me solution?
>
>
> Additionally
>
> Namenode1 is access to own Namespace( named NN1 ) and Namenode2 is access
> to own Namespace( named NN2 )
> When making directory in namenode1 server
> ./bin/hadoop fs -mkdir /NN1/nn1_org  is ok but ./bin/hadoop fs -mkdir
> /NN2/nn1_org   is error
>
> Error message is "/NN2/nn1_org': No such file or directory"
>
> I think this is very right
>
> But in namenode2 server
> ./bin/hadoop fs -mkdir /NN1/nn2_org  is ok but ./bin/hadoop fs -mkdir
> /NN2/nn2_org is error
> Error message is "mkdir: `/NN2/nn2_org': No such file or directory"
>
> I think when making directory in NN1 is error and making directory in NN2
> is ok
>
> Why this is happen and can you give solution?
>
>
>
>

Re: Can you help me to install HDFS Federation and test?

Posted by Visioner Sadak <vi...@gmail.com>.
It shud be visible from every namenode machine have you tried this commmand

 bin/hdfs dfs -ls /yourdirectoryname/


On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L <sa...@outlook.com>wrote:

> Hi,
>
> I resolved the issue.
> There is some problem with /etc/hosts file.
>
> One more question I would like to ask is:
>
> I created a directory in HDFS of NameNode1 and copied a file into it. My
> question is did it visible when I ran *hadoop fs -ls <PathToDirectory> *from
> NameNode2 machine?
> For me its not visible, can you explain with bit detailed.
>
> Thanks,
> Sandeep.

RE: Can you help me to install HDFS Federation and test?

Posted by Sandeep L <sa...@outlook.com>.
Hi,

I resolved the issue. There was a problem with the /etc/hosts file.

One more question I would like to ask:

I created a directory in HDFS on NameNode1 and copied a file into it. Should
it be visible when I run hadoop fs -ls <PathToDirectory> from the NameNode2
machine? For me it is not visible; can you explain in a bit more detail?

Thanks,
Sandeep.
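
For anyone hitting the same thing: a working /etc/hosts generally has every node listing every node, with host names that exactly match the ones used in the Hadoop config files. The IPs and host names below are placeholders, just to show the layout:

```text
# Hypothetical addresses/names -- replace with your own.
192.168.1.10   namenode1
192.168.1.11   namenode2
192.168.1.20   datanode1
192.168.1.21   datanode2
```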


Re: Can you help me to install HDFS Federation and test?

Posted by Visioner Sadak <vi...@gmail.com>.
1.> Check the Hadoop logs after you start your datanode, under
/home/hadoop/hadoop-<version>/logs.
2.> Make sure all the datanodes are listed in the slaves file, and that the
slaves file is placed on all machines.
3.> Check which datanode is not available and read the log file on that
machine. Are both machines able to do passwordless ssh to each other?
4.> Check your /etc/hosts file and make sure every node machine's IP is
listed there.
5.> Make sure the datanode directory exists as configured in the config
file.

Let me know if you have any problems.
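
On the "only one datanode appearing" symptom specifically: in a federated setup every datanode must know about all the nameservices, so the same hdfs-site.xml (listing both namenodes) has to be present on every datanode. A sketch; the nameservice IDs ns1/ns2, the hosts nn1-host/nn2-host, and the ports are assumptions, not values from this thread:

```xml
<!-- Sketch only: ns1/ns2, the host names, and the ports are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
</property>
<!-- Namenode 1 -->
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>nn1-host:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1</name>
  <value>nn1-host:50070</value>
</property>
<!-- Namenode 2 -->
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>nn2-host:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns2</name>
  <value>nn2-host:50070</value>
</property>
```

With this file on both datanodes (and the daemons restarted), each datanode should register with both namenodes and appear on both web UIs.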


RE: Can you help me to install HDFS Federation and test?

Posted by Sandeep L <sa...@outlook.com>.
Hi,

I tried to install HDFS federation with the help of the document you gave me.

I have a small issue. I used 2 slaves in the setup; both act as namenode and
datanode. The issue is that when I look at the home pages of both namenodes,
only one datanode appears. As per my understanding, 2 datanodes should appear
on both namenodes' home pages.

Can you please let me know if I am missing anything?

Thanks,
Sandeep.


RE: Can you help me to install HDFS Federation and test?

Posted by Sandeep L <sa...@outlook.com>.
Hi,
I tried to install HDFS Federation with the help of the document you gave, and I have a small issue.
I used 2 slaves in the setup; both act as a namenode and a datanode. The issue is that when I look at the home pages of both namenodes, only one datanode appears. As per my understanding, 2 datanodes should appear on both namenodes' home pages.
Can you please let me know if I am missing anything?

Thanks,
Sandeep.
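For a datanode to appear on both namenode pages, it must be configured with the full list of nameservices so it registers with every namenode in the federation. A minimal hdfs-site.xml sketch (the nameservice IDs and hostnames here are assumptions, not taken from this thread):

```xml
<configuration>
  <!-- Every datanode reads this list and registers with each nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>namenode1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>namenode2:8020</value>
  </property>
</configuration>
```

A datanode that shows up on only one namenode's page usually has only that namenode in its own configuration, or was not restarted after the list was changed.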

Date: Wed, 11 Sep 2013 15:34:38 +0530
Subject: Re: Can you help me to install HDFS Federation and test?
From: visioner.sadak@gmail.com
To: user@hadoop.apache.org

Maybe this can help you ...

On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <oh...@gmail.com> wrote:

Hello~ I am Rho, working in Korea.
I am trying to install HDFS Federation (with the 2.1.0 beta version) and to test it.
After the 2.1.0 (federation test) install, I have a big problem with a file put test.

I run this hadoop command:
./bin/hadoop fs -put test.txt /NN1/

and there is an error message:
"put: Renames across FileSystems not supported"

But ./bin/hadoop fs -put test.txt hdfs://namnode:8020/NN1/ is OK.
Why does this happen? This is very sad to me ^^
Can you explain why this happens and give me a solution?

Additionally:
Namenode1 has access to its own namespace (named NN1) and Namenode2 has access to its own namespace (named NN2).

When making a directory on the namenode1 server, ./bin/hadoop fs -mkdir /NN1/nn1_org is OK, but ./bin/hadoop fs -mkdir /NN2/nn1_org is an error.
The error message is "/NN2/nn1_org': No such file or directory"

I think this is quite right.

But on the namenode2 server, ./bin/hadoop fs -mkdir /NN1/nn2_org is OK, but ./bin/hadoop fs -mkdir /NN2/nn2_org is an error.
The error message is "mkdir: `/NN2/nn2_org': No such file or directory"

I expected making a directory in NN1 to be the error and making a directory in NN2 to be OK.
Why does this happen, and can you give me a solution?


Re: Can you help me to install HDFS Federation and test?

Posted by Visioner Sadak <vi...@gmail.com>.
Maybe this can help you ...


On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <oh...@gmail.com> wrote:

> Hello~ I am Rho working in korea
>
> I am trying to install HDFS Federation( with 2.1.0 beta version ) and to
> test
> After 2.1.0 ( for federation  test ) install I have a big trouble when
> file putting test
>
>
> I command to hadoop
> Can you help me to install HDFS Federation and test?
> ./bin/hadoop fs -put test.txt /NN1/
>
> there is error message
> "put: Renames across FileSystems not supported"
>
> But ./bin/hadoop fs -put test.txt hdfs://namnode:8020/NN1/  is ok
>
> Why this is happen? This is very sad to me ^^
> Can you explain why this is happend and give me solution?
>
>
> Additionally
>
> Namenode1 is access to own Namespace( named NN1 ) and Namenode2 is access
> to own Namespace( named NN2 )
> When making directory in namenode1 server
> ./bin/hadoop fs -mkdir /NN1/nn1_org  is ok but ./bin/hadoop fs -mkdir
> /NN2/nn1_org   is error
>
> Error message is "/NN2/nn1_org': No such file or directory"
>
> I think this is very right
>
> But in namenode2 server
> ./bin/hadoop fs -mkdir /NN1/nn2_org  is ok but ./bin/hadoop fs -mkdir
> /NN2/nn2_org is error
> Error message is "mkdir: `/NN2/nn2_org': No such file or directory"
>
> I think when making directory in NN1 is error and making directory in NN2
> is ok
>
> Why this is happen and can you give solution?
>

Re: Can you help me to install HDFS Federation and test?

Posted by Visioner Sadak <vi...@gmail.com>.
Hi Seok, sorry I was unable to answer. Was your problem solved? It was a long
time back, right :(

On Wed, Sep 11, 2013 at 3:07 PM, Oh Seok Keun <oh...@gmail.com> wrote:

> Hello~ I am Rho working in korea
>
> I am trying to install HDFS Federation( with 2.1.0 beta version ) and to
> test
> After 2.1.0 ( for federation  test ) install I have a big trouble when
> file putting test
>
>
> I command to hadoop
> Can you help me to install HDFS Federation and test?
> ./bin/hadoop fs -put test.txt /NN1/
>
> there is error message
> "put: Renames across FileSystems not supported"
>
> But ./bin/hadoop fs -put test.txt hdfs://namnode:8020/NN1/  is ok
>
> Why this is happen? This is very sad to me ^^
> Can you explain why this is happend and give me solution?
>
>
> Additionally
>
> Namenode1 is access to own Namespace( named NN1 ) and Namenode2 is access
> to own Namespace( named NN2 )
> When making directory in namenode1 server
> ./bin/hadoop fs -mkdir /NN1/nn1_org  is ok but ./bin/hadoop fs -mkdir
> /NN2/nn1_org   is error
>
> Error message is "/NN2/nn1_org': No such file or directory"
>
> I think this is very right
>
> But in namenode2 server
> ./bin/hadoop fs -mkdir /NN1/nn2_org  is ok but ./bin/hadoop fs -mkdir
> /NN2/nn2_org is error
> Error message is "mkdir: `/NN2/nn2_org': No such file or directory"
>
> I think when making directory in NN1 is error and making directory in NN2
> is ok
>
> Why this is happen and can you give solution?
>

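To illustrate the mkdir behaviour discussed in this thread, here is a toy model (plain Python, not Hadoop code) of how a ViewFs-style client-side mount table resolves federated paths: a path whose prefix is not mounted on that client fails with "No such file or directory", matching the errors quoted above. The mount points and URIs are assumptions for illustration only.

```python
# Toy model of a ViewFs-style client-side mount table: each mount
# point maps a path prefix to the URI of the namenode that owns it.
MOUNT_TABLE = {
    "/NN1": "hdfs://namenode1:8020/NN1",
    "/NN2": "hdfs://namenode2:8020/NN2",
}

def resolve(path, table=MOUNT_TABLE):
    """Return the backing URI for a federated path.

    Longest-prefix match over the mount points; raises if no mount
    point covers the path, like an unmounted /NN2 on a client.
    """
    for prefix in sorted(table, key=len, reverse=True):
        if path == prefix or path.startswith(prefix + "/"):
            return table[prefix] + path[len(prefix):]
    raise FileNotFoundError("%s: No such file or directory" % path)

# A client whose table only defines /NN1 can resolve /NN1 paths ...
client_nn1_only = {"/NN1": "hdfs://namenode1:8020/NN1"}
print(resolve("/NN1/nn1_org", client_nn1_only))

# ... but /NN2 paths fail, just like
# "mkdir: `/NN2/nn2_org': No such file or directory" in the thread.
try:
    resolve("/NN2/nn2_org", client_nn1_only)
except FileNotFoundError as e:
    print(e)
```

This also explains why both servers behaved the same way: what matters is the mount table on the client issuing the command, not which namenode host the command is run from.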