Posted to mapreduce-user@hadoop.apache.org by janesh mishra <ja...@gmail.com> on 2013/02/15 13:26:48 UTC

getimage failed in Name Node Log

Hi,

I am new to Hadoop and I set up the cluster by following Michael Noll's multi-node tutorial
(http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/).
When I set up single-node Hadoop, everything works fine.

But in the multi-node setup I found that my fsimage and edits files are not
updated on the SNN; the edits roll does happen, since I see an edits.new file on the NN.
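
For example, this is how the two sides can be compared (a rough check, assuming the default
dfs.name.dir and fs.checkpoint.dir locations under hadoop.tmp.dir; adjust the paths if yours differ):

# on the NN: the image/edits that a checkpoint should roll and replace
ls -l /app/hadoop/tmp/dfs/name/current/

# on the SNN: where the SecondaryNameNode keeps its checkpoint copy
ls -l /app/hadoop/tmp/dfs/namesecondary/current/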

Logs From NN:

-------------

2013-02-14 19:13:52,468 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.net.ConnectException: Connection refused

2013-02-14 19:13:52,468 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.net.ConnectException: Connection refused

2013-02-14 19:13:52,477 WARN org.mortbay.log: /getimage: java.io.IOException: GetImage failed. java.net.ConnectException: Connection refused

 Logs From SNN:

--------------

2013-02-14 19:13:52,350 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL namenode:50070putimage=1&port=50090&machine=0.0.0.0&token=32:1989419481:0:1360849430000:1360849122845

2013-02-14 19:13:52,374 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:

2013-02-14 19:13:52,375 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://namenode:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-32:1989419481:0:1360849430000:1360849122845

        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1613)
        at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:160)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.putFSImage(SecondaryNameNode.java:377)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:418)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:312)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:275)
        at java.lang.Thread.run(Thread.java:722)

 My setup includes

Version : hadoop-1.0.4

   1. Name Node (192.168.0.105)

   2. Secondary Name Node (192.168.0.101)

   3. Data Node (192.168.0.100)

Name Node also works as Data Node.

 Conf File For Name Node:

core-site.xml

-------------

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>

  <property>
    <name>fs.checkpoint.period</name>
    <value>300</value>
    <description>The number of seconds between two periodic checkpoints.
    </description>
  </property>
</configuration>

hdfs-site.xml

-------------

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>

  <property>
    <name>dfs.hosts</name>
    <value>/usr/local/hadoop/includehosts</value>
    <description>ips that works as datanode</description>
  </property>

  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>secondarynamenode:50090</value>
    <description>
    The address and the base port on which the dfs NameNode Web UI will listen.
    If the port is 0, the server will start on a free port.
    </description>
  </property>

  <property>
    <name>dfs.http.address</name>
    <value>namenode:50070</value>
    <description>
    The address and the base port on which the dfs NameNode Web UI will listen.
    If the port is 0, the server will start on a free port.
    </description>
  </property>
</configuration>

I sync these files to all my nodes (I read somewhere in the Cloudera docs that
all nodes should have the same conf files).
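
For reference, a minimal sketch of how I sync the conf directory (the hostnames here are just placeholders for the other nodes):

for host in secondarynamenode datanode1; do
  rsync -av /usr/local/hadoop/conf/ ${host}:/usr/local/hadoop/conf/
done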

 Please help me out.

 Thanks

Janesh

RE: getimage failed in Name Node Log

Posted by Vijay Thakorlal <vi...@hotmail.com>.
Hi Janesh,

 

I think your SNN may be starting up with the wrong IP; shouldn't the machine
parameter say 192.168.0.101?

 

http://namenode:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-32:1989419481:0:1360849430000:1360849122845
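
If so, you could try pinning the SNN's HTTP address explicitly in hdfs-site.xml on the SNN host.
A rough sketch (this uses the 1.x property name dfs.secondary.http.address; if I remember correctly,
the dfs.namenode.secondary.http-address spelling in your file is only read by newer releases, so
double-check which one your build honours):

<property>
  <name>dfs.secondary.http.address</name>
  <value>192.168.0.101:50090</value>
  <description>Secondary namenode http server address and port, so the
  checkpoint posts a reachable machine address instead of 0.0.0.0.</description>
</property>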

 

Are you able to retrieve the fsimage from the NN when running the command on the SNN?
Using curl or wget:

 

wget  'http://192.168.0.105:50070/getimage?getimage=1' -O fsimage.dmp
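
or, if wget is not available, the curl equivalent (same URL; -o just names the output file):

curl -o fsimage.dmp 'http://192.168.0.105:50070/getimage?getimage=1'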

 

If this retrieves the image, or even just an error page, then the NN is reachable from the
SNN and the port is definitely open. Otherwise, double-check that the OS firewall
(assuming one is enabled) is not blocking the connection.
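
For example (a rough sketch, assuming Ubuntu hosts as in the tutorial; swap in your own IPs):

# on each node: is a firewall active, and what rules are loaded?
sudo ufw status
sudo iptables -L -n

# quick reachability checks in both directions
nc -zv 192.168.0.105 50070    # from the SNN to the NN http port
nc -zv 192.168.0.101 50090    # from the NN to the SNN http port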

 

That said, the PrivilegedActionException in the NN log may actually mean it is the NN that
cannot connect back to the SNN to pull the merged image, which would fit the machine=0.0.0.0
in the posted URL.

 

Vijay

 
