Posted to mapreduce-user@hadoop.apache.org by Praveen Sripati <pr...@gmail.com> on 2011/07/21 16:36:24 UTC

NodeManager not able to connect to the ResourceManager (MRv2)

Hi,

I followed the instructions below to compile the MRv2 code:

http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/INSTALL

I start the resourcemanager and then the nodemanager, and I see the following
error in the yarn-praveensripati-nodemanager-master.log file.

2011-07-21 19:39:54,125 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Connected
to ResourceManager at 0.0.0.0:8025
2011-07-21 19:39:58,151 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: /0.0.0.0:8025. Already tried 0 time(s).
2011-07-21 19:40:01,154 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: /0.0.0.0:8025. Already tried 1 time(s).
2011-07-21 19:40:05,158 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: /0.0.0.0:8025. Already tried 2 time(s).
...............
...............
2011-07-21 19:40:32,192 ERROR
org.apache.hadoop.yarn.service.CompositeService: Error starting services
org.apache.hadoop.yarn.server.nodemanager.NodeManager
org.apache.avro.AvroRuntimeException:
java.lang.reflect.UndeclaredThrowableException
        at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:139)
...............
...............
Caused by: java.io.IOException: Call to /0.0.0.0:8025 failed on local
exception: java.net.NoRouteToHostException: No route to host
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095)
        at org.apache.hadoop.ipc.Client.call(Client.java:1063)

I did a telnet to port 8025 and the connection succeeded.

telnet 0.0.0.0 8025
Trying 0.0.0.0...
Connected to 0.0.0.0.

telnet 127.0.0.1 8025
Trying 127.0.0.1...
Connected to 127.0.0.1.
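
For reference, 0.0.0.0:8025 looks like the bind-all default for the
ResourceManager's resource-tracker address, so the NodeManager appears to be
dialling the wildcard address rather than the actual ResourceManager host. A
minimal yarn-site.xml sketch to override it (assuming the
yarn.resourcemanager.resource-tracker.address property name used by later YARN
releases; the MR-279 branch key may differ, so check yarn-default.xml in the
build; "rmhost" is a placeholder for the ResourceManager's hostname):

<!-- yarn-site.xml sketch: point NodeManagers at the ResourceManager host
     instead of the 0.0.0.0 bind-all default ("rmhost" is a placeholder). -->
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>rmhost:8025</value>
</property>

Given the NoRouteToHostException, it may also be worth checking that no
firewall on the ResourceManager machine is blocking port 8025.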

Has anyone faced a similar problem? Also, where are all the possible
defaults for MRv2 specified (something like core-default.html)?

Thanks,
Praveen

Problem Setting Up a Cluster with No Public Node Addresses

Posted by Mohamed Riadh Trad <Mo...@inria.fr>.
Dear all,

I am trying to set up Hadoop on an 18-node cluster.

The problem is that the cluster nodes do not have public IPs; for example, in order to access node1, I first have to connect to the front-end node.

Let a be the IP address of my front-end node.

I have configured my .ssh/config with a ProxyCommand (a quick connectivity check is sketched after the node list below):

# Cluster: reach nodeNNN.a by tunnelling through the front-end host a.
# %h is the requested host (e.g. node001.a); the cut strips the ".a" suffix
# and nc then connects from the front-end to that node's sshd on port 22.
Host node*.a
ProxyCommand ssh -q a /usr/bin/nc `echo %h | cut -d"." -f1` 22
#

and added 3 nodes to the master's conf/slaves file:
node001.a
node002.a
node003.a
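
A quick sanity check of the ProxyCommand setup (a sketch, run from the machine
holding this .ssh/config) is to make sure each alias reaches a distinct
machine:

ssh node001.a hostname   # should print node001's own hostname
ssh node002.a hostname   # should print a different hostname
ssh node003.a hostname

If all three print the same name, the aliases are not actually reaching
distinct hosts.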

Then I formatted my NameNode and started DFS.
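
(For completeness, assuming the stock 0.20-style scripts, that amounts to
something like:

bin/hadoop namenode -format   # run once, on the master node
bin/start-dfs.sh              # starts the NameNode, then ssh's to each host in conf/slaves to start a DataNode

Note that only the start scripts go through ssh, and hence through the
ProxyCommand; the DataNode-to-NameNode traffic itself is plain RPC over the
cluster network.)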

The issue is that when I look at the DataNodes, there is only one instead of 3, and the displayed DataNode name changes every time I refresh the DataNode list.

Any idea how to resolve this issue?

I suspect the framework is not differentiating between the hosts since I used the ProxyCommand.
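
For reference, the NameNode's current view of the registered DataNodes can be
listed with the dfsadmin report (a sketch, assuming the stock bin/hadoop
script):

bin/hadoop dfsadmin -report   # prints each registered DataNode with its name/address

If every node shows up under the same name or address, the DataNodes are
effectively overwriting each other's registration, which would match the
symptom above.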

Best regards,



Trad Mohamed Riadh, M.Sc, Ing.
PhD. student
INRIA-TELECOM PARISTECH - ENPC School of International Management

Office: 11-15
Phone: (33)-1 39 63 59 33
Fax: (33)-1 39 63 56 74
Email: riadh.trad@inria.fr
Home page: http://www-rocq.inria.fr/who/Mohamed.Trad/