Posted to user@hadoop.apache.org by Sundeep Kambhmapati <ks...@yahoo.co.in> on 2012/10/20 05:33:51 UTC

Namenode shutting down while creating cluster

Hi Users,
My NameNode shuts down soon after it starts.
Here is the log. Can someone please help me?

2012-10-19 23:20:42,143 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = sk.r252.0/10.0.2.15
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=54310
2012-10-19 23:20:42,741 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/10.0.2.15:54310
2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2012-10-19 23:20:42,747 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-10-19 23:20:43,074 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
2012-10-19 23:20:43,077 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-10-19 23:20:43,077 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2012-10-19 23:20:43,231 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-10-19 23:20:43,239 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94 loaded in 0 seconds.
2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94 saved in 0 seconds.
2012-10-19 23:20:43,612 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 758 msecs
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
2012-10-19 23:20:47,021 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2012-10-19 23:20:47,022 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
2012-10-19 23:20:48,685 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
2012-10-19 23:20:48,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2012-10-19 23:20:48,690 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
        at java.lang.Thread.run(Thread.java:636)
2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: exiting
2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: exiting
2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: exiting
2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: exiting
2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: exiting
2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: exiting
2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: exiting
2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: exiting
2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: exiting
2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: exiting
2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 54310
2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2012-10-19 23:20:48,790 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hadoop.fs.Trash.&lt;init&gt;(Trash.java:62)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

2012-10-19 23:20:48,995 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
************************************************************/

***core-site.xml***
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://sk.r252.0:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>

***mapred-site.xml***
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>sk.r252.0:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>

***hdfs-site.xml***
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
<property>
    <name>dfs.http.address</name>
    <value>0.0.0.0:50070</value>
  </property>
</configuration>

Can someone please help me.

Regards 
Sundeep

Re: Namenode shutting down while creating cluster

Posted by Sundeep Kambhmapati <ks...@yahoo.co.in>.
Thank you Nitin and Balaji.
I was able to resolve the issue by using the IP address instead.
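For anyone hitting the same error: the fix amounts to pointing fs.default.name at the NameNode's IP address (10.0.2.15, taken from the startup log above) rather than the hostname. A sketch of the corrected core-site.xml property:

```xml
<!-- Corrected fs.default.name: use the NameNode's IP (10.0.2.15, from the
     startup log) instead of sk.r252.0, whose all-digit final label cannot
     be parsed as a URI host. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://10.0.2.15:54310</value>
</property>
```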



________________________________
 From: Nitin Pawar <ni...@gmail.com>
To: user@hadoop.apache.org 
Cc: Sundeep Kambhmapati <ks...@yahoo.co.in>; "lists@balajin.net" <li...@balajin.net> 
Sent: Saturday, 20 October 2012 11:39 AM
Subject: Re: Namenode shutting down while creating cluster
 
Sundeep,
<name>fs.default.name</name>
  <value>hdfs://sk.r252.0:54310</value>

Here sk.r252.0 is not a valid domain name unless you have your own
nameserver. Change it to the IP and it should work fine.
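This matches the "Incomplete HDFS URI, no host" error in the log: java.net.URI follows RFC 2396, where the rightmost label of a hostname must not start with a digit, so the host component of hdfs://sk.r252.0:54310 parses as null and DistributedFileSystem.initialize() rejects the URI. A minimal sketch (class name is my own) demonstrating the parsing behavior:

```java
import java.net.URI;

// Shows why hdfs://sk.r252.0:54310 fails: java.net.URI (RFC 2396) rejects a
// hostname whose rightmost label starts with a digit. The URI still parses
// (as a registry-based authority), but its host component comes back null.
public class UriHostDemo {
    public static void main(String[] args) throws Exception {
        URI bad = new URI("hdfs://sk.r252.0:54310");
        // The authority string survives, but no host is recognized.
        System.out.println("authority = " + bad.getAuthority()); // sk.r252.0:54310
        System.out.println("host      = " + bad.getHost());      // null

        // An IP address (or a hostname whose last label starts with a
        // letter) parses normally.
        URI good = new URI("hdfs://10.0.2.15:54310");
        System.out.println("host      = " + good.getHost());     // 10.0.2.15
    }
}
```

Hadoop 0.20's DistributedFileSystem.initialize() checks uri.getHost() and throws this IOException when it is null, which is why switching to the IP (or to a hostname whose last label begins with a letter) lets the NameNode start.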

On Sat, Oct 20, 2012 at 9:07 PM, Balaji Narayanan (பாலாஜி நாராயணன்)
<li...@balajin.net> wrote:
> Sundeep, what happens when you use the ip instead of name in the config?
>
>
> On Saturday, October 20, 2012, Sundeep Kambhmapati wrote:
>>
>> Thank you, Balaji.
>> I checked gethostbyname(sk.r252.0) and it returns 10.0.2.15, which is the
>> IP address I see in ifconfig as well.
>> ssh sk.r252.0 connects to 10.0.2.15
>> ping sk.r252.0 reaches 10.0.2.15.
>>
>> Can you please help me with the issue?
>>
>> Regards
>> Sundeep
>>
>>
>> Seems like an issue with resolution of sk.r252.0. Can you ensure that it
>> resolves?
>>
>> On Friday, October 19, 2012, Sundeep Kambhmapati wrote:
>>
>> Hi Users,
>> My name node is shutting down soon after it is started.
>> Here the log. Can some one please help me.
>>
>> 2012-10-19 23:20:42,143 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>> ************************************************************/
>> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>> Initializing RPC Metrics with hostName=NameNode, port=54310
>> 2012-10-19 23:20:42,741 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>> sk.r252.0/10.0.2.15:54310
>> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=NameNode, sessionId=null
>> 2012-10-19 23:20:42,747 INFO
>> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
>> NameNodeMeterics using context
>> object:org.apache.hadoop.metrics.spi.NullContext
>> 2012-10-19 23:20:43,074 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
>> 2012-10-19 23:20:43,077 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2012-10-19 23:20:43,077 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isPermissionEnabled=true
>> 2012-10-19 23:20:43,231 INFO
>> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
>> Initializing FSNamesystemMetrics using context
>> object:org.apache.hadoop.metrics.spi.NullContext
>> 2012-10-19 23:20:43,239 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>> FSNamesystemStatusMBean
>> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Number of files = 1
>> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Number of files under construction = 0
>> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Image file of size 94 loaded in 0 seconds.
>> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Edits file /app/hadoop/tmp/dfs/
>
>
>
> --
> Thanks
> -balaji
>
> --
> http://balajin.net/blog/
> http://flic.kr/balajijegan



-- 
Nitin Pawar

Re: Namenode shutting down while creating cluster

Posted by Nitin Pawar <ni...@gmail.com>.
Sundeep,
<name>fs.default.name</name>
  <value>hdfs://sk.r252.0:54310</value>

Here sk.r252.0 is not a valid domain name unless you have your own
nameserver. Change it to the IP and it should work fine.

On Sat, Oct 20, 2012 at 9:07 PM, Balaji Narayanan (பாலாஜி நாராயணன்)
<li...@balajin.net> wrote:
> Sundeep, what happens when you use the ip instead of name in the config?
>
>
> On Saturday, October 20, 2012, Sundeep Kambhmapati wrote:
>>
>> Thank you, Balaji.
>> I checked gethostbyname(sk.r252.0) and it returns 10.0.2.15, which is the
>> IP address I see in ifconfig as well.
>> ssh sk.r252.0 connects to 10.0.2.15
>> ping sk.r252.0 reaches 10.0.2.15.
>>
>> Can you please help me with the issue?
>>
>> Regards
>> Sundeep
>>
>>
>> Seems like an issue with resolution of sk.r252.0. Can you ensure that it
>> resolves?
>>
>> On Friday, October 19, 2012, Sundeep Kambhmapati wrote:
>>
>> Hi Users,
>> My name node is shutting down soon after it is started.
>> Here the log. Can some one please help me.
>>
>> 2012-10-19 23:20:42,143 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>> ************************************************************/
>> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>> Initializing RPC Metrics with hostName=NameNode, port=54310
>> 2012-10-19 23:20:42,741 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>> sk.r252.0/10.0.2.15:54310
>> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=NameNode, sessionId=null
>> 2012-10-19 23:20:42,747 INFO
>> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
>> NameNodeMeterics using context
>> object:org.apache.hadoop.metrics.spi.NullContext
>> 2012-10-19 23:20:43,074 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
>> 2012-10-19 23:20:43,077 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2012-10-19 23:20:43,077 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isPermissionEnabled=true
>> 2012-10-19 23:20:43,231 INFO
>> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
>> Initializing FSNamesystemMetrics using context
>> object:org.apache.hadoop.metrics.spi.NullContext
>> 2012-10-19 23:20:43,239 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>> FSNamesystemStatusMBean
>> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Number of files = 1
>> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Number of files under construction = 0
>> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Image file of size 94 loaded in 0 seconds.
>> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Edits file /app/hadoop/tmp/dfs/
>
>
>
> --
> Thanks
> -balaji
>
> --
> http://balajin.net/blog/
> http://flic.kr/balajijegan



-- 
Nitin Pawar

Re: Namenode shutting down while creating cluster

Posted by Nitin Pawar <ni...@gmail.com>.
sundeep
<name>fs.default.name</name>
  <value>hdfs://sk.r252.0:54310</value>

here sk.r252.0 is not a valid domain name unless you have your own
nameserver. You change it to IP and it should work fine.

On Sat, Oct 20, 2012 at 9:07 PM, Balaji Narayanan (பாலாஜி நாராயணன்)
<li...@balajin.net> wrote:
> Sundeep, what happens when you use the ip instead of name in the config?
>
>
> On Saturday, October 20, 2012, Sundeep Kambhmapati wrote:
>>
>> Thank  You Balaji,
>> I checked gethostbyname(sk.r252.0) it gives 10.0.2.15. This is ipaddress i
>> am getting in ifconfig also.
>> ssh sk.r252.0 is sshing to 10.0.2.15
>> ping sk.r252.0 is pinging to 10.0.2.15.
>>
>> Can you please help me with the issue?
>>
>> Regards
>> Sundeep
>>
>>
>> Seems like an issue with resolution of sk.r252.0. Can you ensure that it
>> resolves?
>>
>> On Friday, October 19, 2012, Sundeep Kambhmapati wrote:
>>
>> Hi Users,
>> My name node is shutting down soon after it is started.
>> Here the log. Can some one please help me.
>>
>> 2012-10-19 23:20:42,143 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>> ************************************************************/
>> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>> Initializing RPC Metrics with hostName=NameNode, port=54310
>> 2012-10-19 23:20:42,741 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>> sk.r252.0/10.0.2.15:54310
>> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=NameNode, sessionId=null
>> 2012-10-19 23:20:42,747 INFO
>> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
>> NameNodeMeterics using context
>> object:org.apache.hadoop.metrics.spi.NullContext
>> 2012-10-19 23:20:43,074 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
>> 2012-10-19 23:20:43,077 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2012-10-19 23:20:43,077 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isPermissionEnabled=true
>> 2012-10-19 23:20:43,231 INFO
>> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
>> Initializing FSNamesystemMetrics using context
>> object:org.apache.hadoop.metrics.spi.NullContext
>> 2012-10-19 23:20:43,239 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>> FSNamesystemStatusMBean
>> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Number of files = 1
>> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Number of files under construction = 0
>> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Image file of size 94 loaded in 0 seconds.
>> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
>> Edits file /app/hadoop/tmp/dfs/
>
>
>
> --
> Thanks
> -balaji
>
> --
> http://balajin.net/blog/
> http://flic.kr/balajijegan



-- 
Nitin Pawar

Re: Namenode shutting down while creating cluster

Posted by "Balaji Narayanan (பாலாஜி நாராயணன்)" <li...@balajin.net>.
Sundeep, what happens when you use the IP instead of the hostname in the config?

On Saturday, October 20, 2012, Sundeep Kambhmapati wrote:

> Thank you, Balaji.
> I checked gethostbyname(sk.r252.0) and it returns 10.0.2.15, which is also
> the IP address I see in ifconfig.
> ssh sk.r252.0 connects to 10.0.2.15, and
> ping sk.r252.0 pings 10.0.2.15.
>
> Can you please help me with the issue?
>
> Regards
> Sundeep
>
>
> Seems like an issue with resolution of sk.r252.0. Can you ensure that it
> resolves?
>
> On Friday, October 19, 2012, Sundeep Kambhmapati wrote:
>
> Hi users,
> My NameNode is shutting down soon after it starts.
> Here is the log. Can someone please help me?
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/
>
>

-- 
Thanks
-balaji

--
http://balajin.net/blog/
http://flic.kr/balajijegan

Re: Namenode shutting down while creating cluster

Posted by Nitin Pawar <ni...@gmail.com>.
Sundeep, can you share your core-site.xml and hdfs-site.xml?

Alternatively, set the value of fs.default.name to hdfs://<your-ip>:<port>/ in
core-site.xml.
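Applying that advice, the fs.default.name property in core-site.xml would look like this (using the 10.0.2.15 address reported earlier in the thread):

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://10.0.2.15:54310</value>
</property>
```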
On Oct 20, 2012 6:46 PM, "Sundeep Kambhmapati" <ks...@yahoo.co.in>
wrote:

> Thank you, Balaji.
> I checked gethostbyname(sk.r252.0) and it returns 10.0.2.15, which is also
> the IP address I see in ifconfig.
> ssh sk.r252.0 connects to 10.0.2.15, and
> ping sk.r252.0 pings 10.0.2.15.
>
> Can you please help me with the issue?
>
> Regards
> Sundeep
>
>
>   ------------------------------
> *From:* Balaji Narayanan (பாலாஜி நாராயணன்) <li...@balajin.net>
> *To:* "user@hadoop.apache.org" <us...@hadoop.apache.org>; Sundeep
> Kambhmapati <ks...@yahoo.co.in>
> *Sent:* Saturday, 20 October 2012 2:12 AM
> *Subject:* Re: Namenode shutting down while creating cluster
>
> Seems like an issue with resolution of sk.r252.0. Can you ensure that it
> resolves?
>
> On Friday, October 19, 2012, Sundeep Kambhmapati wrote:
>
> Hi users,
> My NameNode is shutting down soon after it starts.
> Here is the log. Can someone please help me?
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0
> loaded in 0 seconds.
> 2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 saved in 0 seconds.
> 2012-10-19 23:20:43,612 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 758 msecs
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
> = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>  over-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
> 2012-10-19 23:20:47,021 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: starting
> 2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: starting
> 2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: starting
> 2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: starting
> 2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: starting
> 2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: starting
> 2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: starting
> 2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: starting
> 2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: starting
> 2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: starting
> 2012-10-19 23:20:48,685 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
> thread received InterruptedException.java.lang.InterruptedException: sleep
> interrupted
> 2012-10-19 23:20:48,691 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> transactions: 0 Total time for transactions(ms): 0Number of transactions
> batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2012-10-19 23:20:48,690 INFO
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
> Monitor
> java.lang.InterruptedException: sleep interrupted
>         at java.lang.Thread.sleep(Native Method)
>         at
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>         at java.lang.Thread.run(Thread.java:636)
> 2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 54310
> 2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: exiting
> 2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: exiting
> 2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: exiting
> 2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: exiting
> 2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server listener on 54310
> 2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server Responder
> 2012-10-19 23:20:48,790 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>          at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 2012-10-19 23:20:48,995 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
>
> ***core-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://sk.r252.0:54310</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> ***mapred-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>sk.r252.0:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
> </configuration>
>
> ***hdfs-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
> </configuration>
>
> Can someone please help me?
>
> Regards
> Sundeep
>
>
>
> --
> Thanks
> -balaji
> --
> http://balajin.net/blog/
> http://flic.kr/balajijegan
>
>
>

>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 2012-10-19 23:20:48,995 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
>
> ***core-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://sk.r252.0:54310</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> ***mapred-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>sk.r252.0:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
> </configuration>
>
> ***hdfs-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
> </configuration>
>
> Can someone please help me.
>
> Regards
> Sundeep
>
>
>
> --
> Thanks
> -balaji
> --
> http://balajin.net/blog/
> http://flic.kr/balajijegan
>
>
>

Re: Namenode shutting down while creating cluster

Posted by "Balaji Narayanan (பாலாஜி நாராயணன்)" <li...@balajin.net>.
Sundeep, what happens when you use the IP instead of the name in the config?
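For concreteness, a minimal sketch of that change in core-site.xml, using the 10.0.2.15 address shown in the startup log (adjust if your NameNode's IP differs):

```xml
<property>
  <name>fs.default.name</name>
  <!-- raw IP instead of the hostname sk.r252.0 -->
  <value>hdfs://10.0.2.15:54310</value>
</property>
```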

On Saturday, October 20, 2012, Sundeep Kambhmapati wrote:

> Thank you, Balaji.
> I checked gethostbyname(sk.r252.0) and it gives 10.0.2.15. This is the IP
> address I am getting in ifconfig as well.
> ssh sk.r252.0 connects to 10.0.2.15.
> ping sk.r252.0 pings 10.0.2.15.
>
> Can you please help me with the issue?
>
> Regards
> Sundeep
>
>
> Seems like an issue with resolution of sk.r252.0. Can you ensure that it
> resolves?
>
> On Friday, October 19, 2012, Sundeep Kambhmapati wrote:
>
> Hi Users,
> My name node is shutting down soon after it is started.
> Here is the log. Can someone please help me.
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/
>
>

-- 
Thanks
-balaji

--
http://balajin.net/blog/
http://flic.kr/balajijegan

Re: Namenode shutting down while creating cluster

Posted by Nitin Pawar <ni...@gmail.com>.
Sundeep, can you share your core-site.xml and hdfs-site.xml?

Or check/set the value of fs.default.name to hdfs://<your ip>:<port>/ in
core-site.xml.
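The stack trace in the log points at the likely root cause: java.net.URI, which DistributedFileSystem uses to pull the host out of fs.default.name, follows the RFC 2396 hostname grammar, where the last label must start with a letter. A name ending in a purely numeric label like sk.r252.0 therefore fails server-based parsing, getHost() returns null, and the NameNode throws "Incomplete HDFS URI, no host". A small standalone check against the JDK parser (UriHostDemo is just an illustrative class name, not Hadoop code) shows the behavior:

```java
import java.net.URI;

// Demonstrates why "hdfs://sk.r252.0:54310" loses its host:
// RFC 2396 requires a hostname's final label to begin with a letter,
// so the trailing ".0" makes java.net.URI fall back to a
// registry-based authority, and getHost() returns null -- exactly
// the condition DistributedFileSystem.initialize() rejects with
// "Incomplete HDFS URI, no host".
public class UriHostDemo {
    public static void main(String[] args) throws Exception {
        URI bad  = new URI("hdfs://sk.r252.0:54310");
        URI good = new URI("hdfs://10.0.2.15:54310");
        System.out.println("sk.r252.0 host: " + bad.getHost());   // null
        System.out.println("10.0.2.15 host: " + good.getHost());  // 10.0.2.15
    }
}
```

So besides switching fs.default.name to the raw IP, renaming the machine so its hostname does not end in a numeric label (e.g. something like sk-r252-0) would also avoid the parse failure.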
On Oct 20, 2012 6:46 PM, "Sundeep Kambhmapati" <ks...@yahoo.co.in>
wrote:

> Thank you, Balaji.
> I checked gethostbyname(sk.r252.0) and it gives 10.0.2.15. This is the IP
> address I am getting in ifconfig as well.
> ssh sk.r252.0 connects to 10.0.2.15.
> ping sk.r252.0 pings 10.0.2.15.
>
> Can you please help me with the issue?
>
> Regards
> Sundeep
>
>
>   ------------------------------
> *From:* Balaji Narayanan (பாலாஜி நாராயணன்) <li...@balajin.net>
> *To:* "user@hadoop.apache.org" <us...@hadoop.apache.org>; Sundeep
> Kambhmapati <ks...@yahoo.co.in>
> *Sent:* Saturday, 20 October 2012 2:12 AM
> *Subject:* Re: Namenode shutting down while creating cluster
>
> Seems like an issue with resolution of sk.r252.0. Can you ensure that it
> resolves?
>
> On Friday, October 19, 2012, Sundeep Kambhmapati wrote:
>
> Hi Users,
> My name node is shutting down soon after it is started.
> Here is the log. Can someone please help me.
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0
> loaded in 0 seconds.
> 2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 saved in 0 seconds.
> 2012-10-19 23:20:43,612 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 758 msecs
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
> = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>  over-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
> 2012-10-19 23:20:47,021 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: starting
> 2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: starting
> 2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: starting
> 2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: starting
> 2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: starting
> 2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: starting
> 2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: starting
> 2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: starting
> 2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: starting
> 2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: starting
> 2012-10-19 23:20:48,685 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
> thread received InterruptedException.java.lang.InterruptedException: sleep
> interrupted
> 2012-10-19 23:20:48,691 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> transactions: 0 Total time for transactions(ms): 0Number of transactions
> batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2012-10-19 23:20:48,690 INFO
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
> Monitor
> java.lang.InterruptedException: sleep interrupted
>         at java.lang.Thread.sleep(Native Method)
>         at
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>         at java.lang.Thread.run(Thread.java:636)
> 2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 54310
> 2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: exiting
> 2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: exiting
> 2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: exiting
> 2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: exiting
> 2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server listener on 54310
> 2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server Responder
> 2012-10-19 23:20:48,790 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>          at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 2012-10-19 23:20:48,995 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
>
> ***core-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://sk.r252.0:54310</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> ***mapred-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>sk.r252.0:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
> </configuration>
>
> ***hdfs-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
> </configuration>
>
> Can some one please help me.
>
> Regards
> Sundeep
>
>
>
> --
> Thanks
> -balaji
> --
> http://balajin.net/blog/
> http://flic.kr/balajijegan
>
>
>

> </property>
> </configuration>
>
> ***mapred-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>sk.r252.0:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
> </configuration>
>
> ***hdfs-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
> </configuration>
>
> Can someone please help me.
>
> Regards
> Sundeep
>
>
>
> --
> Thanks
> -balaji
> --
> http://balajin.net/blog/
> http://flic.kr/balajijegan
>
>
>

Re: Namenode shutting down while creating cluster

Posted by "Balaji Narayanan (பாலாஜி நாராயணன்)" <li...@balajin.net>.
Sundeep, what happens when you use the IP instead of the name in the config?
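[Editor's note: a possible explanation for why the IP would work where the name does not, offered as a hypothesis rather than something confirmed in this thread. Java's java.net.URI follows the RFC 2396 hostname grammar, which requires the last label of a hostname to start with a letter. The last label of sk.r252.0 is the digit "0", so the authority falls back to registry-based parsing and getHost() returns null even though DNS resolves the name, which matches the "Incomplete HDFS URI, no host" error in the log. A small demo:]

```java
import java.net.URI;

public class UriHostCheck {
    public static void main(String[] args) {
        // The hostname from fs.default.name: its last label "0" is all
        // digits, so java.net.URI refuses to parse "sk.r252.0" as a host
        // and getHost() comes back null -- the exact condition behind
        // "Incomplete HDFS URI, no host" in DistributedFileSystem.
        URI byName = URI.create("hdfs://sk.r252.0:54310");
        System.out.println(byName.getHost());   // prints: null

        // A dotted-quad IP is accepted as a server-based authority.
        URI byIp = URI.create("hdfs://10.0.2.15:54310");
        System.out.println(byIp.getHost());     // prints: 10.0.2.15
    }
}
```

If this is the cause, renaming the host (e.g. to a name whose final label starts with a letter) or using the IP in the config should both work.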

On Saturday, October 20, 2012, Sundeep Kambhmapati wrote:

> Thank you, Balaji.
> I checked gethostbyname(sk.r252.0) and it returns 10.0.2.15, which is also
> the IP address ifconfig reports.
> ssh sk.r252.0 connects to 10.0.2.15.
> ping sk.r252.0 pings 10.0.2.15.
>
> Can you please help me with the issue?
>
> Regards
> Sundeep
>
>
> Seems like an issue with resolution of sk.r252.0. Can you ensure that it
> resolves?
>
> On Friday, October 19, 2012, Sundeep Kambhmapati wrote:
>
> Hi Users,
> My name node is shutting down soon after it is started.
> Here is the log. Can someone please help me.
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/
>
>

-- 
Thanks
-balaji

--
http://balajin.net/blog/
http://flic.kr/balajijegan

Re: Namenode shutting down while creating cluster

Posted by Sundeep Kambhmapati <ks...@yahoo.co.in>.
Thank you, Balaji.
I checked gethostbyname(sk.r252.0) and it returns 10.0.2.15, which is also the IP address ifconfig reports.
ssh sk.r252.0 connects to 10.0.2.15.
ping sk.r252.0 pings 10.0.2.15.

Can you please help me with the issue?

Regards
Sundeep
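
[Editor's note: since name resolution checks out here, the failure is more plausibly in Java's URI parsing than in DNS — the final label of sk.r252.0 is purely numeric, which java.net.URI rejects as a hostname. A minimal workaround sketch, assuming the NameNode IP 10.0.2.15 from the log is stable, is to put the IP in fs.default.name in core-site.xml (and likewise in mapred.job.tracker):]

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://10.0.2.15:54310</value>
  <description>Default filesystem URI. The dotted-quad IP parses as a
  valid host where the name sk.r252.0 does not.</description>
</property>
```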



________________________________
 From: Balaji Narayanan (பாலாஜி நாராயணன்) <li...@balajin.net>
To: "user@hadoop.apache.org" <us...@hadoop.apache.org>; Sundeep Kambhmapati <ks...@yahoo.co.in> 
Sent: Saturday, 20 October 2012 2:12 AM
Subject: Re: Namenode shutting down while creating cluster
 

Seems like an issue with resolution of sk.r252.0. Can you ensure that it resolves?

On Friday, October 19, 2012, Sundeep Kambhmapati  wrote:

Hi Users,
>My name node is shutting down soon after it is started.
>Here is the log. Can someone please help me.
>
>
>2012-10-19 23:20:42,143 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>/************************************************************
>STARTUP_MSG: Starting NameNode
>STARTUP_MSG:   host = sk.r252.0/10.0.2.15
>STARTUP_MSG:   args = []
>STARTUP_MSG:   version = 0.20.2
>STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>************************************************************/
>2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=54310
>2012-10-19 23:20:42,741 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/10.0.2.15:54310
>2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>2012-10-19 23:20:42,747 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
>2012-10-19 23:20:43,074 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
>2012-10-19 23:20:43,077 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>2012-10-19 23:20:43,077 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
>2012-10-19 23:20:43,231 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
>2012-10-19 23:20:43,239 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
>2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
>2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
>2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94 loaded in 0 seconds.
>2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
>2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94 saved in 0 seconds.
>2012-10-19 23:20:43,612 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 758 msecs
>2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
>2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
>2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
>2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
>2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
>2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
>2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
>2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
>2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
>2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
>2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
>2012-10-19 23:20:47,021 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
>2012-10-19 23:20:47,022 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
>2012-10-19 23:20:47,022 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
>2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
>2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
>2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
>2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
>2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
>2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
>2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
>2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
>2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
>2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
>2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
>2012-10-19 23:20:48,685 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
>2012-10-19 23:20:48,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
>2012-10-19 23:20:48,690 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
>java.lang.InterruptedException: sleep interrupted
>        at java.lang.Thread.sleep(Native Method)
>        at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>        at java.lang.Thread.run(Thread.java:636)
>2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
>2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: exiting
>2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: exiting
>2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: exiting
>2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: exiting
>2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: exiting
>2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: exiting
>2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: exiting
>2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: exiting
>2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: exiting
>2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: exiting
>2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 54310
>2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>2012-10-19 23:20:48,790 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
>        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>         at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
>
>2012-10-19 23:20:48,995 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>/************************************************************
>SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
>
>
>***core-site.xml***
><?xml version="1.0"?>
><?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
>
><!-- Put site-specific property overrides in this file. -->
>
>
><configuration>
><property>
>  <name>hadoop.tmp.dir</name>
>  <value>/app/hadoop/tmp</value>
>  <description>A base for other temporary directories.</description>
></property>
>
>
><property>
>  <name>fs.default.name</name>
>  <value>hdfs://sk.r252.0:54310</value>
>  <description>The name of the default file system.  A URI whose
>  scheme and authority determine the FileSystem implementation.  The
>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>  the FileSystem implementation class.  The uri's authority is used to
>  determine the host, port, etc. for a filesystem.</description>
></property>
></configuration>
>
>
>***mapred-site.xml***
><?xml version="1.0"?>
><?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
>
><!-- Put site-specific property overrides in this file. -->
>
>
><configuration>
><property>
>  <name>mapred.job.tracker</name>
>  <value>sk.r252.0:54311</value>
>  <description>The host and port that the MapReduce job tracker runs
>  at.  If "local", then jobs are run in-process as a single map
>  and reduce task.
>  </description>
></property>
></configuration>
>
>
>***hdfs-site.xml***
><?xml version="1.0"?>
><?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
>
><!-- Put site-specific property overrides in this file. -->
>
>
><configuration>
><property>
>  <name>dfs.replication</name>
>  <value>2</value>
>  <description>Default block replication.
>  The actual number of replications can be specified when the file is created.
>  The default is used if replication is not specified in create time.
>  </description>
></property>
><property>
>    <name>dfs.http.address</name>
>    <value>0.0.0.0:50070</value>
>  </property>
></configuration>
>
>
>Can someone please help me.
>
>
>Regards 
>Sundeep
>
>

-- 
Thanks
-balaji
--
http://balajin.net/blog/
http://flic.kr/balajijegan


>2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
>2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
>2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
>2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
>2012-10-19 23:20:47,021 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
>2012-10-19 23:20:47,022 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
>2012-10-19 23:20:47,022 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
>2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
>2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
>2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
>2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
>2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
>2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
>2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
>2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
>2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
>2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
>2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
>2012-10-19 23:20:48,685 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
>2012-10-19 23:20:48,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
>2012-10-19 23:20:48,690 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
>java.lang.InterruptedException: sleep interrupted
>        at java.lang.Thread.sleep(Native Method)
>        at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>        at java.lang.Thread.run(Thread.java:636)
>2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
>2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: exiting
>2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: exiting
>2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: exiting
>2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: exiting
>2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: exiting
>2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: exiting
>2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: exiting
>2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: exiting
>2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: exiting
>2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: exiting
>2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 54310
>2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>2012-10-19 23:20:48,790 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
>        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>         at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
>
>2012-10-19 23:20:48,995 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>/************************************************************
>SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
>
>
>***core-site.xml***
><?xml version="1.0"?>
><?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
>
><!-- Put site-specific property overrides in this file. -->
>
>
><configuration>
><property>
>  <name>hadoop.tmp.dir</name>
>  <value>/app/hadoop/tmp</value>
>  <description>A base for other temporary directories.</description>
></property>
>
>
><property>
>  <name>fs.default.name</name>
>  <value>hdfs://sk.r252.0:54310</value>
>  <description>The name of the default file system.  A URI whose
>  scheme and authority determine the FileSystem implementation.  The
>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>  the FileSystem implementation class.  The uri's authority is used to
>  determine the host, port, etc. for a filesystem.</description>
></property>
></configuration>
>
>
>***mapred-site.xml***
><?xml version="1.0"?>
><?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
>
><!-- Put site-specific property overrides in this file. -->
>
>
><configuration>
><property>
>  <name>mapred.job.tracker</name>
>  <value>sk.r252.0:54311</value>
>  <description>The host and port that the MapReduce job tracker runs
>  at.  If "local", then jobs are run in-process as a single map
>  and reduce task.
>  </description>
></property>
></configuration>
>
>
>***hdfs-site.xml***
><?xml version="1.0"?>
><?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
>
><!-- Put site-specific property overrides in this file. -->
>
>
><configuration>
><property>
>  <name>dfs.replication</name>
>  <value>2</value>
>  <description>Default block replication.
>  The actual number of replications can be specified when the file is created.
>  The default is used if replication is not specified in create time.
>  </description>
></property>
><property>
>    <name>dfs.http.address</name>
>    <value>0.0.0.0:50070</value>
>  </property>
></configuration>
>
>
>Can some one please help me.
>
>
>Regards 
>Sundeep
>
>

-- 
Thanks
-balaji
--
http://balajin.net/blog/
http://flic.kr/balajijegan


Re: Namenode shutting down while creating cluster

Posted by "Balaji Narayanan (பாலாஜி நாராயணன்)" <li...@balajin.net>.
Seems like an issue with resolution of sk.r252.0. Can you ensure that it
resolves?

On Friday, October 19, 2012, Sundeep Kambhmapati wrote:

> Hi Users,
> My name node is shutting down soon after it is started.
> Here the log. Can some one please help me.
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0
> loaded in 0 seconds.
> 2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 saved in 0 seconds.
> 2012-10-19 23:20:43,612 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 758 msecs
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
> = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>  over-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
> 2012-10-19 23:20:47,021 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: starting
> 2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: starting
> 2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: starting
> 2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: starting
> 2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: starting
> 2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: starting
> 2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: starting
> 2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: starting
> 2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: starting
> 2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: starting
> 2012-10-19 23:20:48,685 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
> thread received InterruptedException.java.lang.InterruptedException: sleep
> interrupted
> 2012-10-19 23:20:48,691 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> transactions: 0 Total time for transactions(ms): 0Number of transactions
> batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2012-10-19 23:20:48,690 INFO
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
> Monitor
> java.lang.InterruptedException: sleep interrupted
>         at java.lang.Thread.sleep(Native Method)
>         at
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>         at java.lang.Thread.run(Thread.java:636)
> 2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 54310
> 2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: exiting
> 2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: exiting
> 2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: exiting
> 2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: exiting
> 2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server listener on 54310
> 2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server Responder
> 2012-10-19 23:20:48,790 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>          at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 2012-10-19 23:20:48,995 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
>
> ***core-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://sk.r252.0:54310</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> ***mapred-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>sk.r252.0:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
> </configuration>
>
> ***hdfs-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
> </configuration>
>
> Can someone please help me?
>
> Regards
> Sundeep
>
>

-- 
Thanks
-balaji

--
http://balajin.net/blog/
http://flic.kr/balajijegan
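
For reference, the "Incomplete HDFS URI, no host" error in the quoted log is typically a side effect of the hostname itself rather than DNS: `java.net.URI` parses hosts per RFC 2396, where the final label of a hostname must begin with a letter. Since `sk.r252.0` ends in the all-digit label `0`, the authority is not recognized as a server and `getHost()` returns null, which Hadoop reports as "no host". A minimal sketch (the alternative hostname `skr2520` is illustrative, not from the thread):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class HostParseCheck {
    public static void main(String[] args) throws URISyntaxException {
        // "sk.r252.0" ends in the numeric label "0", so java.net.URI
        // cannot parse the authority as a server-based host and
        // getHost() returns null -- the value Hadoop complains about.
        URI bad = new URI("hdfs://sk.r252.0:54310");
        System.out.println(bad.getHost());   // null

        // A hostname whose final label starts with a letter parses
        // normally, as does a plain IP address.
        URI ok = new URI("hdfs://skr2520:54310");
        System.out.println(ok.getHost());    // skr2520
    }
}
```

Under this assumption, renaming the host (or using its IP address, e.g. `hdfs://10.0.2.15:54310`) in `fs.default.name` and `mapred.job.tracker` would avoid the null-host failure.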

Re: Namenode shutting down while creating cluster

Posted by "Balaji Narayanan (பாலாஜி நாராயணன்)" <li...@balajin.net>.
Seems like an issue with resolution of sk.r252.0. Can you ensure that it
resolves?

On Friday, October 19, 2012, Sundeep Kambhmapati wrote:

> Hi Users,
> My name node is shutting down soon after it is started.
> Here the log. Can some one please help me.
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0
> loaded in 0 seconds.
> 2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 saved in 0 seconds.
> 2012-10-19 23:20:43,612 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 758 msecs
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
> = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>  over-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
> 2012-10-19 23:20:47,021 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: starting
> 2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: starting
> 2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: starting
> 2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: starting
> 2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: starting
> 2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: starting
> 2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: starting
> 2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: starting
> 2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: starting
> 2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: starting
> 2012-10-19 23:20:48,685 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
> thread received InterruptedException.java.lang.InterruptedException: sleep
> interrupted
> 2012-10-19 23:20:48,691 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> transactions: 0 Total time for transactions(ms): 0Number of transactions
> batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2012-10-19 23:20:48,690 INFO
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
> Monitor
> java.lang.InterruptedException: sleep interrupted
>         at java.lang.Thread.sleep(Native Method)
>         at
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>         at java.lang.Thread.run(Thread.java:636)
> 2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 54310
> 2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: exiting
> 2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: exiting
> 2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: exiting
> 2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: exiting
> 2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server listener on 54310
> 2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server Responder
> 2012-10-19 23:20:48,790 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>          at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 2012-10-19 23:20:48,995 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
>
> ***core-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://sk.r252.0:54310</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> ***mapred-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>sk.r252.0:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
> </configuration>
>
> ***hdfs-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
> </configuration>
>
> Can some one please help me.
>
> Regards
> Sundeep
>
>

-- 
Thanks
-balaji

--
http://balajin.net/blog/
http://flic.kr/balajijegan

Re: Namenode shutting down while creating cluster

Posted by "Balaji Narayanan (பாலாஜி நாராயணன்)" <li...@balajin.net>.
Seems like an issue with resolution of sk.r252.0. Can you ensure that it
resolves?

On Friday, October 19, 2012, Sundeep Kambhmapati wrote:

> Hi Users,
> My name node is shutting down soon after it is started.
> Here the log. Can some one please help me.
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0
> loaded in 0 seconds.
> 2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 saved in 0 seconds.
> 2012-10-19 23:20:43,612 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 758 msecs
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
> = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>  over-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
> 2012-10-19 23:20:47,021 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: starting
> 2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: starting
> 2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: starting
> 2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: starting
> 2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: starting
> 2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: starting
> 2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: starting
> 2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: starting
> 2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: starting
> 2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: starting
> 2012-10-19 23:20:48,685 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
> thread received InterruptedException.java.lang.InterruptedException: sleep
> interrupted
> 2012-10-19 23:20:48,691 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> transactions: 0 Total time for transactions(ms): 0Number of transactions
> batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2012-10-19 23:20:48,690 INFO
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
> Monitor
> java.lang.InterruptedException: sleep interrupted
>         at java.lang.Thread.sleep(Native Method)
>         at
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>         at java.lang.Thread.run(Thread.java:636)
> 2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 54310
> 2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: exiting
> 2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: exiting
> 2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: exiting
> 2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: exiting
> 2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server listener on 54310
> 2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server Responder
> 2012-10-19 23:20:48,790 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>          at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 2012-10-19 23:20:48,995 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
>
> ***core-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://sk.r252.0:54310</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> ***mapred-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>sk.r252.0:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
> </configuration>
>
> ***hdfs-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
> </configuration>
>
> Can some one please help me.
>
> Regards
> Sundeep
>
>

-- 
Thanks
-balaji

--
http://balajin.net/blog/
http://flic.kr/balajijegan

Re: Namenode shutting down while creating cluster

Posted by "Balaji Narayanan (பாலாஜி நாராயணன்)" <li...@balajin.net>.
Seems like an issue with resolution of sk.r252.0. Can you ensure that it
resolves?

On Friday, October 19, 2012, Sundeep Kambhmapati wrote:

> Hi Users,
> My name node is shutting down soon after it is started.
> Here the log. Can some one please help me.
>
> 2012-10-19 23:20:42,143 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = sk.r252.0/10.0.2.15
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=54310
> 2012-10-19 23:20:42,741 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/
> 10.0.2.15:54310
> 2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2012-10-19 23:20:42,747 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,074 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2012-10-19 23:20:43,077 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-10-19 23:20:43,231 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2012-10-19 23:20:43,239 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 loaded in 0 seconds.
> 2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0
> loaded in 0 seconds.
> 2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 94 saved in 0 seconds.
> 2012-10-19 23:20:43,612 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 758 msecs
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
> = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> under-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
>  over-replicated blocks = 0
> 2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is
> -1. Opening the listener on 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50070
> webServer.getConnectors()[0].getLocalPort() returned 50070
> 2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
> 2012-10-19 23:20:47,021 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,022 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 54310: starting
> 2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: starting
> 2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: starting
> 2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: starting
> 2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: starting
> 2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: starting
> 2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: starting
> 2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: starting
> 2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: starting
> 2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: starting
> 2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: starting
> 2012-10-19 23:20:48,685 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
> thread received InterruptedException.java.lang.InterruptedException: sleep
> interrupted
> 2012-10-19 23:20:48,691 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> transactions: 0 Total time for transactions(ms): 0Number of transactions
> batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2012-10-19 23:20:48,690 INFO
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
> Monitor
> java.lang.InterruptedException: sleep interrupted
>         at java.lang.Thread.sleep(Native Method)
>         at
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>         at java.lang.Thread.run(Thread.java:636)
> 2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 54310
> 2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 54310: exiting
> 2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 54310: exiting
> 2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 54310: exiting
> 2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 54310: exiting
> 2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 54310: exiting
> 2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 54310: exiting
> 2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 54310: exiting
> 2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server listener on 54310
> 2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server Responder
> 2012-10-19 23:20:48,790 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>          at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 2012-10-19 23:20:48,995 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15
> ************************************************************/
>
> ***core-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://sk.r252.0:54310</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> ***mapred-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>sk.r252.0:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> </property>
> </configuration>
>
> ***hdfs-site.xml***
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>0.0.0.0:50070</value>
>   </property>
> </configuration>
>
> Can someone please help me?
>
> Regards
> Sundeep
>
>
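The line that matters is the ERROR near the end: "Incomplete HDFS URI, no
host: hdfs://sk.r252.0:54310". DistributedFileSystem.initialize() throws
that when java.net.URI hands back a null host. Java's URI parser only
accepts a server-based authority if the hostname matches the RFC 2396
grammar, where the last label must start with a letter — and the last
label of sk.r252.0 is the all-digit "0". So getHost() returns null, the
URI is treated as host-less, and the namenode aborts while starting the
trash emptier. A minimal sketch with plain JDK (no Hadoop needed) that
reproduces the parsing behaviour:

```java
import java.net.URI;

public class HdfsUriCheck {
    public static void main(String[] args) {
        // Last label "0" is all digits, so Java's hostname grammar
        // rejects it; the authority falls back to registry-based and
        // getHost() returns null — the exact condition that makes
        // DistributedFileSystem report "Incomplete HDFS URI, no host".
        URI bad = URI.create("hdfs://sk.r252.0:54310");
        System.out.println("bad host  = " + bad.getHost());   // null

        // An IP address (or a hostname whose labels start with a
        // letter) parses fine and yields a usable host.
        URI good = URI.create("hdfs://10.0.2.15:54310");
        System.out.println("good host = " + good.getHost());  // 10.0.2.15
    }
}
```

So nothing is wrong with the namenode itself — the fs.default.name value
in core-site.xml is the problem. Pointing it at hdfs://10.0.2.15:54310
(the IP already shown in the startup banner), or renaming the machine so
that no hostname label is purely numeric, should get past this error.
Remember to update masters/slaves and mapred.job.tracker to match.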

-- 
Thanks
-balaji

--
http://balajin.net/blog/
http://flic.kr/balajijegan