Posted to hdfs-user@hadoop.apache.org by orahad bigdata <or...@gmail.com> on 2013/08/27 21:23:42 UTC

Namenode joining error in HA configuration

Hi All,

I'm new to Hadoop administration; can someone please help me?

Hadoop version: 2.0.5-alpha, using QJM (Quorum Journal Manager)

I'm getting the error messages below while starting HDFS with 'start-dfs.sh':

2013-01-23 03:25:43,208 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 121
loaded in 0 seconds.
2013-01-23 03:25:43,209 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid
0 from /tmp/hadoop-hadoop/dfs/name/current/fsimage_0000000000000000000
2013-01-23 03:25:43,217 INFO
org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
entries 0 lookups
2013-01-23 03:25:43,217 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
FSImage in 1692 msecs
2013-01-23 03:25:43,552 INFO org.apache.hadoop.ipc.Server: Starting
Socket Reader #1 for port 8020
2013-01-23 03:25:43,592 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemState MBean
2013-01-23 03:25:43,699 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
started for standby state
2013-01-23 03:25:43,822 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
started for active state
2013-01-23 03:25:43,822 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
started for standby state
2013-01-23 03:25:43,824 INFO org.apache.hadoop.ipc.Server: Stopping
server on 8020
2013-01-23 03:25:43,829 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
metrics system...
2013-01-23 03:25:43,831 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
system stopped.
2013-01-23 03:25:43,832 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
system shutdown complete.
2013-01-23 03:25:43,835 FATAL
org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
join
org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
        at org.apache.hadoop.util.Shell.run(Shell.java:129)
        at org.apache.hadoop.fs.DF.getFilesystem(DF.java:108)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:683)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:484)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:448)


Thanks

Re: Namenode joining error in HA configuration

Posted by Harsh J <ha...@cloudera.com>.
Thanks. I don't believe we support Solaris 10 (i.e. we do not
test intensively on it), but the piece behind the failure executes
"bash -c exec 'df -k /namedirpath'". If such a command cannot
run on Solaris 10, that's probably the central issue for you right now
(though there may be other issues as well).
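
A quick way to check that invocation on a given host is to mirror its shape directly (the name-directory path here is the one from the log above; the mkdir is only needed on a fresh machine):

```shell
# Mirror the shape of Hadoop's shell-out: bash -c running an exec'd df.
# A non-zero exit status here is what surfaces in the NameNode log as
# Shell$ExitCodeException.
NAME_DIR=/tmp/hadoop-hadoop/dfs/name
mkdir -p "$NAME_DIR"
bash -c "exec df -k '$NAME_DIR'" && echo "df check passed"
```

If this fails on Solaris 10 while a plain `df -k` succeeds, the problem is in the bash wrapping rather than in df itself.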

On Wed, Aug 28, 2013 at 1:39 PM, orahad bigdata <or...@gmail.com> wrote:
> Hi Harsh,
>
> I'm using Solaris 10 OS and java 1.6.
>
> Yes I'm able to run df command against /tmp/hadoop-hadoop/dfs/name dir as
> hadoop user.
>
> Regards
> Jitendra
>
> On Wed, Aug 28, 2013 at 4:06 AM, Harsh J <ha...@cloudera.com> wrote:
>>
>> What OS are you starting this on?
>>
>> Are you able to run the command "df -k /tmp/hadoop-hadoop/dfs/name/"
>> as user "hadoop"?
>>
>> On Wed, Aug 28, 2013 at 12:53 AM, orahad bigdata <or...@gmail.com>
>> wrote:
>> > Hi All,
>> >
>> > I'm new in Hadoop administration, Can someone please help me?
>> >
>> > Hadoop-version :- 2.0.5 alpha and using QJM
>> >
>> > I'm getting below error messages while starting Hadoop hdfs using
>> > 'start-dfs.sh'
>> >
>> > 2013-01-23 03:25:43,208 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 121
>> > loaded in 0 seconds.
>> > 2013-01-23 03:25:43,209 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid
>> > 0 from /tmp/hadoop-hadoop/dfs/name/current/fsimage_0000000000000000000
>> > 2013-01-23 03:25:43,217 INFO
>> > org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
>> > entries 0 lookups
>> > 2013-01-23 03:25:43,217 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
>> > FSImage in 1692 msecs
>> > 2013-01-23 03:25:43,552 INFO org.apache.hadoop.ipc.Server: Starting
>> > Socket Reader #1 for port 8020
>> > 2013-01-23 03:25:43,592 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>> > FSNamesystemState MBean
>> > 2013-01-23 03:25:43,699 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
>> > started for standby state
>> > 2013-01-23 03:25:43,822 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
>> > started for active state
>> > 2013-01-23 03:25:43,822 INFO
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
>> > started for standby state
>> > 2013-01-23 03:25:43,824 INFO org.apache.hadoop.ipc.Server: Stopping
>> > server on 8020
>> > 2013-01-23 03:25:43,829 INFO
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
>> > metrics system...
>> > 2013-01-23 03:25:43,831 INFO
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
>> > system stopped.
>> > 2013-01-23 03:25:43,832 INFO
>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
>> > system shutdown complete.
>> > 2013-01-23 03:25:43,835 FATAL
>> > org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
>> > join
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>> >         at org.apache.hadoop.util.Shell.run(Shell.java:129)
>> >         at org.apache.hadoop.fs.DF.getFilesystem(DF.java:108)
>> >         at
>> > org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
>> >         at
>> > org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
>> >         at
>> > org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
>> >         at
>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:683)
>> >         at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:484)
>> >         at
>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:448)
>> >
>> >
>> > Thanks
>>
>>
>>
>> --
>> Harsh J
>
>



-- 
Harsh J


Re: Namenode joining error in HA configuration

Posted by orahad bigdata <or...@gmail.com>.
Hi Harsh,

I'm using Solaris 10 and Java 1.6.

Yes, I'm able to run the df command against the /tmp/hadoop-hadoop/dfs/name
directory as the hadoop user.

Regards
Jitendra
On Wed, Aug 28, 2013 at 4:06 AM, Harsh J <ha...@cloudera.com> wrote:

> What OS are you starting this on?
>
> Are you able to run the command "df -k /tmp/hadoop-hadoop/dfs/name/"
> as user "hadoop"?
>
> On Wed, Aug 28, 2013 at 12:53 AM, orahad bigdata <or...@gmail.com>
> wrote:
> > Hi All,
> >
> > I'm new in Hadoop administration, Can someone please help me?
> >
> > Hadoop-version :- 2.0.5 alpha and using QJM
> >
> > I'm getting below error messages while starting Hadoop hdfs using
> 'start-dfs.sh'
> >
> > 2013-01-23 03:25:43,208 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 121
> > loaded in 0 seconds.
> > 2013-01-23 03:25:43,209 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid
> > 0 from /tmp/hadoop-hadoop/dfs/name/current/fsimage_0000000000000000000
> > 2013-01-23 03:25:43,217 INFO
> > org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
> > entries 0 lookups
> > 2013-01-23 03:25:43,217 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> > FSImage in 1692 msecs
> > 2013-01-23 03:25:43,552 INFO org.apache.hadoop.ipc.Server: Starting
> > Socket Reader #1 for port 8020
> > 2013-01-23 03:25:43,592 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> > FSNamesystemState MBean
> > 2013-01-23 03:25:43,699 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
> > started for standby state
> > 2013-01-23 03:25:43,822 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
> > started for active state
> > 2013-01-23 03:25:43,822 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
> > started for standby state
> > 2013-01-23 03:25:43,824 INFO org.apache.hadoop.ipc.Server: Stopping
> > server on 8020
> > 2013-01-23 03:25:43,829 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
> > metrics system...
> > 2013-01-23 03:25:43,831 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> > system stopped.
> > 2013-01-23 03:25:43,832 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> > system shutdown complete.
> > 2013-01-23 03:25:43,835 FATAL
> > org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
> > join
> > org.apache.hadoop.util.Shell$ExitCodeException:
> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
> >         at org.apache.hadoop.util.Shell.run(Shell.java:129)
> >         at org.apache.hadoop.fs.DF.getFilesystem(DF.java:108)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
> >         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:683)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:484)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:448)
> >
> >
> > Thanks
>
>
>
> --
> Harsh J
>

Re: Namenode joining error in HA configuration

Posted by Harsh J <ha...@cloudera.com>.
What OS are you starting this on?

Are you able to run the command "df -k /tmp/hadoop-hadoop/dfs/name/"
as user "hadoop"?
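
The reason for the question: per the stack trace, the NameNode's resource
checker shells out to `df` (org.apache.hadoop.fs.DF via
org.apache.hadoop.util.Shell), and a non-zero exit status from that command
surfaces as Shell$ExitCodeException during startup. A minimal sketch of the
same check, assuming the /tmp/hadoop-hadoop layout from this thread:

```shell
#!/bin/sh
# Reproduce roughly what NameNodeResourceChecker does: run `df` against the
# name directory and inspect the exit status. Paths are the ones from the
# thread; adjust dfs.namenode.name.dir accordingly on a real cluster.
DIR=/tmp/hadoop-hadoop/dfs/name
mkdir -p "$DIR"

if df -k "$DIR" > /dev/null 2>&1; then
  echo "df OK"
else
  # A failure here (wrong PATH for the hadoop user, or a df variant that
  # rejects the flags Hadoop passes) would match the ExitCodeException above.
  echo "df failed with status $?"
fi
```

Running this as the same user that starts the NameNode narrows down whether
the failure is in the environment (PATH, permissions) or in the df
implementation itself.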

On Wed, Aug 28, 2013 at 12:53 AM, orahad bigdata <or...@gmail.com> wrote:
> Hi All,
>
> I'm new in Hadoop administration, Can someone please help me?
>
> Hadoop-version :- 2.0.5 alpha and using QJM
>
> I'm getting below error messages while starting Hadoop hdfs using 'start-dfs.sh'
>
> 2013-01-23 03:25:43,208 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 121
> loaded in 0 seconds.
> 2013-01-23 03:25:43,209 INFO
> org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid
> 0 from /tmp/hadoop-hadoop/dfs/name/current/fsimage_0000000000000000000
> 2013-01-23 03:25:43,217 INFO
> org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
> entries 0 lookups
> 2013-01-23 03:25:43,217 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> FSImage in 1692 msecs
> 2013-01-23 03:25:43,552 INFO org.apache.hadoop.ipc.Server: Starting
> Socket Reader #1 for port 8020
> 2013-01-23 03:25:43,592 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemState MBean
> 2013-01-23 03:25:43,699 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
> started for standby state
> 2013-01-23 03:25:43,822 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
> started for active state
> 2013-01-23 03:25:43,822 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services
> started for standby state
> 2013-01-23 03:25:43,824 INFO org.apache.hadoop.ipc.Server: Stopping
> server on 8020
> 2013-01-23 03:25:43,829 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
> metrics system...
> 2013-01-23 03:25:43,831 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> system stopped.
> 2013-01-23 03:25:43,832 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> system shutdown complete.
> 2013-01-23 03:25:43,835 FATAL
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
> join
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>         at org.apache.hadoop.util.Shell.run(Shell.java:129)
>         at org.apache.hadoop.fs.DF.getFilesystem(DF.java:108)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:683)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:484)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:448)
>
>
> Thanks



-- 
Harsh J
